
Next Generation Stack Computing

CmdrTaco posted more than 8 years ago | from the cpu-of-the-fuuuuture dept.


mymanfryday writes "It seems that stack computers might be the next big thing. Expert Eric Laforest talks about stack computers and why they are better than register-based computers. Apparently NASA uses stack computers in some of their probes. He also claims that a kernel would only be a few kilobytes large! I wonder if Windows will be supported on a stack computer in the future?"




Twelfth of Never (5, Funny)

Tackhead (54550) | more than 8 years ago | (#15883677)

> He also claims that a kernel would only be a few kilobytes large! I wonder if Windows will be supported on a stack computer in the future?"

In Redmond, 640 bytes isn't enough for anybody.

Re:Twelfth of Never (2, Insightful)

jellomizer (103300) | more than 8 years ago | (#15883818)

But in reality this would be a major redesign of the OS, and all apps would need to be recompiled or emulated. Registers are a core part of assembly language. Having to remake Windows would be like making Windows for the PowerPC, if not more difficult.

Re:Twelfth of Never (4, Insightful)

real_b0fh (557599) | more than 8 years ago | (#15883918)

actually, if windows is 'done right' all it would take is a recompile. I don't think there is a lot of assembler code in the windows source that needs to be rewritten, most code will be in C or C++.

Re:Twelfth of Never (0)

Anonymous Coward | more than 8 years ago | (#15883936)

actually, if windows is 'done right' all it would take is a recompile. I don't think there is a lot of assembler code in the windows source that needs to be rewritten, most code will be in C or C++.

As a former Microsoft employee, all I can say is...


Re:Twelfth of Never (1)

OrangeTide (124937) | more than 8 years ago | (#15884117)

I've ran C code on stack computers before. Now what's harder is running on a stack-less computer. register-based cpus have some form of stack typically, but some microcontrollers don't have enough space for a call stack or they have some unusual limitation (like no CALL/RET instructions).

Computer-Science Motto: Back to the Future (0)

Anonymous Coward | more than 8 years ago | (#15883951)

The motto in computer architecture and any other field of art is "Let's go back to the future and see whether any old ideas have traction." Here are some examples of u-turns in art.

1. stack-based computers -> register-based computers -> stack-based computers

2. virtual machine monitor -> operating system (e.g., MS-DOS, Unix, and Windows) directly on top of the hardware -> virtual machine monitor

3. dumb terminal -> personal computer -> thin client

4. Al Capone's favorite car -> Chrysler LeBaron -> PT Cruiser

5. 1967 Camaro with aggressive, muscular form -> 1982 Camaro with slick, crack-cocaine form -> 2009 Camaro with aggressive, muscular form

6. 1960's "Mission Impossible" -> lots of boring TV-series/theater-movies -> 1990's "Mission Impossible"

7. 1960's "Bewitched" -> lots of boring TV-series/theater-movies -> 2000's "Bewitched"

8. 1970's "Brady Bunch" -> lots of boring TV-series/theater-movies -> 1990's "Brady Bunch"

In art, what goes around comes around. Note that I said, "computer architecture", not "computer science". Computer science is real science. Computer architecture is not. It is art. Just see the 8 items in the above list.

Re:Computer-Science Motto: Back to the Future (2, Funny)

Andrew Kismet (955764) | more than 8 years ago | (#15884114)

I cannot consider your post valid, as you've claimed that 2000's "Bewitched" was 'art'...

All your architecture are belong to us (-1, Offtopic)

davidwr (791652) | more than 8 years ago | (#15883679)

first post?

There goes my karma.

Re:All your architecture are belong to us (-1, Offtopic)

neonprimetime (528653) | more than 8 years ago | (#15883754)

Why do you ask? I hate it when f@!#ing losers ask, am I first post? Don't ask, say it like you mean it. I am first post. I am awesome. I want people to notice me. I am important. First post is my friend. Just stop asking, "ooohh, did I get first post?" Instead, just say it: "I'm better than everybody else cause I'm going to get first post!"

Re:All your architecture are belong to us (0, Offtopic)

ahsile (187881) | more than 8 years ago | (#15883777)

Dammit, try and make a half-assed witty remark... and then add first pr0st at the end. Say something speculative about the article, which will get you modded high at first, and by the time anyone reads the article and realises you were wrong, it's too late.

This way you get first post, and karma whore at the same time. You will soon learn, my pupil.

I Know... (-1, Flamebait)

pengolodh (698223) | more than 8 years ago | (#15883690)

I really should RTFA, but it isn't going to be easy. Could you just read it for me?

Re:I Know... (0)

Anonymous Coward | more than 8 years ago | (#15883710)

It says blah blah blah, here's some videos to torrent.

Re:I Know... (1)

SnowZero (92219) | more than 8 years ago | (#15883849)

do you have a mirror for that? I'm too lazy to look.


Assembly Code was fun (1, Informative)

neonprimetime (528653) | more than 8 years ago | (#15883708)

Sounds like fun. I remember writing RISC assembly code. I felt like a real man back then. C# just doesn't cut it.

fyi - The Open Office format Slide links don't work, so sadly I had to open the PPT file in Open Office instead.

Re:Assembly Code was fun (4, Funny)

hal2814 (725639) | more than 8 years ago | (#15883972)

RISC assembly code? That's so weak. I'd rather spend a day writing an assembly routine that has an equivalent single obscure machine instruction I didn't know about beforehand, thank you very much.

Oh? (3, Funny)

qbwiz (87077) | more than 8 years ago | (#15883713)

I thought the 387 and Burroughs B5000 were odd, antiquated architectures, but apparently they're the wave of the future.

Re:Oh? (1)

Rob T Firefly (844560) | more than 8 years ago | (#15883757)

Everything old is new again! *kick-starts the old ENIAC*

Re:Oh? (1)

HiThere (15173) | more than 8 years ago | (#15883938)

ENIAC's a great machine. All you need to do is re-implement it in nano-technology. Unfortunately, it's analog, so you won't be able to translate C to run on it.

Re:Oh? (2, Funny)

Rob T Firefly (844560) | more than 8 years ago | (#15884065)

I had a good C interpreter ported over to punch cards, but one day I accidentally dropped crate #147 off the forklift and they went everywhere. Damn my lazy habit of not labelling my media!

I should be finished unshuffling them in another six or seven months.

Re:Oh? (0, Redundant)

cnettel (836611) | more than 8 years ago | (#15884156)

ENIAC was digital. That, and electronic, and programmable (and Turing complete).

Don't forget the classic HP3000 (1)

Nick Driver (238034) | more than 8 years ago | (#15883929)

Don't forget the venerable HP3000 "Classic" machines like the Series 68 and 70.

Re:Oh? (0, Offtopic)

mclaincausey (777353) | more than 8 years ago | (#15883961)

TheFNORD! last time I wrote anyFNORD!thing in a stack-based languaFNORD!ge it was silly little progrFNORD!ams in Reverse Polish LiFNORD!sp on an HP-48.

Re:Oh? (1)

OrangeTide (124937) | more than 8 years ago | (#15884152)

It's like how bell-bottoms were poised to make a comeback in the early 1990s. It was just a big scam cooked up by the fashion industry because they were out of ideas (again).

Also, the Java JVM is a stack architecture, and we have lots of microcontrollers that run JVM bytecode natively. So basically you are all way behind the times on this "stack CPU fad".

wikipedia link (5, Informative)

whitelines (715819) | more than 8 years ago | (#15883727)

I didn't know either: see the Wikipedia article on stack machines.

Re:wikipedia link (0)

Anonymous Coward | more than 8 years ago | (#15883832)

Damn, when I checked out the Wiki site, it said

"In computer science, a stack machine is a model of computation in which the computer's memory takes the form of one or more horse penises."

Good old Wikipedia!

Re:wikipedia link (1, Offtopic)

The MAZZTer (911996) | more than 8 years ago | (#15883864)

Some idiot changed "Turing machine" to "Penis machine". But then someone else changed it back before I could... good ol' wikipedia indeed.

Re:wikipedia link (0)

Anonymous Coward | more than 8 years ago | (#15884086)

Odd, shouldn't a stack(ed) machine be related to large boobs rather than penises?

A well hung machine?

Re: transputer wikipedia link (2, Interesting)

Kevster (102318) | more than 8 years ago | (#15884105)

I'm surprised no one's mentioned the transputer.

Re:wikipedia link (0)

Anonymous Coward | more than 8 years ago | (#15884130)

A better Wiki reference for stacked machines.

Bit-torrent (1)

KingEomer (795285) | more than 8 years ago | (#15883741)

Well, time to test out the CSC's new torrenting capabilities.

Size and functionality (4, Insightful)

Angst Badger (8636) | more than 8 years ago | (#15883742)

He also claims that a kernel would only be a few kilobytes large!

I've seen sub-1k kernels for FORTH systems before. The question is, how much functionality do you want to wrap into that kernel? More capable kernels would, of course, be correspondingly larger.

That said, stack computing and languages like FORTH have long been underrated. Depending on the application, the combination of stack computers and postfix languages can be quite powerful.
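The parent's point about the power of postfix languages is easy to see in code. Below is a minimal, illustrative sketch (in Python rather than FORTH) of a postfix evaluator: one data stack, zero operand addressing. The name `eval_rpn` and the token format are made up for the example.

```python
# Minimal sketch of postfix (RPN) evaluation -- the model a FORTH
# kernel interprets in software and a stack CPU implements in
# hardware. Python stands in purely for illustration.
def eval_rpn(tokens):
    stack = []
    ops = {
        "+": lambda a, b: a + b,
        "-": lambda a, b: a - b,
        "*": lambda a, b: a * b,
        "/": lambda a, b: a // b,
    }
    for tok in tokens:
        if tok in ops:
            b = stack.pop()          # top of stack is the second operand
            a = stack.pop()
            stack.append(ops[tok](a, b))
        else:
            stack.append(int(tok))   # literals are simply pushed
    return stack.pop()

# "2 3 + 4 *" is postfix for (2 + 3) * 4
print(eval_rpn("2 3 + 4 *".split()))  # -> 20
```

Note there is no operator precedence and no parentheses: evaluation order is entirely encoded in the token order, which is why such kernels can stay tiny.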

Re:Size and functionality (1)

mrchaotica (681592) | more than 8 years ago | (#15883819)

Depending on the application, the combination of stack computers and postfix languages can be quite powerful.

Why would the type of notation matter? Couldn't you program a stack computer just as well with a prefix functional language like Scheme?

For the same reason language choice always matters (1)

porkchop_d_clown (39923) | more than 8 years ago | (#15883923)

because you should choose a language that fits the problem, not the reverse.

If the problem is "make this work on a stack based machine" then look out! You're gonna have aging LISP programmers crawling out of the woodwork to show off their obsolete, er, elite, programming skills.

Re:For the same reason language choice always matt (4, Informative)

HiThere (15173) | more than 8 years ago | (#15884070)

Sorry, but LISP (though I don't mean Common LISP) is just as much a stack language as FORTH. I think the first LISP that wasn't was LISP 1.5...but I'm rather vague on LISP history. Still, s-expressions are as stack oriented as FORTH is. The interesting thing is the first Algol 60 compiler (well...really an interpreter) I ever used was a stack machine. (That was why it was an interpreter. The real computer was an IBM 7090/7094 BCS system so it ran a stack machine program that Algol was compiled to run on. Whee!) So if you want a good stack computer language you could pick Algol 60. But FORTH is easier, and could even be the assembler language.

OTOH, most FORTHs I've seen use 3 or more stacks. I.e., most of them have a separate stack for floats. What would be *really* nice is if someone built a machine that used Neon as its assembler. Neon is/was an object-oriented dialect of FORTH for the Mac that allowed the user to specify early or late binding for variables. It was developed by Kyria Systems, a now-defunct software house. Unfortunately Neon died during a transition to MSWind95. I understand that it is survived by MOPS, but I've never had a machine that MOPS would run on, so I don't know how similar it was.

I think that FORTH would make a truly great assembler...and the more so if that dialect of FORTH were NEON. But I forget how many stacks it used. At least three, but I have a vague memory that it was actually four. The main stack, the return stack, the floating stack, and ??the Object stack??...I don't know.

Re:Size and functionality (4, Insightful)

merlin_jim (302773) | more than 8 years ago | (#15883996)

Couldn't you program a stack computer just as well with a prefix functional language like Scheme?

Sure you can - and it compiles to postfix notation anyways, rather inefficiently I might add (get it, add????)

let's say you wanted to write a function like:
function addsubandmultiply(b, c, d, e) {
a = (b + c) * (d - e);
return a;
}

and you've got assembly-level instructions such as mov, add, sub, mult, push, and pop, as well as the very stack-centric stor and lod, allowing you to move one or more stack variables to memory and the reverse.

A typical register based computer might compile the above as:

pop b
pop c
pop d
pop e
mov b, ax
mov c, bx
add bx
mov ax, temp_memory
mov d, ax
mov e, bx
sub bx
mov temp_memory, bx
mult bx
push a

Whereas a stack-based computer might compile as:

stor temp_memory
lod temp_memory

In a stack-based computer, operations are carried out directly on your stack... it's very convenient, since most languages compile function calls to use the stack anyways, and as you can see not having to deal with an accumulator register makes for much terser code. Between 20-40% of your compiled code is spent moving data in and out of the accumulator register, since most instructions depend on specific data being in that register - to the point that Intel introduced zero-cycle add/mov functionality in the P4 line. Basically, if your code performs an add and then movs ax immediately out to memory (like the above code - possibly the most common arithmetic operation in compiled code), and the pipeline and data caches are all available, the P4 will execute both instructions with enough time to put something else in the instruction pipeline that cycle. It's not really a zero-cycle function - you can do something like 2.5 instructions (add, mov, add, mov, add) per cycle if you stack them back to back, for instance...

Yes, Intel released a benchmark for it. No, I can't imagine why you would want to keep adding and moving the results around memory - maybe some esoteric functions like a Fibonacci generator or even a DSP algorithm of some sort might need to do it, but I don't think it'll be all that often... or that any compiler would have an optimisation to specifically output that sequence if appropriate...
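The stack-machine listing in the comment above is truncated in this archive, so here is a hedged reconstruction of the idea in Python: a zero-operand machine evaluating (b + c) * (d - e). The opcode names are illustrative, not any real ISA.

```python
# Toy zero-operand stack machine evaluating (b + c) * (d - e).
# Literals in the program are pushed; named ops pop two values and
# push the result. No registers, no accumulator shuffling.
def run(program):
    stack = []
    binops = {"add": lambda a, b: a + b,
              "sub": lambda a, b: a - b,
              "mult": lambda a, b: a * b}
    for op in program:
        if op in binops:
            b = stack.pop()          # top of stack = right operand
            a = stack.pop()
            stack.append(binops[op](a, b))
        else:
            stack.append(op)         # a literal: push it
    return stack.pop()

# (3 + 4) * (10 - 2) -> 7 * 8 -> 56, with zero moves to temp_memory
print(run([3, 4, "add", 10, 2, "sub", "mult"]))  # -> 56
```

Compare the instruction count with the register listing above: the operand-free encoding is where the code-density advantage of stack machines comes from.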

Linking to 300MB video files from Slashdot? (2, Funny)

Colin Smith (2679) | more than 8 years ago | (#15883743)

Someone's having a larf. Oh you do crack me up Messrs mymanfryday and CmdrTaco.

Please try the bittorrent. No, wait... Teach em a lesson, make em burn.


.NET Compatibility (4, Interesting)

DaHat (247651) | more than 8 years ago | (#15883744)

Interestingly enough the Microsoft Intermediate Language (MSIL) that .NET apps are compiled to before being JITed into machine code is actually built around a stack based system as well... No doubt porting the .NET Framework over to such a system would be quite easy... and give much in the way of performance boosts (especially on startup).

Of course... that would still depend on a version of Windows for it to run on.

Re:.NET Compatibility (3, Informative)

jfengel (409917) | more than 8 years ago | (#15883788)

The Java Virtual Machine is also sort of stack-based. The JVM bytecode set uses stack operations but the safety checks that it runs make it equivalent to a sliding set of registers not unlike, say, the SPARC architecture. A JIT implementation could do away with the stack, at least in the individual method calls, though the call-return stack would still have to be there.
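A rough sketch of what the parent describes: a toy translator that "does away with the stack" by tracking, at translation time, which virtual register holds each stack slot. The bytecode names only mimic JVM style (iconst/iadd/imul); this is illustrative, not real JVM bytecode or a real JIT.

```python
# Toy "JIT" pass: stack bytecode in, register-style code out.
# The stack exists only at translation time, as a list of the
# register names currently holding each slot.
def jit(bytecode):
    stack = []        # compile-time stack of register names
    regs = 0
    out = []

    def fresh():
        nonlocal regs
        regs += 1
        return f"r{regs}"

    for op, *arg in bytecode:
        if op == "iconst":           # push a constant
            r = fresh()
            out.append(f"{r} = {arg[0]}")
            stack.append(r)
        elif op in ("iadd", "imul"):  # pop two, push result
            b, a = stack.pop(), stack.pop()
            r = fresh()
            sym = "+" if op == "iadd" else "*"
            out.append(f"{r} = {a} {sym} {b}")
            stack.append(r)
    return out

# (2 + 3) * 4 as stack bytecode -> pure register code, no stack left
for line in jit([("iconst", 2), ("iconst", 3), ("iadd",),
                 ("iconst", 4), ("imul",)]):
    print(line)
```

Within a single method this removes every stack operation, which is the parent's point; the call/return stack would of course remain.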

Re:.NET Compatibility (1)

evil_Tak (964978) | more than 8 years ago | (#15884053)

Nah, just some platform for []

They're great (5, Funny)

TechyImmigrant (175943) | more than 8 years ago | (#15883746)

Mathematicians like stack computers because it's easier to formally prove the behaviour of algorithms using stacks.
Hardware engineers like stack computers because the hardware is interesting and easy to design.
Investors hate them because they keep losing money on them.

We've heard this before... (3, Funny)

creimer (824291) | more than 8 years ago | (#15883750)

Apparently NASA uses stack computers in some of their probes.

In space no one can hear you blue screen of death. Unless you work for Lucas Films.

PC Stacks (5, Funny)

celardore (844933) | more than 8 years ago | (#15883758)

I once had a job where I had to sort through stacks of computers. Overall the stacks were pretty useless, a bunch of burnt-out 286s. Putting all your redundant computing power into a stack doesn't necessarily make it better!

Awesome (4, Insightful)

LinuxFreakus (613194) | more than 8 years ago | (#15883760)

Does this mean my old HP48GX will be considered cutting edge? I should get ready to sell it on eBay when the craze hits! All my old classmates will be forced to allow me the last laugh after I was on the receiving end of much ridicule for using the HP when the TI was the only thing "officially" endorsed by all the calculus textbooks. I don't know if I could ever part with it though. I still use it almost daily, and the thing continues to kick ass.

Re:Awesome (0)

Anonymous Coward | more than 8 years ago | (#15883913)

You should have beaten them unconscious with your calculator, then resumed performing real mathematics. HP calculators were built like tanks and used by tank designers. One of my classmates accidentally ran his over with his car, and it didn't break.

Re:Awesome (1)

pclminion (145572) | more than 8 years ago | (#15884071)

Just because the HP calc used RPN doesn't mean the CPU itself was stack based. Does anybody know the specific processors used in those calculators?

Re:Awesome (1)

LinuxFreakus (613194) | more than 8 years ago | (#15884136)

Yes, they used 8-bit Saturn microprocessors. I believe they are in fact register-based. But my comment was meant to be sarcastic anyway :)

Re:Awesome (1)

seminumerical (686406) | more than 8 years ago | (#15884232)

I have an HP 48GX. One cannot think with a TI in hand. Once one has typeset the equation and carefully paired off the matching parentheses a TI reports the answer, but one isn't involved. On the other hand, solving problems with an HP is an interactive, instructive, and inspirational process. I'd be happy if they gave five function RPN calculators to elementary school students (reciprocal, 1/x, would be the fifth function).

But by preference I use a 1970's HP 65 (with a cardreader! I got it on Ebay). When they gave HP's more power and more features, they made them less good. HP peaked with the HP 67. Also HP failed to give the 48 GX a good programming language, and they gave up on the mythology of greatness.

I got in trouble in college for designing a stack computer and writing a sample program for it, in defiance of instructions to design a register based machine as an assignment.

Forth? (2, Informative)

dslmodem (733085) | more than 8 years ago | (#15883769)

I remember that FORTH is a language that supports stack computing. Hopefully that's not totally wrong. Unfortunately, it is really hard to understand FORTH programs.

Re:Forth? (1)

dslmodem (733085) | more than 8 years ago | (#15883797)

FORTH is a fun language to learn, though. I've had 4 happy months with it. Just think about it, I started to speak with others like:

"wash your face; wash your mouth; brush your teeth;", which is totally out of common order. :-)

Re:Forth? (2, Informative)

CaptnMArk (9003) | more than 8 years ago | (#15883998)

wouldn't that be more like:

face; mouth; teeth; brush; wash; wash;

There is one very widely used FORTH-type language (2, Insightful)

porkchop_d_clown (39923) | more than 8 years ago | (#15883906)

that almost every /. user encounters every day: Postscript and PDF.

Re:There is one very widely used FORTH-type langua (1)

$RANDOMLUSER (804576) | more than 8 years ago | (#15884078)

Chu__ Moo__ is my hero!

When will you learn? (1)

frostilicus2 (889524) | more than 8 years ago | (#15883780)

Universities may have tons of bandwidth, but the servers just can't take it. Looks like this site's dead.

Anyone seeding the torrents yet?

Does it run Windows?!? (5, Funny)

Stealth Dave (189726) | more than 8 years ago | (#15883785)

I wonder if Windows will be supported on a stack computer in the future?

No, no, no, NO! This is SLASHDOT! The proper response is "Does it run Linux?"

Re:Does it run Windows?!? (1)

Kaenneth (82978) | more than 8 years ago | (#15884036)

I think you mean a Beowulf cluster...

Re:Does it run Windows?!? (2, Funny)

doi (584455) | more than 8 years ago | (#15884139)

I think you mean a Beowulf cluster...

I think you mean a Beowulf STACK...

Which would be better, a cluster of Beowulf stacks, or a stack of Beowulf clusters? Of course, the answer is a stacked cluster of Beowulf clustered stacks.

Next Generation Stack Computing (-1, Troll)

Anonymous Coward | more than 8 years ago | (#15883786)

He also claims that a kernel would only be a few kilobytes large! I wonder if Windows will be supported on a stack computer in the future?

Very unlikely. All Windows bugs won't fit in a few kilobytes.

Next Generation? (1)

Lord Ender (156273) | more than 8 years ago | (#15883787)

Patrick Stewart would be displeased by this misleading headline.

X86 FPU's finally losing their stackness (4, Interesting)

GGardner (97375) | more than 8 years ago | (#15883793)

Since the dawn of time, the x86 FPU has been organized as a stack, which has been recognized as a mistake by modern computer architects. For one thing, it is hard to get a stack architecture to take advantage of multiple functional units. Only recently, with the development of SSE, 64-bit modes, and other additions, have we been able to move away from the stack on the x86.

Re:X86 FPU's finally losing their stackness (2, Interesting)

Anonymous Coward | more than 8 years ago | (#15883943)

we been able to move away from the stack on the x86

As someone who has written several Forth compilers for the x86 I'd like to point out that the design of the stacks on the x86 is very inefficient. The main stack is just that: a stack tied to one particular register. The FP stack was just a joke; a dead weasel could have designed something better. Anyway, I do like using Forth even under the x86 model - it's nice to know that my entire programming system fits into the on-die cache!

Re:X86 FPU's finally losing their stackness (2, Funny)

Tumbleweed (3706) | more than 8 years ago | (#15884079)

Since the dawn of time, the x86 FPU has been organized as a stack

No no no, since the dawn of time, Man has yearned to destroy the Sun!

x86 came much later, right after the COBOL and the other dinosaurs.

mod parent funny! (1)

RingDev (879105) | more than 8 years ago | (#15884089)

Okay, that's a priceless quote!


Cup of Joe (1)

AKAImBatman (238306) | more than 8 years ago | (#15883794)

Expert Eric Laforest talks about stack computers and why they are better than register-based computers. Apparently NASA uses stack computers in some of their probes.
Therefore, we should consider moving to Java-based Operating Systems and accelerator chips!


In case anyone is wondering, I'm only half joking. Java is a stack-based platform, perfectly suited to processors that don't actually exist in real life. Sun created the picoJava in the 90's, and claimed that it was faster than the Pentium of the day. They may have been correct at the time, but the chip was never widely used, so it was difficult to say for sure. With CPU speed becoming less important than stability, I/O, and correctness of code, it's possible that such machines may start showing up in more mainstream applications.

Re:Cup of Joe (1)

4815162342 (940334) | more than 8 years ago | (#15883981)

Actually, the idea of processors running Java natively hasn't died.
Some modern embedded processors have been specifically designed to execute Java natively, e.g. ARM Jazelle and the new Atmel AVR32.
These processors use various hardware mechanisms to make part of the RISC register file look like a stack. They then convert as much as 80% of the Java instruction set into native RISC instructions and interpret only those instructions which they cannot convert in simple hardware.
Such processors are mainly targeted at the increasing number of embedded devices which run Java, e.g. mobile phones.

Re:Cup of Joe (0)

Anonymous Coward | more than 8 years ago | (#15884013)

Ever hear of JStamp? Java is even used in real-time systems.

Fun and games (4, Funny)

Carnildo (712617) | more than 8 years ago | (#15883796)

It's all fun and games until someone hits a stack underflow.

Re:Fun and games (1)

Eric LaForest (994533) | more than 8 years ago | (#15884050)

Cute. :)
Then you either fill from memory, or you check for it at compile time.

just confused (1)

crodrigu1 (819002) | more than 8 years ago | (#15883823)

I hear all these experts, and I wonder why they are THE experts; one of the important movements in programming is non-stack-based programming methodologies. Another article had an expert opinion on why non-stack programming was better.

Re:just confused (1)

Eric LaForest (994533) | more than 8 years ago | (#15884150)

Both methodologies work. Using stacks is better in the small, when software and hardware size are the limiting factors.

Text of PPT (4, Informative)

Anonymous Coward | more than 8 years ago | (#15883851)

Discovered field by chance in 2000 (blame the Internet)
Hobby project (simulations and assembly) until 2004
Transformed into Independent Study thesis project
Overview of current state of research
Focus on programmer's view

Stack Computers: Origins
First conceived in 1957 by Charles Hamblin at the University of New South Wales, Sydney.
Derived from Jan Lukasiewicz's Polish Notation.
Implemented as the GEORGE (General Order Generator) autocode system for the DEUCE computer.
First hardware implementation of LIFO stack in 1963: English Electric Company's KDF9 computer.
Stack Computers: Origins (Part 2)
Independently discovered in 1958 by Robert S. Barton (US).
Implemented in the Burroughs B5000 (also in 1963).
Better known
Spawned a whole family of stack computers
The First Generation
The First Generation: Features
Multiple independent stacks in main memory
Stacks are randomly accessible data structures
Contained procedure activation records
Evaluated expressions in Reverse Polish Notation
Complex instructions sets trying to directly implement high-level languages (e.g.: PL/1, FORTRAN, ALGOL)
Few hardware buffers (four or less typically)
Supplanted in the 1980's by RISC and better compilers
Stack Computers: A New Hope
Enter Charles H. ("Chuck") Moore:
Creator of the stack-based FORTH language, circa 1970
Left Forth, Inc. in 1981 to pursue hardware implementations
NOVIX (1986), Sh-BOOM (1991), MuP21 (1994), F21 (1998), X18 (2001)
Currently CTO of Intelasys, still working on hardware
product launch expected April 3, 2006 at Microprocessor Summit
Enter Prof. Philip Koopman, Carnegie-Mellon University
Documented salient stack designs in "Stack Computers: The New Wave", 1989
The Second Generation
The Second Generation: Features
Two or more stacks separate from main memory
Stacks are not addressable data structures
Expression evaluation and return addresses kept separate
Simple instruction sets tailored for stack operations
Still around, but low-profile (RTX-2010 in NASA probes)
Strangely, missed by virtually all mainstream literature
Exception: Feldman & Retter's "Computer Architecture", 1993
Arguments and Defense
Taken from Hennessy & Patterson's "Computer Architecture: A Quantitative Approach", 2nd edition
Summary: Valid for First Generation, but not Second
Argument: Variables
More importantly, registers can be used to hold variables. When variables are allocated to registers, the memory traffic reduces, the program speeds up (since registers are faster than memory), and the code density improves (since a register can be named with fewer bits than a memory location).
  [H&P, 2nd ed, pg 71]
Manipulating the stack creates no memory traffic
Stacks can be faster than registers since no addressing is required
Lack of register addressing improves code density even more (no operands)
Globals and constants are kept in main memory, or cached on stack for short sequences of related computations
Ultimately no different than a register machine
Argument: Expression Evaluation
Second, registers are easier for a compiler to use and can be used more effectively than other forms of internal storage. For example, on a register machine the expression (A*B)-(C*D)-(E*F) may be evaluated by doing the multiplications in any order, which may be more efficient due to the location of the operands or because of pipelining concerns (see Chapter 3). But on a stack machine the expression must be evaluated left to right, unless special operations or swaps of stack position are done.
  [H&P, 2nd ed, pg. 71]
Less pipelining is required to keep a stack machine busy
Location of operands is always the stack: no WAR, WAW dependencies
However: always a RAW dependency between instructions
Infix can be easily compiled to postfix
Dijkstra's "shunting yard" algorithm
Stack swap operations equivalent to register-register move operations
Stacks are inverse registers!
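The shunting-yard algorithm mentioned in the slide can be sketched briefly. This is a simplified, illustrative version (left-associative binary operators and parentheses only), not Dijkstra's full algorithm.

```python
# Dijkstra's shunting-yard idea: compile infix to postfix using one
# operator stack. Operands pass straight through; operators wait on
# the stack until a lower-precedence operator (or the end) flushes them.
PREC = {"+": 1, "-": 1, "*": 2, "/": 2}

def to_postfix(tokens):
    out, ops = [], []
    for tok in tokens:
        if tok in PREC:
            # flush operators of equal/higher precedence (left-assoc)
            while ops and ops[-1] in PREC and PREC[ops[-1]] >= PREC[tok]:
                out.append(ops.pop())
            ops.append(tok)
        elif tok == "(":
            ops.append(tok)
        elif tok == ")":
            while ops[-1] != "(":
                out.append(ops.pop())
            ops.pop()                # discard the "("
        else:
            out.append(tok)          # operand goes straight to output
    while ops:
        out.append(ops.pop())
    return out

# (A*B)-(C*D)-(E*F), the H&P example quoted above
print(" ".join(to_postfix("( A * B ) - ( C * D ) - ( E * F )".split())))
```

The output order is exactly the instruction order a stack machine would execute, which is the slide's point: infix compiles to postfix easily.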
Argument: Stacks Are Bad, Mmmm'kay?
Performance is derived from fast registers, not the way they are used.
The stack organization is too limiting and requires many swap and copy operations.
The stack has a bottom, and when placed in slower memory there is a performance loss.
[H&P, 2nd ed, pg. 113]
(citing older sources)
Then it shouldn't matter that they're in a stack, and it's faster anyway.
Figure out sequence of operations at design or compile time. Swaps and copy operations can replace moves, loads and stores in some cases.
Strawman argument: all stacks have a bottom, and stacks 16-deep or more rarely require spilling to memory (R, R>
Address Register
>A, A>, A@, A!, A@+, A!+
NOP, UNDEF (4 left), PC@ (free!)
So What's It Good For?
Fast procedure calls
fast interrupts
Compact code
Reduced system complexity
Shorter pipeline
Simpler compilation
Consistent instantaneous performance
Hard Real-Time
For small systems: more !/$
The Future

The Software: Time
C: Edit-Compile-Execute
Lisp: Read-Eval-Print (REPL)

Compile Time
Run Time
The Software: Kernel
Name & Code Dictionaries, Input Buffer, Counted Strings
(CALL), (RET), (JMP), (JMP0), (JMP+), (LIT)
(DROP), (+), (A@+), etc...
How big? ~800 32-bit words (or less!)
The Software: Raw Stuff
call SCAN call DEFN RET
: l
The Software: Medium-Rare
call LOOK call POP_STRING call (CALL) RET

: c SCAN SCAN $c SCAN $c $c RET
call SCAN call $c RET

: n c SCAN c NUMI (RET)
call SCAN call NUMI RET
Macros: if...else
: abs (DUP) if- c negate ; else ;

: if- n# 0 c (JMP+) l# HERE_NEXT (@) (N-) 1 ;
: else (>R) c NEW_WORD (R>) (!) ;

DUP JMP+ call negate RET RET
Macros: Higher-Order Functions
: mapgen
c (DUP) c (>R)
l# STRING_TAIL c (CALL) c (R>)
n 1 # c # c (+)
c (DUP) c l c (CALL) (>R) c # (R>) c (+)
c (OVER) c (OVER) c (XOR)
c if (>R) c (JMP) (R>)
c else c (DROP) c (DROP) c ; ;
(DUP) (>R)
[LIT 1] (LIT) (+)
(DUP) l (CALL) >R (LIT) R> (+)
if >R (JMP) R>
else (DROP) (DROP) (RET) RET

Macros: Higher-Order Functions (2)
(DUP) (>R)
[LIT 1] (LIT) (+)
(DUP) l (CALL) >R (LIT) R> (+)
if >R (JMP) R>
else (DROP) (DROP) (RET) RET
: cipher n 2 mapgen encoder
[LIT 1] +
DUP call encoder [LIT 2] +
SCAN BlahBlahBlahBlahBlah l INPUT @ cipher
The Software: In Progress
Source code:
Extensions: 6671 bytes
Simple virtual machine: 3903 bytes
Metacompiler: 6641 bytes
Simple RPC: 1507 bytes
Total binary size, incl. Kernel: 6000 kwords (32-bit)

Re:Text of PPT (1)

Eric LaForest (994533) | more than 8 years ago | (#15883927)

Thank you!

JVM (4, Informative)

TopSpin (753) | more than 8 years ago | (#15883855)

Java bytecode is interpreted on a virtual stack based processor. Most bytecode gets JITed into native register based instructions, but the model JVM processor is a stack processor.

Some previous poster noted that CLI is also a stack based model. I can't verify that myself but it wouldn't surprise me; Microsoft is, after all, highly 'innovative' or something.

Re:JVM (3, Informative)

Anonymous Coward | more than 8 years ago | (#15883994)

It's not like Java was super-innovative in using a stack-based architecture. Java was designed with web applications in mind, and as such having small code size was extremely important for bandwidth reasons. One of the best features of stack machines is the small instruction size (no need to store register locations). So a stack machine is a natural choice for the JVM. If you wanna nag on .NET copying Java, there are plenty of good reasons, but this isn't one.
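The instruction-size point can be sketched with a rough back-of-envelope model. This is a toy comparison with invented encodings (8-bit opcodes, 5-bit register fields), not any real ISA; note that literal loads carry an address/immediate either way, so the gap shown is an upper bound on the idea, not a measurement.

```python
# Hypothetical encodings for computing (a + b) * c, illustrating why
# zero-operand stack instructions pack smaller than register instructions.

# Stack machine: opcodes only -- operands are implicit (top of stack).
STACK_PROGRAM = ["push a", "push b", "add", "push c", "mul"]

# Register machine: every instruction names its source/destination registers.
REG_PROGRAM = ["load r1, a", "load r2, b", "add r3, r1, r2",
               "load r4, c", "mul r5, r3, r4"]

def encoded_bits(program, opcode_bits=8, reg_field_bits=5):
    """Rough size estimate: one opcode plus one field per register operand."""
    total = 0
    for insn in program:
        parts = insn.replace(",", "").split()
        regs = sum(1 for p in parts if p.startswith("r"))
        total += opcode_bits + regs * reg_field_bits
    return total

print("stack   :", encoded_bits(STACK_PROGRAM), "bits")   # 40 bits
print("register:", encoded_bits(REG_PROGRAM), "bits")     # 85 bits
```

The register version spends most of its bits naming registers; the stack version spends none, which is exactly the property that made compact bytecode attractive for the JVM.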

Appropriate instruction set (4, Insightful)

dpilot (134227) | more than 8 years ago | (#15883908)

Even in assembler, the mainstream hasn't been programming to the metal since Pentium I.

Beginning with the Pentium II, and propagating to pretty much all of the other architectures in short order, none of the mainstream CPUs have exposed their metal. We have an instruction set, but it's torn into primitives and scheduled for execution. We don't see the primitives, not even in assembler. AFAIK, there isn't even a way to use the true primitives, except perhaps on the Transmeta, where it was undocumented.

So in this light, since we're already fairly far from the true metal, it seems to me that it makes a lot of sense to re-evaluate the instruction set itself. Of course one could raise the Itanium argument, but I would also argue that politics were too big a part, there. Then again, one could also argue that x86 and amd64 are just so entrenched that it doesn't matter, and they do run well on today's hardware.

Then again I could cite my old favorite, the 6809. It started from the same origins and precepts as RISC, but a different attitude. RISC simply tried to optimize the most common operations, at the expense of less common ones. With the 6809, they tried to understand WHY certain things were happening, and how those things could be done better and faster. They ended up with a few more transistors, the same speed, and something approaching 3X the throughput, as compared to the 6800. More similar to the current topic, there was a paper on 'contour mapping', mapping blocks of cache into stacks and data structures. The 6809 was too old for a cache, but it seems to me that combining its concepts with contour mapping would be interesting indeed.

But like stack engines, it's not x86/amd64 compatible.

Why these downright stupid comments? (3, Insightful)

Jerk City Troll (661616) | more than 8 years ago | (#15883920)

You “wonder if Windows will run on a stack computer?” Where do you people come up with this nonsense? This is as irrelevant as saying: "someday, car tires will not be made of rubber. I wonder if Windows will support them?" Really, there is no need to try to come up with insightful remarks or questions to tack on the end of your story submissions. Just present the article and leave it at that. Let everyone else do the thinking.

Stop Hurting My Eyes (5, Informative)

Anonymous Coward | more than 8 years ago | (#15883933)

Dear Slashdot Contributors,

Please stop describing undergrads doing independent studies as "Experts". There's a reason that mainstream processors haven't picked up on "Stack Processors", and it has nothing to do with binary compatibility, the difficulty of writing a compiler for their instruction set, or general programming complexity. Stack Machines are really only good for In-Order processing. Wonder why NASA probes have Stack Processors? Because they don't freaking need to do out-of-order processing in order to get the performance they require, and they probably found stack processors to have a favorable power/performance ratio for their application. You will never see a full-blown Windows running on a Stack processor, because Superscalar processors destroy their performance.

"My research project shows that some people wrote nifty papers in the 1970s, but everyone ignored them for an obvious reason I don't understand." -> Not an Expert

Re:Stop Hurting My Eyes (2, Insightful)

HiThere (15173) | more than 8 years ago | (#15884222)

I believe that your criticisms apply to only specific stack based architectures. That they do apply to the commonly presumed architectures I accept, but this is far from asserting them as general truths.

Actually, even asserting that register-based computers solve the problems that you are describing is not a general truth. You need to specify how many registers of what type can deal with how many out-of-order processes. And I suspect that a stack computer with 5 or 6 stacks could deal as flexibly with that problem as is commonly required...with a bit of extra leeway. It would need to be able to implement rapid task switching based on a "high priority task stack"...and maintaining that would be a bit of a nuisance...but that particular stack could have a very short limit, say 50 items. I'll agree that this is one function that it would be better to handle in scratchpad memory, but it would be eminently possible to do it purely from a stack-based approach. (Still, there's a good reason that priority queues are queues rather than stacks...and I would argue that a deque would be an even better approach.)

Well, I'm neither a hardware engineer nor a computer system designer, so I could be wrong. OTOH, you're anonymous, which means that your arguments only deserve the weight that their own internal logic provides.

Re:Stop Hurting My Eyes (0)

Anonymous Coward | more than 8 years ago | (#15884248)

The difficulty of writing a compiler for their instruction set, or general programming complexity. Stack Machines are really only good for In-Order processing.

Nope; wrong on both counts. Forth compilers are quite easy to write, and one which optimises to better-quality code than most C compilers is not as hard to write as those same C compilers. Out-of-order processing has been done in stack machines for years now. The reason it's not mainstream is the same reason no processor is mainstream other than the crappy Intel architecture: scale of pre-deployment and economies of production. Simply put: no new system of computing can get a hold in the market until the current Intel/AMD design finally hits the wall for good. Until then, the cost/benefit of backwards compatibility and large-scale fabs will keep stack machines and pretty well anything else in the fringes.

Question about stack computer types (3, Funny)

thewiz (24994) | more than 8 years ago | (#15883956)

Do these come in short- and tall-stack versions?
Are maple syrup and butter options?

Re:Question about stack computer types (1)

Tumbleweed (3706) | more than 8 years ago | (#15884091)

Do these come in short- and tall-stack versions?
Are maple syrup and butter options?

I'm more into grid computing, where it's all about the waffles!

Postscript is based on stacks (1, Informative)

Anonymous Coward | more than 8 years ago | (#15883983)

Once upon a time when laser printers cost $10K, before CorelDraw, I learned to program Postscript. Yes it's a real programming language but it prefers to do things with stacks. You could write real programs that did real calculations etc. What a pain. I'd rather program in machine code. I'd rather program in microcode. aargh. You get the point.

I still use postscript to create graphics but if any computation is involved, I use another language.

Having said the above, I realize that most languages insulate you from the architecture. If you're programming in Python or Java, you probably wouldn't notice the difference. My experience with Postscript convinces me that such an architecture isn't as efficient as some people think it is, though. There's just too much popping and pushing just to get at a value. Since a given value changes its position on the stack, you also need to keep track of where it is. Bleah. This isn't to say that stacks don't have a place. A variant of a stack called a circular buffer really speeds things up on a DSP. For general-purpose use, though...
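The "popping and pushing just to get at a value" complaint is easy to demonstrate with a toy model of a PostScript-style operand stack. This is a hypothetical Python sketch (the operator names mirror PostScript's `dup`/`exch`/`index`/`roll`, but the semantics here are simplified): reusing one value in two subexpressions forces explicit copying and rotation instead of just naming a variable.

```python
# Toy model of PostScript-style operand-stack shuffling.

stack = []

def push(x):  stack.append(x)
def dup():    stack.append(stack[-1])
def exch():   stack[-1], stack[-2] = stack[-2], stack[-1]
def index(n): stack.append(stack[-1 - n])      # copy the n-th item down
def roll(n, j):                                # rotate the top n items by j
    j %= n
    top = stack[-n:]
    stack[-n:] = top[-j:] + top[:-j]
def mul():    b, a = stack.pop(), stack.pop(); stack.append(a * b)
def add():    b, a = stack.pop(), stack.pop(); stack.append(a + b)

# Compute a*b + a*c with a pushed only once (a=2, b=3, c=4).
push(2); push(3); push(4)   # stack: a b c
index(2)                    # a b c a   -- dig out a copy of the buried a
mul()                       # a b (a*c)
roll(3, 1)                  # (a*c) a b -- rotate so a and b are adjacent
mul()                       # (a*c) (a*b)
add()                       # result
print(stack)                # -> [14]
```

In an infix language this is just `a*b + a*c`; on the stack, half the "program" is bookkeeping about where the operands currently sit, which is the poster's point.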

Stack machines - again? (3, Insightful)

Animats (122034) | more than 8 years ago | (#15883987)

Who can forget the English Electric Leo-Marconi KDF9, the British stack machine from 1960. That, and the Burroughs 5000, were where it all began.

Stack machines are simple and straightforward to build, but are hard to accelerate or optimize. Classically, there's a bottleneck at the top of the stack; everything has to go through there. With register machines, low-level concurrency is easier. There's been very little work on superscalar stack machines. A student paper from Berkeley is one of the few efforts.

It's nice that you can build a Forth machine with about 4000 gates, but who cares today? It would have made more sense in the vacuum tube era.

Re:Stack machines - again? (1)

powerlord (28156) | more than 8 years ago | (#15884109)

It's nice that you can build a Forth machine with about 4000 gates, but who cares today

Considering that we seem to be entering the vacuum tube era in nano-tech, perhaps a 4000 gate forth machine can be used to run programmable nano-machines.

Maybe funny only to me, but.. (0, Redundant)

totallygeek (263191) | more than 8 years ago | (#15883990)

If I saw one, I would exclaim, "Wow, that computer is stacked!"

Not a good idea (4, Insightful)

coats (1068) | more than 8 years ago | (#15883997)

The reason modern systems are so fast is that they hide a lot of fine grained parallelism behind the scenes. It is very hard to express this kind of parallelism in a way that it can be executed on a stack machine.

How important is this parallelism? Consider that modern processors have 10-30 pipeline stages and 3-6 execution units that can each have an instruction executing at every stage; moreover, most of them have out-of-order execution units that handle instructions more in the order that data is available for them than the order they are listed in the object file (and main memory is hundreds of times slower than the processors themselves, so this is important!). Typically, such processors can have more than 100 instructions in some stage of execution (more than 250 for IBM POWER5 :-)

Consider, also, that the only pieces of anything-like-current stack hardware are Intel x87-style floating point units, that Intel is throwing away -- for good reason! -- in favor of (SSE) vector style units. In the current Intel processors, the vector unit emulates an x87 if it needs to -- but giving only a quarter of the performance.

Someone made remarks about Java and .Net interpreters: in both cases, the interpreter is simulating a purely scalar machine with no fine grained parallelism; no wonder an extensible software-stack implementation is one of the simplest to implement. Stacks are not the way that true Java compilers like gjc generate code, though!

No, stack-based hardware is not a good idea. And it hasn't been since some time in the eighties, when processors started to be pipelined and processor speed started outstripping memory speed.

Re:Not a good idea (0)

Anonymous Coward | more than 8 years ago | (#15884133)

But stack-based hardware is much simpler; it would run at a higher clock rate, and you could put dozens of them on the same piece of silicon required for one VLIW CPU.

NASA (4, Insightful)

HTH NE1 (675604) | more than 8 years ago | (#15883999)

Apparently NASA uses stack computers in some of their probes.

Is that supposed to be a ringing endorsement? I thought NASA was using components the rest of the world treated as obsolete due to their proven durability and reliability in the radiation of space.

Re:NASA (2, Interesting)

Medievalist (16032) | more than 8 years ago | (#15884186)

I thought NASA was using components the rest of the world treated as obsolete due their proven durability and reliability in the radiation of space.
Essentially correct. It is so costly and time-consuming to get a new component certified for use that it's usually less work to find a clever way to use old components. Then ten months after launch production ceases on the old part, and you have to have special ones built at a hundred times the cost (military option) or scavenge them on eBay (scientific option).

"good enough" syndrome (0)

Anonymous Coward | more than 8 years ago | (#15884003)

A big problem with trying to sell stack-based CPUs, or for that matter any hardware different from what "everyone else" uses, is that since they're highly incompatible everything would have to be rewritten for them, and besides, the stuff that's already out there is "good enough". This is one of the reasons for the big Apple/x86 battle that still rages today, and also comes into effect with Linux and other *ix OSes.

The only way, IMO, that a stack-based machine would ever get a toe in the door would be if one were produced that was priced similarly to the current x86 platforms but was several times as fast, or was within 3X the price and was two orders of magnitude faster.

FORTH post! (2, Funny)

Anonymous Coward | more than 8 years ago | (#15884021)

Nothing to see here. Sorry.

No, they're not better (3, Interesting)

vlad_petric (94134) | more than 8 years ago | (#15884046)

I didn't even try to torrent-download the thing, but I can tell you why stack machines aren't better than register-based ones. The main reason is that it's much much much harder to do renaming of a stack than of "regular" registers. Without renaming you can't do out-of-order execution... Currently, there are two "mainstream" architectures that include stacked register files: Itanium and Sparc. Neither of them has an out-of-order implementation.

But why do you need out-of-order execution? Well, misses to memory are very expensive these days - it can easily take from 200 to 400 cycles to service a load that misses all the way to main memory. This can have a significant effect on performance. What out-of-order execution does is allow independent instructions that are younger than the load to execute in parallel with it. Quite often these instructions executed in parallel will generate other misses to main memory, overlapping their latencies. So - the latency of loads that miss is still very high, but at the very least the processor is not idle while servicing them (for a good read, see "MLP Yes! ILP no!" by Andy Glew).

Itanium and Sparc compensate for the fact that they don't do stuff out-of-order by putting sh*tloads of L2/3 cache on-chip. The cost of a miss is still very high, but it happens much less often. The manufacturing cost of a chip is also much higher.

Note that what NASA is sending into space is "old" tech. The reason - well, cosmic rays are much stronger in outer space, and the smaller the gate, the easier it is for them to flip its state.

P.S. I'm a computer architect.
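The miss-overlap argument above is just arithmetic, and a toy model makes the magnitude clear. All numbers here are illustrative assumptions (a 300-cycle miss, 10 cycles to issue each subsequent independent load), not measurements of any real machine.

```python
# Back-of-envelope sketch of memory-level parallelism (MLP):
# overlapping independent cache misses hides most of their latency.

MISS_LATENCY = 300   # assumed cycles per miss to main memory
ISSUE_GAP = 10       # assumed cycles between issuing independent loads
N_MISSES = 4

# In-order / blocking: each miss is fully serviced before the next issues.
serial_cycles = N_MISSES * MISS_LATENCY

# Out-of-order, with enough independent loads in the instruction window:
# the misses overlap, so total time is roughly one full latency plus the
# small issue gaps for the rest.
overlapped_cycles = MISS_LATENCY + (N_MISSES - 1) * ISSUE_GAP

print("serial:    ", serial_cycles, "cycles")     # 1200
print("overlapped:", overlapped_cycles, "cycles") # 330
```

Under these assumptions, four overlapped misses cost barely more than one serial miss, which is why losing out-of-order execution (as renaming-hostile stack architectures do) hurts so much on miss-heavy code.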

Always fighting the wrong battle (1)

91degrees (207121) | more than 8 years ago | (#15884092)

A 1K kernel will hardly make a lot of difference to anyone. I have a whole gigabyte of RAM in my machine. 1K? 100K? 1 Megabyte? 10 Megs? That's a pretty chunky kernel and we're still taking up a trivial chunk of our memory. Even the huge executables that we have these days aren't causing serious memory issues on modern PCs. Memory has outpaced executable size.

What we do want is lower power, and smaller. We can always take advantage of small or low power devices. If you want to reduce executable size, write a simple stack machine and an interpreter.
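The parent's closing suggestion — "write a simple stack machine and an interpreter" — is only a few lines of code. Here is a minimal sketch; the opcode names and list-based "bytecode" are invented for illustration, not taken from any real VM.

```python
# A tiny zero-operand stack-machine interpreter.

def run(code, data=()):
    """Execute a list-encoded stack bytecode; returns the final stack."""
    stack, pc = list(data), 0
    while pc < len(code):
        op = code[pc]; pc += 1
        if op == "LIT":                  # next cell is an inline literal
            stack.append(code[pc]); pc += 1
        elif op == "ADD":
            stack.append(stack.pop() + stack.pop())
        elif op == "MUL":
            stack.append(stack.pop() * stack.pop())
        elif op == "DUP":                # copy top of stack
            stack.append(stack[-1])
        elif op == "DROP":               # discard top of stack
            stack.pop()
        else:
            raise ValueError("bad opcode: %r" % op)
    return stack

# (3 + 4) squared, reusing the sum via DUP instead of recomputing it:
print(run(["LIT", 3, "LIT", 4, "ADD", "DUP", "MUL"]))  # -> [49]
```

Only LIT carries an operand; everything else is a bare opcode, which is where the compact-executable argument comes from — though, as the parent notes, on a modern PC saving that space buys you very little.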

Stack computers are hardly new (4, Insightful)

Junks Jerzey (54586) | more than 8 years ago | (#15884100)

Normally this kind of stuff doesn't bug me, but this is like an article in 2006 proclaiming the benefits of object-oriented programming. Doesn't anyone know their computing history?

There were stack computers in the 1960s and 1970s. There was a resurgence of interest in the 1980s--primarily because of Forth's popularity in embedded systems--resulting in a slew of stack-based "Forth engines." Forth creator Chuck Moore has been working on a series of custom Forth CPUs for 20+ years now. His latest version has 24 cores on one chip (and was entirely designed by one person and uses MILLIWATTS of power).

Stack processors and languages have one big advantage: they minimize the overall complexity of a system. The tradeoff is that they often push some of that complexity onto the programmer. That's why Forth tends to shine for small, controlled systems (like a fuel injector or telescope controller or fire alarm), but you don't see people writing 3D video games or web browsers in Forth.

Re:Stack computers are hardly new (1)

Sebastopol (189276) | more than 8 years ago | (#15884213)

how fast does it run specINT @milliwatts? It doesn't? Oh, there ya go.

Saying it is low power is meaningless when there are CPUs that use hundreds of MICROWATTS in a plethora of embedded devices today.

Re:Stack computers are hardly new (1)

Junks Jerzey (54586) | more than 8 years ago | (#15884236)

Okay, I posted before reading the article. Can you tell :)

I think the fault is more with the submitter of the story than with the author's presentation. The author just gives an overview of how stack computers work and their history. The submitter apparently never knew about stack computers and is all excited about them as a possible future of computing. The presentation is simply a history, mostly about stuff that's 20+ years old. So, yes, while stack processors have been commercially available and they've been used in commercial embedded systems work, they've failed to get outside that realm.

(As an aside, I'd say 99% of desktop programmers have no clue about all the stuff that's gone on in the embedded systems world, so it isn't surprising that old stuff seems new.)

Can we bring back BetaMax too? (1)

JoshDM (741866) | more than 8 years ago | (#15884112)

That would rock! Finally, the supposedly superior yet classically discarded technologies are coming back!

Java (1)

HeaththeGreat (708430) | more than 8 years ago | (#15884203)

If stack-based hardware does become more the norm, JIT JVMs will become a lot simpler since the JVM is stack-based.