
CPU DB: Looking At 40 Years of Processor Improvements

timothy posted more than 2 years ago | from the why-in-my-day-we-didn't-have-binary dept.


CowboyRobot writes "Stanford's CPU DB project (cpudb.stanford.edu) is like an open IMDB for microprocessors. Processors have come a long way from the Intel 4004 in 1971, with a clock speed of 740 kHz, and CPU DB shows the details of where and when the gains have occurred. More importantly, by looking at hundreds of processors over decades, researchers are able to separate the effect of technology scaling from improvements in, say, software. The public is encouraged to contribute to the project."


Piss frost! (-1)

Anonymous Coward | more than 2 years ago | (#39606779)

Prost fiss!

Re:Piss frost! (-1)

Anonymous Coward | more than 2 years ago | (#39607303)

We really need a frosty piss database to keep track of the piss temperature on slashdot. I suspect it's a leading indicator of globull warming, so studying it is of the utmost importance.

An even longer way (5, Interesting)

hendrikboom (1001110) | more than 2 years ago | (#39606799)

Processors have come an even longer way since the days when main memory was on a magnetic drum, and the machine had to wait for the drum to revolve before it could fetch the next instruction. That was the first machine I used.

Re:An even longer way (4, Funny)

dargaud (518470) | more than 2 years ago | (#39606889)

OK, I'll get off your lawn without even making a snarky comment... Are you Mel [pbm.com] by any chance?

Re:An even longer way (5, Funny)

K. S. Kyosuke (729550) | more than 2 years ago | (#39607007)

It could be him...look, he only has a /. ID of 154 and he managed to make Slashcode print it out in binary to boot!

Re:An even longer way (5, Funny)

hendrikboom (1001110) | more than 2 years ago | (#39607243)

How I did that will have to be my little secret.

-- hendrik

Re:An even longer way (2)

K. S. Kyosuke (729550) | more than 2 years ago | (#39607265)

(OK, OK, make that 78...an off-by-one-zero error ^_~)

Re:Not Mel (3)

hendrikboom (1001110) | more than 2 years ago | (#39607239)

No, I'm not Mel. I did try programming it in machine code, though, instead of the high-level (all numerical, though) interpreted language it provided, and got nowhere. Perhaps I needed an oscilloscope as a debugging tool, and didn't have one. What I managed to learn of the machine language I figured out by reading the circuit diagrams. But I wasn't Mel, and I really appreciate decent languages and programming tools. Pity there are so few of them, even now. The best seem never to have been popular.

-- hendrik

Re:Not Mel (1)

dargaud (518470) | more than 2 years ago | (#39607443)

I wonder what you do consider the 'best languages and programming tools'.

PS: The guy who taught me programming in 1981 was 75 years old and had been a paratrooper during WWII before getting into electronics and computers. I don't know why I just remembered that...

Re:Not Mel (4, Interesting)

hendrikboom (1001110) | more than 2 years ago | (#39608565)

I really like strongly typed, garbage-collected, secure languages that compile down to machine code. I've used the excellent and fast Algol 68 compiler long long ago on CDC Cyber computers, and now I use Modula 3 on Linux, when I have a choice. They compile down to machine code for efficiency, and give access to fine-grained control of data -- you can talk about bytes and integers and such just as in C, but they still manage to take care of safe memory allocation and freeing.

Modula 3 is a more modern language design, though I have a subjective preference for the more compact and freer-style Algol 68 syntax. Modula 3 has a clean modular structure which is completely separate from its object-oriented features. You're not required to force everything into object types. You can if it fits your problem, but you can still use traditional procedural style if that's what you need.

And Modula 3 functions well as a systems programming language. It has explicit mechanisms to break language security in specific identified parts of a program if that's what's necessary. It almost never is.

And, by the way, to avoid potential confusion, Modula 3 was *not* designed by Niklaus Wirth. Modula and Modula 2 were, but Modula 3 is a different language from either.

-- hendrik
 

its not the language any more, its libs and tools (3, Interesting)

cheekyboy (598084) | more than 2 years ago | (#39609651)

It doesn't matter how good any language is; it's the available framework, tools and libraries which make it useful.

i.e., C by itself is simple and takes lots of coding to make something useful (if you do not use any libs at all).

Strongly typed langs can be a bitch if you are editing in vi/vim, as they don't 'know about the language' to auto help you out, besides colors. Today CPUs are so fast, the IDE should help you program and act as if types are free; the IDE can auto-determine types and fix them for you. Otherwise, if you have to spend 20% of the byte space of your language defining types and pre-casting EVERYTHING, then it's not an efficient and smart human-friendly language.

It's kind of funny that in assembly language you only really have 3 types of ints plus floats, and pointers, and are free to interpret values to your imagination's content.

If you have to reinvent everything because it's not there, then you're wasting your time.
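As it happens, the "auto determine types" wish already exists as type inference in several strongly typed languages. A minimal sketch in Haskell (my example, not the poster's):

```haskell
-- square needs no annotation: the compiler infers Num a => a -> a
-- and still checks every use of it.
square x = x * x

main :: IO ()
main = print (map square [1 .. 5 :: Int])   -- prints [1,4,9,16,25]
```

The annotations strong-typing fans write are optional documentation here; leaving them out loses no checking, since the inferred types are enforced anyway.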

Re:its not the language any more, its libs and too (4, Informative)

hendrikboom (1001110) | more than 2 years ago | (#39610203)

The point of strong typing is not to tell the compiler how to make your program efficient. That's a pleasant side effect, but it's not the point at all. The point is to tell the compiler, possibly redundantly, what kind of values you intend to have in variables, so it can tell you when you get it wrong. Proper strong typing can catch almost all coding errors before your program is ever executed. It takes longer to code it and get it through the compiler, sure, but the time you lose this way is far outweighed by the reduced time you spend debugging.

In addition, the explicit type information on all function definitions makes it vastly easier to understand a program you've never seen before when it's handed to you.

Yes, there are a few situations where you can't specify types statically. They are pretty rare in a properly designed type system. A good language has mechanisms to handle the few cases that still remain.

-- hendrik
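A minimal illustration of the parent's point, sketched in Haskell rather than hendrik's Modula 3 (my example, not his): the declared type states the intent, and the compiler rejects a wrong call before the program ever runs.

```haskell
-- The signature documents what average expects; every call site
-- is checked against it at compile time.
average :: [Double] -> Double
average xs = sum xs / fromIntegral (length xs)

main :: IO ()
main = print (average [1, 2, 3])
-- main = print (average "oops")   -- refuses to compile:
--   couldn't match type 'Char' with 'Double'
```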

Re:its not the language any more, its libs and too (1)

Tablizer (95088) | more than 2 years ago | (#39611119)

Proper strong typing can catch almost all coding errors before your program is ever executed. It takes longer to code it and get it through the compiler, sure, but the time you lose this way is far outweighed by the reduced time you spend debugging.

Hogwash. It depends on the domain (industry). Type-related errors make up only a minority of the errors I encounter in dynamic languages (and many of them could be caught with Lint-like "suspicious" code detectors).

Plus, debugging is often faster with dynamic languages because you don't have to wait for the long compile step to study the bug by altering the code, adding logging statements, etc.

And, dynamic languages are usually easier to read than compiler-centric languages (at least to my eyes). Easier to read means that it's easier to review the code for bugs and design problems. The heavy-typing/compiler style tends to make the code bureaucratic and long-winded.

Further, dynamism allows more meta-programming techniques to add design-simplification abstractions.

Re:its not the language any more, its libs and too (0)

Anonymous Coward | more than 2 years ago | (#39611355)

Hogwash. It depends on the domain (industry). Type-related errors make up only a minority of the errors I encounter in dynamic languages (and many of them could be caught with Lint-like "suspicious" code detectors).

A good strongly typed language turns almost all errors into type errors that are caught at compile time. This is taken to an extreme in, for example, Agda, where you cannot index an array out of bounds because of its type (which is also provable).
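A rough flavor of this, sketched with Haskell GADTs rather than Agda's full dependent types (my approximation, not the AC's code): track a vector's length in its type, and taking the head of an empty vector becomes a compile-time error instead of a runtime crash.

```haskell
{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

data Nat = Z | S Nat

-- A vector whose length n is part of its type.
data Vec (n :: Nat) a where
  VNil  :: Vec 'Z a
  VCons :: a -> Vec n a -> Vec ('S n) a

-- vhead only accepts vectors of length ('S n), i.e. non-empty ones.
vhead :: Vec ('S n) a -> a
vhead (VCons x _) = x

main :: IO ()
main = print (vhead (VCons (42 :: Int) VNil))
-- print (vhead VNil)   -- does not compile: length 'Z can't match ('S n)
```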

Re:its not the language any more, its libs and too (0)

Anonymous Coward | more than 2 years ago | (#39611375)

Plus, debugging is often faster with dynamic languages because you don't have to wait for the long compile step to study the bug by altering the code, adding logging statements, etc.

And, dynamic languages are usually easier to read than compiler-centric languages (at least to my eyes). Easier to read means that it's easier to review the code for bugs and design problems. The heavy-typing/compiler style tends to make the code bureaucratic and long-winded.

Further, dynamism allows more meta-programming techniques to add design-simplification abstractions.

Have you ever tried Haskell? One normal line of Haskell often takes 10 lines of your average dynamic language. And since Haskell is a functional language it allows very nice abstractions with type-checked meta-programming.

Also debugging is easier in interpreted languages, not dynamic ones, which you seem to confuse.
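The conciseness claim is easy to illustrate with a typical one-liner (my example, not the AC's):

```haskell
import Data.List (group, sort)

-- A word-frequency table in one definition, fully type-checked.
wordFreq :: String -> [(String, Int)]
wordFreq = map (\ws -> (head ws, length ws)) . group . sort . words

main :: IO ()
main = print (wordFreq "the quick fox and the lazy dog and the cat")
-- [("and",2),("cat",1),("dog",1),("fox",1),("lazy",1),("quick",1),("the",3)]
```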

Re:its not the language any more, its libs and too (1)

gander666 (723553) | more than 2 years ago | (#39612175)

dang, my mod points expired yesterday. I want to mod this up so much.

Re:Not Mel (0)

Anonymous Coward | more than 2 years ago | (#39610859)

I liked Algol 68 too, but come on, it doesn't cut it any more. Just compare it to a modern language like Java or PHP. There is no equivalent to (PHP): 'foreach', 'explode', 'join', 'stripos', 'date', 'str_replace', 'mysql_query' and associative arrays. Libraries and functionality make all the difference. It takes these types of construct or function to make real progress against real world problems. Otherwise you are left shuffling bits.

In a contest to write real applications Algol 68 is left in the dust. Sad but true.

Re:Not Mel (1)

mikechant (729173) | more than 2 years ago | (#39611957)

I've used the excellent and fast Algol 68 compiler long long ago on CDC Cyber computers

I've never used Algol 68 'in anger', only played with it, but it always struck me as rather elegant in an uber-nerd sort of way.

On a fairly trivial note, I always liked the keyword reversal - IF...FI, CASE...ESAC. I found it makes the code logic stand out more than other schemes. Although that does seem to create a potential problem if you want to use a palindromic word...

Re:An even longer way (1)

Sponge Bath (413667) | more than 2 years ago | (#39606903)

More bizarre to me was reading about mercury filled tubes as memory.

Re:An even longer way (2)

spaceyhackerlady (462530) | more than 2 years ago | (#39606969)

Back in the days of magnetic drums it was common for instructions to specify the address of the next instruction, to handle drum rotation latency. Were their assemblers that smart, or did programmers do instruction scheduling by hand?

...laura

Re:An even longer way (4, Informative)

hendrikboom (1001110) | more than 2 years ago | (#39607025)

They were smart. At least the one I had documentation for was. And the programmer could override it if he thought he was smarter. But the assembler needed a newer model of computer than the one I had -- one that could actually read and write letters on its typewriter instead of just digits. (though u, v, w, x, y, z counted as hexadecimal digits)

The machine, in case anyone is interested, was a Bendix G-15d.

That was in 1962, if I remember correctly. I think the machine was about 5 years old. The next year the university got an IBM 1620, with alphanumeric I/O and 20,000 digits of actual core memory. Change was relentlessly fast in those days, too. The big difference is that every few years we got qualitative, not just quantitative change.

-- hendrik
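To make the "smart assembler" concrete: on a drum machine every instruction carried the address of its successor, so the assembler (or a patient programmer) placed the next instruction at whatever drum address would be passing under the heads just as the current one finished. A hypothetical sketch of that placement rule (the track length here is illustrative, not a verified G-15 figure):

```haskell
-- One drum track holds trackSize words and rotates continuously.
trackSize :: Int
trackSize = 108   -- assumed words per track, for illustration only

-- If the current instruction sits at address addr and takes execWords
-- word-times to execute, the drum will have rotated that far by the
-- time the next fetch is due; placing the successor there costs zero
-- rotational latency.
bestNextAddr :: Int -> Int -> Int
bestNextAddr addr execWords = (addr + 1 + execWords) `mod` trackSize

main :: IO ()
main = print (bestNextAddr 100 12)   -- (100 + 1 + 12) mod 108 = 5
```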

Re:An even longer way (4, Interesting)

realityimpaired (1668397) | more than 2 years ago | (#39607561)

That was in 1962, if I remember correctly. I think the machine was about 5 years old. The next year the university got an IBM 1620, with alphanumeric I/O and 20,000 digits of actual core memory. Change was relentlessly fast in those days, too. The big difference is that every few years we got qualitative, not just quantitative change.

We do still get qualitative change in computing today, just that for *most* of what people actually do with computers, they're fast enough that the human is the limiting factor. For anything where human input isn't a factor (think large number crunching operations), there is still a noticeable difference from generation to generation.

Case in point... I do a fairly large amount of video encoding (DVD rips, and other stuff). I use 64-bit software, with a 64-bit operating system. I have recently upgraded from a first generation i7 to a second generation i5. I did go from 4GB to 16GB of RAM, but the actual usage when doing the transcode operation has remained stable, around 1.2GB in use (there's no swapping happening on either system), and the actual type of memory used is the same (speed and bus). That said, the transcode operation from the original MPEG-2 DVD rip to h.264 has gone from about 20 minutes for a 42-minute TV episode to 6 minutes for the same 42-minute TV episode, all else being equal. The difference... I went from a quad core/HT i7 (8 threads at 1.6GHz) to a quad core i5, overclocked (4 cores at 4.7GHz). I went from a top end processor 1 generation old to a current generation midrange processor, and saw a *huge* improvement in performance for a number-crunching heavy operation. Now... I am pushing less than double the number of operations per second (8x1.6 = 12.8, 4x4.7 = 18.8), but there is more than a double improvement in real world performance. This is down to improvements in the architecture of the processor, and how it handles the operations.

That being said, my Facebook page doesn't load any faster than it did with the i7 (or on my celeron-based laptop for that matter), and my ability to type is still the limiting factor in how quickly I can use a word processor. If you're not doing heavy number crunching, there is almost no reason to upgrade your computer today (power consumption is an argument that can be made, but the difference is rarely enough to make up for the cost of buying a computer).
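Putting the parent's figures side by side (my arithmetic, using only the numbers given): the measured speedup is well above the raw clock-times-cores ratio, and that gap is the architectural improvement showing through.

```haskell
main :: IO ()
main = do
  let rawOld  = 8 * 1.6 :: Double   -- i7: 8 threads at 1.6 GHz = 12.8
      rawNew  = 4 * 4.7 :: Double   -- i5: 4 cores at 4.7 GHz  = 18.8
      timeOld = 20 :: Double        -- minutes per episode before
      timeNew = 6  :: Double        -- minutes per episode after
  print (rawNew / rawOld)    -- ~1.47x nominal operations per second
  print (timeOld / timeNew)  -- ~3.33x actual transcode throughput
```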

Re:An even longer way (2)

hendrikboom (1001110) | more than 2 years ago | (#39608583)

You could still largely use the same code, right? That would make it a quantitative difference.

But there is a saying that an order of magnitude is by itself a qualitative difference.

Re:An even longer way (1)

Anonymous Coward | more than 2 years ago | (#39608649)

Running 8 threads on a parallel task doesn't really result in 8x more processing power. A good solid non-blocking strategy across multiple threads results in exponential gains, but it's rare to find software that's like that. h.264 is a great example though of how effective a solid non-blocking strategy can be in a task that is highly parallel by its very nature.

Sadly most of the gains in processing speed have been dwarfed by HDD speeds. Recently had to calculate atmospheric data with samples every 30min spaced over 7 days, had all the required data to calculate it ahead of time and on a solid state drive that sat on a PCI-E 8x bus. Still it took several hours utilizing the CPUs and GPUs in the machine, with roughly 40% of that time spent on reads/writes to the HDD.
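The first sentence is Amdahl's law in action; a small sketch (the 10% serial fraction is my illustrative number):

```haskell
-- Amdahl's law: with serial fraction s, the speedup on n cores is
--   1 / (s + (1 - s) / n), which can never exceed 1 / s.
speedup :: Double -> Double -> Double
speedup s n = 1 / (s + (1 - s) / n)

main :: IO ()
main = mapM_ (print . speedup 0.1) [1, 2, 4, 8, 16]
-- 1.00, 1.82, 3.08, 4.71, 6.40: 10% serial work caps 8 cores at ~4.7x
```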

Re:An even longer way (1)

cheekyboy (598084) | more than 2 years ago | (#39609825)

So how much data? 100GB? 200GB?

Time to get a new server with at least 64 gigs of RAM, and use 4 x 512GB Intel SSDs.

Yes, it's a little more money: $320 worth of RAM + good mb $1000 + $4000 in SSDs.

Give Linux 1500GB of swap and let the OS handle caching.

Do you ever need more than 2 days of data in RAM at the same time?

Re:An even longer way (0)

Anonymous Coward | more than 2 years ago | (#39611077)

Running 8 threads on a parallel task doesn't really result in 8x more processing power. A good solid non-blocking strategy across multiple threads results in exponential gains, but it's rare to find software that's like that.

Wow, I sure would like some of that. You do know what exponential means, right? :-P

Re:An even longer way (1)

dgharmon (2564621) | more than 2 years ago | (#39607461)

"Processors have come an even longer way wince the days when main memory was on a magnetic drum, and the machine had to wait for the drum to revolve before it could fetch the next instruction. That was the first machine I used.", hendrikboom

If I recall correctly the drums used capacitors to store the data, very early dynamic RAM ...

Re:An even longer way (1)

hairyfeet (841228) | more than 2 years ago | (#39607813)

Damn and I thought I was old because my first three machines were an Altair, followed by a Trash 80 and a VIC.

You know what I think made as much if not more of an impact on modern computers? RAM. I remember when a 1MB RAM chip would cost you more than a car, and I don't give a crap how powerful your CPU is: if you are always waiting on some slow media, be it drum or tape or HDD, to feed the chip, then it's gonna be hamstrung by the slower media. Now even my $350 netbook has 8GB of RAM simply because it's so cheap, and the smallest computer used by the family on a daily basis is 4GB.

Sure, having multicores is nice (my kids just love the new AMD hexa and quad I built them because they never slow down no matter how much crap they throw at them) but what good are they if you can't keep the chips fed? Between the huge memory on even low-end systems and GP-GPUs allowing us to send certain tasks to the even more insanely fast RAM on the GPU, I'd say the price of RAM really changed everything more than the chips did. After all, we made leaps and bounds in the 80s but it wasn't nearly as obvious simply because it was hard for programs to take advantage when they were RAM starved. Now with plenty of RAM to buffer, even my dinky little AMD netbook plays HD video and even lets me play some L4D or GTA:VC, simply because the RAM is there. Can you imagine trying to do even 1/10th of what we do now with 16MB of RAM?

Re:An even longer way (0)

Anonymous Coward | more than 2 years ago | (#39607917)

Luxury!

Re:An even longer way (1)

Anonymous Coward | more than 2 years ago | (#39607927)

The first random-access digital storage device was the Williams tube: http://en.wikipedia.org/wiki/Williams_tube

Re:An even longer way (1)

hendrikboom (1001110) | more than 2 years ago | (#39608595)

I'm not old enough to remember those.

Re:An even longer way (0)

Anonymous Coward | more than 2 years ago | (#39612313)

Not really complete. The 6502 is absent. That was the CPU chip in the original Apple ][. I didn't find the 8008. Seems history isn't worth much these days.

They've come a long way... (5, Funny)

Anonymous Coward | more than 2 years ago | (#39606803)

... but apparently haven't improved enough to survive a beatdown from /.

Wow - is it just me (2)

MerlynEmrys67 (583469) | more than 2 years ago | (#39606809)

Or is it slashdotted already? You would think Stanford would have better infrastructure.

Re:Wow - is it just me (1)

Anonymous Coward | more than 2 years ago | (#39606831)

Or is it slashdotted already? You would think Stanford would have better infrastructure.

It's not just you. Slashdot hits again.

Re:Wow - is it just me (3, Funny)

Anonymous Coward | more than 2 years ago | (#39606835)

It's an Erlang webserver running on a 4004, give it time.

Re:Wow - is it just me (1)

tapspace (2368622) | more than 2 years ago | (#39609693)

It's just some box in some grad student's office somewhere

740 kelvin-hertz? (-1)

Anonymous Coward | more than 2 years ago | (#39606817)

Early processors were so shitty that back then you measured how much they warmed up per second.

Seems rather limited to Intel. (5, Insightful)

Anonymous Coward | more than 2 years ago | (#39606819)

Processors did exist before Intel. IBM, Sperry, Amdahl, Burroughs, DEC, Honeywell...

And the speed improvement there paved the way for Intel.

For an "IMDB" of processors, it really needs to include others - ARM, AMD (though that might be covered by the Intel entries) - and still others exist. The DSP processors are also significant as many improvements there migrated to other implementations.

Re:Seems rather limited to Intel. (4, Informative)

mccalli (323026) | more than 2 years ago | (#39606897)

It's slashdotted so I can't tell, but any definitive database really needs MOS and Zilog in there as well. The home and micro computer revolution depended on them, Zilog's Z80 and MOS's 6502.

Cheers,
Ian

Re:Seems rather limited to Intel. (2, Informative)

nurb432 (527695) | more than 2 years ago | (#39606983)

Same problem here.. can't see it..

And especially don't forget Motorola, or IBM, DEC, Sun, RCA, Intersil, TI, MIPS, etc etc... Even within the Intel camp I'd hope they branch into architectures other than x86, like the i432 and i960, for example.

Re:Seems rather limited to Intel. (1)

larry bagina (561269) | more than 2 years ago | (#39607321)

I read about it a couple days ago (in ACM) and there was no MOS as of then. There was the Motorola 6800, though.

6800, 6809 (1)

Anonymous Coward | more than 2 years ago | (#39607363)

I've written some programs for the 6800, but I worked for years programming the 6809. Great CPU, good instruction set, easy to do the hardware interfacing.

Perhaps someone will write a 6809 emulator on a PIC or Atmega microcontroller. That would be fun to play with!

Re:Seems rather limited to Intel. (2)

hairyfeet (841228) | more than 2 years ago | (#39609911)

What amazes me is how long some of these chips have lasted. There are STILL variants on the Z80 in production and Intel only quit making the 386 in 2009. The Z80s are used a lot in little MP3 players, and I heard the 386 was popular for military applications as an embedded CPU. I don't think anybody still uses the MOS 6502 though.

It just goes to show that if a design is well made it can still find uses even after all these years. I wonder if in 30 years AMD will still have some division making Bobcats for embedded devices or Intel will have someone cranking out CULV Core2Duos for some industrial design.

Re:Seems rather limited to Intel. (1)

Crosshair84 (2598247) | more than 2 years ago | (#39612389)

Very true. The phone line controllers we use at work use 8088 processors. The whole controller requires no active cooling and is incredibly reliable. The "newer" controllers are built far more compact with more processing power, require active cooling, and have been incredibly unreliable; we've been removing them from service.

We also still have customers running 286 based voice mail systems because they are so reliable.

In many applications reliability trumps speed. Not to mention that you have a design that is already paid for and can be built on otherwise obsolete chip fabbing equipment OR can be built on newer equipment with die shrinks for better power consumption.

Re:Seems rather limited to Intel. (1)

Relayman (1068986) | more than 2 years ago | (#39610149)

Back up now. Zilog is showing the Z80 but MOS is not there.

Re:Seems rather limited to Intel. (1)

rubycodez (864176) | more than 2 years ago | (#39610355)

don't worry, it'll be used in the future, MOS Technology 6502 instructions were shown scrolling down the view of the Series 800 Model T 101.

Re:Seems rather limited to Intel. (1)

zmollusc (763634) | more than 2 years ago | (#39611891)

Don't model #22 bending units use a 6502 processor?

Re:Seems rather limited to Intel. (2)

shokk (187512) | more than 2 years ago | (#39606907)

What part of "the public is encouraged to contribute" did you not get?

Re:Seems rather limited to Intel. (1)

Amouth (879122) | more than 2 years ago | (#39606913)

While I will agree with you that it should include more than Intel, it is a CPU database and I doubt DSPs would be included, due to the fact that they are, well, not CPUs..

But on the other hand the same type of setup for common types of ICs would be useful.

Re:Seems rather limited to Intel. (0)

Anonymous Coward | more than 2 years ago | (#39607069)

DSPs certainly are CPUs - they're CPUs that are optimised for digital signal processing.

Re:Seems rather limited to Intel. (2)

njahnke (757694) | more than 2 years ago | (#39606943)

The wait is quite long at the moment (the site appears to be slashdotted) but the selection of manufacturers is excellent. I saw AMD, HP, Zilog, Sun, DEC ...

Re:Seems rather limited to Intel. (1, Troll)

marcosdumay (620877) | more than 2 years ago | (#39607623)

No, processors didn't exist before IBM. Unless you are counting mechanical ones.

Re:Seems rather limited to Intel. (0)

Anonymous Coward | more than 2 years ago | (#39607827)

The article is about *micro*processors. Intel's 4004 is, AFAIK, the first such.

Re:Seems rather limited to Intel. (1)

Eil (82413) | more than 2 years ago | (#39609183)

Processors did exist before Intel. IBM, Sperry, Amdahl, Burroughs, DEC, Honeywell...

Yes, but they were (usually) designed for only one specific model of computer. And they weren't single-chip microprocessors. This CPU DB seems to be for the latter.

for an "IMDB" of processors, it really needs to include others - ARM, AMD (though that might be covered by the Intel) and still others exist.

I don't see ARM in here (probably because they don't mass produce their designs, they license them to other chip makers), but there is a section for AMD.

Improving software (2)

ISoldat53 (977164) | more than 2 years ago | (#39606833)

Nothing improves software performance like new hardware.

Re:Improving software (3, Insightful)

Anonymous Coward | more than 2 years ago | (#39606859)

Software 'Improves' to use up all that performance gain BEFORE the next CPU Improvement appears.

i.e. Software Bloat uses up all the gains at a quicker rate than the H/W can give it.

Re:Improving software (4, Informative)

mikael (484) | more than 2 years ago | (#39606895)

It's deliberate. There was a podcast interview with some Microsoft engineers regarding future COM enhancements. They were waiting for the hardware to get faster and for memory to increase just so they could give every class member its own lock.

Re:Improving software (1)

symbolset (646467) | more than 2 years ago | (#39608329)

Intel giveth, Microsoft taketh away.

Much needed cataloging (4, Insightful)

Skywings (943119) | more than 2 years ago | (#39606853)

Technology, as it is today, is all too fleeting. New technology is being pushed out at an ever increasing rate, with the new products quickly supplanting the old. The old is then quickly forgotten. I applaud the effort of this group in its work to keep a living record of the heart of the machines that have been at the core of most of our lives for almost half a century.

On a slightly note, I believe we need better cataloging of technology in general, as many old files are effectively being lost because the technology required to read them no longer exists. Of course this raises further questions of how to maintain such cataloging as the cataloging infrastructure ages, so that the data doesn't get lost. Oh what a vicious cycle it is.

Re:Much needed cataloging (0)

Anonymous Coward | more than 2 years ago | (#39607083)

On a slightly note, I believe we need better cataloging of technology in general [...]

I wonder what word you out.

Re:Much needed cataloging (1)

Skywings (943119) | more than 2 years ago | (#39608055)

I must confess, I never proof read my own writing and often I will think faster than I can type. The result is that I occasionally drop a word or two. What I meant to say there was "On a slightly different note." But I don't think the loss of that one word changes the meaning of the sentence a whole lot as that phrase was a bit superfluous anyway.

Re:Much needed cataloging (1)

LinuxIsGarbage (1658307) | more than 2 years ago | (#39607329)

Technology, as it is today, is all too fleeting. New technology is being pushed out at an ever increasing rate, with the new products quickly supplanting the old. The old is then quickly forgotten. I applaud the effort of this group in its work to keep a living record of the heart of the machines that have been at the core of most of our lives for almost half a century.

In some ways, yes, in other ways no. A PC I bought in 1991 was horribly obsolete by 1995. A PC I bought in 2003 is still useful today, running Windows 7, etc. A PC I bought in 2006 is still useful today, without doing any upgrades. Crysis? No. But they're still more than adequate for surfing, word processing, Youtube, etc.

What I'm amazed at is the increase in complexity. In the 80s you'd see systems designed by one or two people (I'm thinking Woz and the Apple I and Apple II). Now you're seeing new systems (hardware and software) rolled out in short timeframes with thousands of people working on them, where no one person actually knows how the whole thing works.

On a slightly note, I believe we need better cataloging of technology in general, as many old files are effectively being lost because the technology required to read them no longer exists. Of course this raises further questions of how to maintain such cataloging as the cataloging infrastructure ages, so that the data doesn't get lost. Oh what a vicious cycle it is.

Through the course of history, you've never been able to save everything.

Re:Much needed cataloging (1)

RobbieThe1st (1977364) | more than 2 years ago | (#39607491)

Of course... Thanks to open-source tools that support massive lists of file-types, I don't see this happening a lot. I'm sure there's some specific proprietary files that require one specific version of a program to read... But, thanks to emulators, even that isn't a problem so long as you can find /some/ commonality between emulated system and host for getting the data off.

Re:Much needed cataloging (1)

LinuxIsGarbage (1658307) | more than 2 years ago | (#39609239)

I was thinking of my proprietary closed-source systems. Anything from the past 12-14 years can run Windows XP, and things from the past 8 years can run Windows 7.

Re:Much needed cataloging (0)

Anonymous Coward | more than 2 years ago | (#39612335)

and things from the past 8 years can run Windows 7.

Total bollocks. Loads of machines from 3+ years ago do not have essential drivers that work on Win7. Having enough RAM and CPU speed is not much use if there are no drivers.

But of course, 8 year old PCs will typically run very well with current versions of Linux, with a choice of classic desktops (CentOS, Gnome 2, 7 years support, or Mint 14, Cinnamon/MATE, 5 years support) or, if you prefer something 'modern', there's Unity (Ubuntu 12.04, 5 years support).

so the 'liberal art' of library science (1)

decora (1710862) | more than 2 years ago | (#39607339)

is not to be shat upon after all? Because I thought to be a 'real IT guy' I had to make "witty" comments about "you want fries with that" directed at anyone who had a degree that did not come from the college of engineering.

Re:Much needed cataloging (1)

hairyfeet (841228) | more than 2 years ago | (#39609985)

Which is why we need a "use it or lose it" clause in copyrights and patents. Anybody who likes classic PC games can tell you we are about to lose an entire generation of games, as more and more of the Win9X-era games become unplayable. It's sad that I can play the DOS games all day long, but more and more of the Win9X-era games are becoming completely unplayable due to the fact they used hacks to get more performance and/or old DRM crap that no longer functions.

What we need is someone to do for Win9X what the DOSBox guys did for DOS: give us an emulator that can recreate what would be a perfect gaming PC of the time, something like a 1GHz P3 with a GeForce 4 or something similar. Kinda sad that so many games I enjoyed from that era are gone now; it doesn't matter how much CPU I have, as even with a VM they just won't run correctly.

But I think you are wrong on the tech being fleeting, at least in x86. The whole reason why MSFT is about to throw Windows under a bus with Win 8 to try to get into the ARM-based phone market is the simple fact that PCs have become "good enough" for what the masses do with them, and have been good enough for several years now. What is the point of replacing that Athlon X2 or Pentium D if all they do is FB, webmail, and YouTube? Frankly the only reason a lot of my customers upgraded at all was because XP is getting close to EOL and they decided they might as well have me build them a new one rather than just buy Win 7.

I'm guessing within 5 years, maybe less, we'll see the same thing happen with ARM, as the chips get "good enough" for the masses and it makes less and less sense to upgrade. Hell, I bet that many wouldn't be upgrading now if the networks didn't offer smartphones like free candy to get them to sign. Let the rare earth metals go up so the networks won't eat the costs anymore and I bet a lot of folks will just stay with what they have.

While I'm sure the chip manufacturers will keep coming out with new designs, I see a day rapidly coming where the only time you replace is when the previous one dies. There just haven't been any "killer apps" that would force a consumer-wide upgrade in quite a while, and considering how much toxic material these things end up putting in landfills, I have to say I'm looking forward to that day myself.

Improvements in debugging? (2)

sillivalley (411349) | more than 2 years ago | (#39607021)

Hardware is much better, but software?

We're still using

print "I got to #1" \ print "I got to #2" for debugging!

Re:Improvements in debugging? (1)

Surt (22457) | more than 2 years ago | (#39607065)

Inside of a conditional breakpoint set in a GUI debugger, of course.

Re:Improvements in debugging? (1)

WhatAreYouDoingHere (2458602) | more than 2 years ago | (#39608301)

We're still using print "I got to #1" \ print "I got to #2" for debugging!

If those are your actual debug code instructions, your problems may run deeper than you think.

:)

Re:Improvements in debugging? (0)

Anonymous Coward | more than 2 years ago | (#39608471)

You assume he has a debugger to run.

In many embedded platforms there is no way to debug. None, 0, zip, nada, zilch. When I first started doing embedded programming I remarked to a fellow programmer, "I feel like Spock and I am in 1945 working with bearskins and stone knives"....

Why 740? (0)

TeknoHog (164938) | more than 2 years ago | (#39607409)

640 kHz should be enough for everyone. Besides, one of my favourite data-processing chips is the 741, so it must be one better.

Slashdotted. (2)

sidthegeek (626567) | more than 2 years ago | (#39607717)

Whatever CPUs they were using are melted now.

Turns out RISC is optimal (1, Interesting)

unixisc (2429386) | more than 2 years ago | (#39607737)

Well, CPUs started off mainly as CISC, and after realizing that not all modes of operation are really needed if higher-level languages are used, designers migrated to RISC. In RISC, as parallelism concepts kept gaining mileage, they tried dumping more of the functionality onto the compiler in the form of VLIW and EPIC architectures, but the ROI was simply not there, as Itanic showed. The tragedy of the Itanic's introduction was that it saw to the demise of far superior and well established CPUs, such as PA-RISC and Alpha: yet in terms of market acceptance, the only OSs that embraced it were HP/UX, FreeBSD and Debian Linux.

Also, once concepts like multiple threading and parallelism - long there in Unixes from Solaris to Dynix - started taking hold in NT based OSs like XP and beyond, turned out that even better than VLIW was multiprocessing, or dumping more cores @ that problem. Actually, even that solution shows diminishing returns after 4 CPUs - you can keep throwing cores @ it, but the performance improvements will be minimal. Ideal solution is to have as RISC-like a CPU as possible, and then have 4 cores of it in a CPU set-up, and one is off to the races.

Right now, x86 still has to support 32-bit modes, but once it's no longer needed, x64 will be a purely RISC CPU. At which point, performance improvements will undergo a quantum leap. Of course, for general purpose usage, today's processors are more than adequate, so what might happen is that it would be an opportunity to provide the same performance w/ lower power consumption.

Re:Turns out RISC is optimal (0)

Anonymous Coward | more than 2 years ago | (#39608159)

Purely RISC would be bad. RISC is great for deterministic systems, but strict RISC does not allow for SIMD or Fused operations, which are a requirement to hardware accelerate common expensive operations.

Re:Turns out RISC is optimal (4, Funny)

TeknoHog (164938) | more than 2 years ago | (#39608275)

Actually, even that solution shows diminishing returns after 4 CPUs - you can keep throwing cores @ it, but the performance improvements will be minimal. Ideal solution is to have as RISC-like a CPU as possible, and then have 4 cores of it in a CPU set-up, and one is off to the races.

Right now, x86 still has to support 32-bit modes, but once it's no longer needed, x64 will be a purely RISC CPU.

Can I have some of what you're smoking? Also, your usage of "@" is so very cyber, in a 90s way.

(I have an x86-64 machine with 4 CPUs running in 64-bit mode, meaning its ISA has magically changed from CISC to RISC. However, I'm posting this on a PowerPC running Gentoo, so as not to contaminate the message with any CISCy remains.)

Re:Turns out RISC is optimal (1)

unixisc (2429386) | more than 2 years ago | (#39610267)

As long as x86 CPUs continue to support 32-bit, they'll remain CISC, since there are instructions there that require multiple clock cycles to complete. However, if a CPU came out that no longer supported those modes, then the new 64-bit only CPU would be RISC.

I didn't get what you're trying to argue - that it won't be RISC, or that performance will keep increasing linearly or exponentially as one keeps throwing cores at it? (While I don't use sms abbreviations, I do make use of things like '&', '@' since those are by no means indecipherable)

Re:Turns out RISC is optimal (1)

TeknoHog (164938) | more than 2 years ago | (#39610995)

As long as x86 CPUs continue to support 32-bit, they'll remain CISC, since there are instructions there that require multiple clock cycles to complete. However, if a CPU came out that no longer supported those modes, then the new 64-bit only CPU would be RISC.

x86-64 is CISC, not RISC. The processors are internally RISC, but the x86 instruction set is Complex. In fact it seems that x86-64 is even more CISC, because they keep inventing new specialized instructions that were not there in any 32-bit x86 CPU.

Perhaps you are confusing this with the fact that there are other 64-bit architectures, most of which are RISC?

I didn't get what you're trying to argue - that it won't be RISC, or that performance will keep increasing linearly or exponentially as one keeps throwing cores at it? (While I don't use sms abbreviations, I do make use of things like '&', '@' since those are by no means indecipherable)

Scaling for a bigger number of cores depends entirely on the application you're running. You cannot generalize these things. I think I know what I'm talking about, having written and run software on a cluster at CERN and the Finnish supercomputing center.

(Sorry about the @ remark, that was uncalled for -- but your post did seem a little like a troll.)

Re:Turns out RISC is optimal (1)

unixisc (2429386) | more than 2 years ago | (#39611105)

From what I understand, all the 64-bit mode instructions complete in a single cycle, not multiple. Previously, I too was under the impression that they were CISC, but looks like when AMD extended the instruction set, they didn't make the instructions that take multiple clock cycles 64-bit. In fact, the term RISC, which stands for Reduced Instruction Set, is somewhat misleading given that many RISC architectures - particularly PPC - have a huge number of instructions. All it is is that every instruction has to complete within a single clock cycle, so that a program counter wouldn't need to have circuitry encoding the number of cycles needed for each type of instruction.

I accept your other point - that scaling for a higher #cores (that would be number of cores) would depend on the application.

Re:Turns out RISC is optimal (0)

Anonymous Coward | more than 2 years ago | (#39611645)

Uh what? Did you get stuck in the 60s?
Almost nothing executes in a single cycle (pipelining is used extensively, otherwise you don't get anywhere close to GHz speeds, so if you used that criterion not even the high-end MIPS and ARM implementations would be RISC anymore), but anyway that is only a property of the physical implementation, not part of the instruction set, and not exposed to the programmer - not in any current instruction sets anyway.
And there are instructions (like changing the rounding mode for SSE instructions) that take around 200 cycles on Intel's first 64-bit CPUs.
AMD probably did get rid of some of the "Vector Path" instructions since those were the kind nobody really used, but I think AMD doesn't use the "Direct Path" and "Vector Path" style implementation anymore anyway.

Re:Turns out RISC is optimal (1)

sconeu (64226) | more than 2 years ago | (#39610189)

NonStop Guardian also embraced Itanium.

Re:Turns out RISC is optimal (0)

unixisc (2429386) | more than 2 years ago | (#39610277)

I actually wasn't counting dead architectures, like NonStop or VMS. Any legacy users still on those systems should have remained on MIPS/Alpha, and then worked out a migration plan to Windows Server, IBM OSs or Unix. It doesn't make sense for new buyers of Itanium - in case they exist - to embrace the above; they'd be better off w/ HP/UX, FreeBSD or Debian.

contributors out there? (0)

Anonymous Coward | more than 2 years ago | (#39608081)

One thing that is painfully missing from the CPU DB is accurate energy measurements while running benchmarks. If anyone has a vintage machine, they could do the following: compile and run SPEC and measure the energy consumption (a plot of power vs. time will suffice here). Obviously you need to measure power to the CPU, not wall power.

The point is to measure both raw performance and energy efficiency, given that in today's processes power is the performance-limiting constraint.

Further notes:
The CPUdb mainly captures the CMOS era (386 to present day). It does have a variety of manufacturers -- not just Intel. Intel happens to be very prolific, especially coming through the 2000s.
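For anyone taking the AC up on this, turning a power-vs.-time trace into an energy number is a one-liner; a sketch with invented sample data:

```haskell
-- Integrate sampled power (watts) over time (seconds) with the
-- trapezoidal rule to estimate energy in joules.
energyJoules :: [(Double, Double)] -> Double   -- [(time, power)] samples
energyJoules samples =
  sum [ (t2 - t1) * (p1 + p2) / 2
      | ((t1, p1), (t2, p2)) <- zip samples (tail samples) ]

main :: IO ()
main = print (energyJoules [(0, 50), (1, 65), (2, 60), (3, 55)])  -- 177.5 J
```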

Re:contributors out there? (1)

FrankSchwab (675585) | more than 2 years ago | (#39608299)

Obviously need to measure power to the CPU, not wall power.

That'll be quite difficult on the older, non-microprocessor based machines (say, the 1980-vintage minicomputers) where the "CPU" was a 6U or 7U drawer in a rack. /frank

Why not partner rather than reinvent from scratch? (3, Informative)

macraig (621737) | more than 2 years ago | (#39608245)

I can't even look at what Stanford is trying to do right now, but there have existed for years at least two online CPU "museums" that serve this goal. The one that springs most readily to mind - the one I've used most myself - is CPU-World [cpu-world.com]. It has extensive coverage of all the major CPU lineages, including photos submitted by users, and even includes some non-CPU silicon. It seems to be largely the creation of one guy, Gennadiy Shvets, with eager collaboration from a lot of fellow enthusiasts, and there seems to be no profit motive to the site that I've ever noticed. He even thanks the most prolific contributors by name.

WHY would Stanford feel it was necessary to "divide and conquer" this enthusiasm by creating an entirely new site and museum, rather than focusing the collective interest by contributing to or partnering with the one(s) that have already existed for many years? On the face of it this effort looks like either ignorance or pointless competition.

Re:Why not partner rather than reinvent from scrat (1)

Relayman (1068986) | more than 2 years ago | (#39610181)

I couldn't find any reference to IBM processors in the CPU World database even though IBM has been a major player for many years.

Re:Why not partner rather than reinvent from scrat (1)

macraig (621737) | more than 2 years ago | (#39610561)

The guy who founded that site and the others who contribute are hobbyists and collectors, meaning that they have available to share what has interested them personally. If you're an aficionado of IBM processors - and apparently you'd be in a minority - then contribute what you have and know. That's the way the process works to the benefit of everyone. Perhaps others will see what you contribute and also become fascinated with IBM's processors.

Re:Why not partner rather than reinvent from scrat (1)

fatphil (181876) | more than 2 years ago | (#39611439)

I wasn't very impressed with CPU-World's coverage of the POWER architecture. Or Alpha. Or HP-PA. Or Sparc. Or MIPS. Or ...

Just because someone's done a good job with the x86 architecture, doesn't mean they've done much more than scratch the surface.

Re:Why not partner rather than reinvent from scrat (1)

macraig (621737) | more than 2 years ago | (#39611463)

Cut and paste from my response to the other get-off-my-lawn griper:

The guy who founded that site and the others who contribute are hobbyists and collectors, meaning that they have available to share what has interested them personally. If you're an aficionado of IBM processors - and apparently you'd be in a minority - then contribute what you have and know. That's the way the process works to the benefit of everyone. Perhaps others will see what you contribute and also become fascinated with IBM's processors.

No 80186? (0)

Anonymous Coward | more than 2 years ago | (#39608337)

I don't see the 80186 [wikipedia.org] listed, even though it was in production all the way up to 2007.

storage (0)

Anonymous Coward | more than 2 years ago | (#39608351)

I'd like to see the same for hard drives.

Should be useful (1)

gman003 (1693318) | more than 2 years ago | (#39608495)

For a rather weird video game I'm making (will post it on /. as soon as it's ready), I compiled a list of literally thousands of processors. As of now, I believe I have every x86 processor (Intel, AMD, Via, Centaur, NSC, etc., from the 8086 to Ivy Bridge), every Itanium, quite a few POWER and SPARC chips and a handful of MIPS.

I'd like to contribute it - it has some factual errors, such as where I couldn't find actual prices and had to guess, and it has some less relevant stuff, like what integrated GPU, if any, it has. But hey, it's already several thousand processors, that's got to be a good start.

And, if this CPU database starts growing, I'm excited because it will make adding the *rest* of the processors easier. ARM in particular is hard to find full, definitive lists of, because it's a licensed architecture rather than one fabricated by a single company.

No 6502? (1)

atari2600a (1892574) | more than 2 years ago | (#39609417)

What the fuck are they smoking!?

Cool idea but a little barren. (1)

ogdenk (712300) | more than 2 years ago | (#39609783)

Seems a little barren at the moment. I can see several important microprocessors missing from the early days that would be fun to compare.....

MOS/WDC 6502/65C02/65816 - How could they *NOT* have a freaking 6502 in there?! Pretty sure the 6502 outsold the 8080!

MicroPDP-11 - J11?

MicroVAX - CVAX, NVAX, PVAX, etc

RCA 1802 - Still a couple floating around a few million miles away. Probably still working.

At least Alpha and SPARC are in there. This is definitely a cool effort. Will likely end up pretty complete one day.

Missing the Fastest Microprocessor! (1)

BBCWatcher (900486) | more than 2 years ago | (#39610005)

None of the IBM z/Architecture microprocessors (or their ESA/390 and prior predecessors) are listed yet. So Stanford is only missing the highest clock speed CPU ever created in the entire history of computing to date -- the IBM z196 [ibm.com] microprocessor. Which seems like a rather serious and obvious omission. Also a bit insulting, since IBM has been announcing their new z/Architecture microprocessor breakthroughs exclusively first at Stanford's own "Hot Chips" conference for several years now. (Ooops.)

Re:Missing the Fastest Microprocessor! (2)

Relayman (1068986) | more than 2 years ago | (#39610199)

I think if you dig you will find that the microprocessors in the z196 are Power7 processors, similar if not exactly the same as are used in the Power Systems. You are right in that they have possibly the highest clock speeds. Of course, Power7 processors are included in the Stanford database.

Can't the AMD Geode... (0)

Anonymous Coward | more than 2 years ago | (#39610485)

does it have another name?

MIPS & SGI (2)

unixisc (2429386) | more than 2 years ago | (#39610521)

One thing that struck me - within the MIPS family of processors, everything up to the R8000 was listed under MIPS, while the R12000 and R14000 were listed under SGI. No mention of the R10000.

That strikes me as curious. Did SGI keep the R1x000 CPUs to itself when it spun off MIPS? Because when MIPS/SGI switched from the superpipelined R4x00 to the superscalar R8000 & R10000, MIPS was very much a part of SGI. Only, AFAIK, the R8000 and R1x000 never got used outside SGI. So I'd think that in the MIPS family, everything up to the R5000 would be with MIPS, and the ones above it are now dead, since SGI itself switched from R1x000/Irix to Itanium/Linux.

Numbers (1)

SuperRoach (1692180) | more than 2 years ago | (#39611751)

Instead of relying on personal experience, try looking up your CPU from history here: http://www.cpubenchmark.net/cpu_list.php [cpubenchmark.net] The CPUs date back to a Celeron 600MHz (probably one of those Slot A types) up to current. Using the numbers from that link also, that CPU gets a "passmark" of 103. A current i7 3930k gets 13567. In other words, it is over 131 times faster.