
The CPU Redefined: AMD Torrenza and Intel CSI

Hemos posted more than 7 years ago | from the i-believe-the-children-are-our-future dept.


janp writes "In the near future the Central Processing Unit (CPU) will not be as central anymore. AMD has announced the Torrenza platform that revives the concept of co-processors. Intel is also taking steps in this direction with the announcement of the CSI. With these technologies in the future we can put special chips (GPU's, APU's, etc. etc.) directly on the motherboard in a special socket. Hardware.Info has published a clear introduction to AMD Torrenza and Intel CSI and sneak peaks into the future of processors."


200 comments


huh? (3, Insightful)

mastershake_phd (1050150) | more than 7 years ago | (#18236200)

Weren't the first co-processors FPUs? Aren't they now integrated into the CPU? By having all these things in one chip, they will have much lower latency when communicating with each other. I think all-in-one multi-core chips are the future, if you ask me.

Re:huh? (3, Interesting)

Chrisq (894406) | more than 7 years ago | (#18236258)

I think it has to do with the number of configuration options. Even if technology were able to fabricate one super chip, the best possible GPU and sound processor might be great for some people, but others would be better off with extra general-purpose cores, cache, etc. The flexibility of "mix and match" probably outweighs the advantages of having the separate components on a single chip.

Re:huh? (1)

mastershake_phd (1050150) | more than 7 years ago | (#18236302)

Some people would like to be able to customize (especially sound cards), but mass producing a "super chip" would be more cost effective. You could of course have different versions of "super chips".

Re:huh? (2, Informative)

dosquatch (924618) | more than 7 years ago | (#18236688)

You could of course have different versions of "super chips".

Thereby decreasing their cost effectiveness. 'Tis a vicious circle.

Re:huh? (5, Insightful)

Fordiman (689627) | more than 7 years ago | (#18237138)

But think. There is definitely money in non-upgradable computers - especially in the office desktop market. The cheaper the all-in-one solution, the more often the customer will upgrade the whole shebang.

Example: in my workplace, we have nice-ass Dells which do almost nothing and store all their data on a massive SAN. They're 2.6GHz beasts with a gig of ram, a 160G HD, and a SWEET ATI vid card each. Now, while I personally make use of it all proper-like, most people here could get along with a 1GHZ/512MRAM/16GHD/Onboard video system.

I think Intel/AMD stand to make a lot of money if they were to build an all-in-one-chip computer, i.e. CPU, RAM, Video, Sound, Network, and a generous flash drive on a single chip.

Re:huh? (1)

PrescriptionWarning (932687) | more than 7 years ago | (#18237576)

So long as each of those things can be replaced easily, or if the all-in-one as a whole is itself fairly cheap and easy to replace, then it could be a good idea. The only thing I'd have a problem with is the case where the whole bundle costs 500 bucks or more, and when a single item inside breaks a year later, you have to pay 500 bucks again.

Re:huh? (3, Insightful)

Archangel Michael (180766) | more than 7 years ago | (#18237944)

"most people here could get along with a 1GHZ/512MRAM/16GHD/Onboard video system."

Haven't tried to run Vista yet ... have you.

Re:huh? (0)

drinkypoo (153816) | more than 7 years ago | (#18238072)

Haven't tried to run Vista yet ... have you.

No, and neither has anyone else, so his point still stands.

Re:huh? (4, Interesting)

MrFlibbs (945469) | more than 7 years ago | (#18236318)

The CPUs will still be multi-core. They will also integrate as many features as makes sense. However, there are limits on how big the die can be and remain feasible for high volume manufacturing. Using an external co-processor is both more flexible and more powerful.

The interesting thing about this whole co-processor approach is that the same interface used to connect multiple CPUs to each other is being opened up for other processing devices. This makes it possible to mix and match cores as desired. For example, you could build a mesh of multi-core CPUs in a more "normal" configuration, or you could mate each CPU with a DSP-like number cruncher and make a special purpose "supercomputer". It will be interesting to see what types of compute beasts will emerge from this.

Re:huh? (4, Insightful)

*weasel (174362) | more than 7 years ago | (#18236756)

However, there are limits on how big the die can be and remain feasible for high volume manufacturing.

The limits aren't such a big deal.
Quad-core processors are already rolling off the lines and user demand for them doesn't really exist.
They could easily throw together a 2xCPU/1xGPU/1xDSP configuration at similar complexity.
And the market would actually care about that chip.

Re:huh? (2, Informative)

sjwaste (780063) | more than 7 years ago | (#18237416)

Intel's quad cores are, but they're actually two Core 2 dies connected together, I believe. "Native" quad core is in the works at AMD and Intel, but is currently not on the consumer market.

Now, if there are other CPUs out there doing native quad core for general-purpose computing, I'm unaware of them and withdraw my claim if so :)

Re:huh? (5, Interesting)

Mr2cents (323101) | more than 7 years ago | (#18236352)

Adapting another quote: "If you want to create a better computer, you'll end up with an Amiga". It's more or less what they're describing here. The Amiga made heavy use of coprocessors back in the day. It could do some quite heavy stuff (well, at the time), while the CPU usage stayed below 10%.

One cool thing I discovered while I was learning to program was that you could make one of the coprocessors interrupt when the electron beam of the monitor was at a certain position. Pretty nifty.
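
For anyone who never saw one, a Copper list was just pairs of 16-bit words: WAIT for a beam position, then MOVE a value into a custom-chip register (or into INTREQ to interrupt the CPU). Something like this, written out as a C array purely for illustration; the offsets are from memory, so treat it as a sketch rather than gospel:

/* Classic Copper list: wait for a scanline, change the background colour,
   then raise a Copper interrupt. Register offsets quoted from memory. */
static unsigned short copper_list[] = {
    0x6401, 0xFFFE,  /* WAIT until the beam reaches scanline 0x64 (100)     */
    0x0180, 0x0F00,  /* MOVE $0F00 (red) into COLOR00: background turns red */
    0x009C, 0x8010,  /* MOVE to INTREQ: set the COPER bit, interrupting the
                        CPU now that the beam is at this position           */
    0xFFFF, 0xFFFE   /* end of list: wait for a position that never arrives */
};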

BTW, for those who are too young/old to remember, those were the days of DOS, and friends of mine were bragging about their 16-color EGA cards. Amiga had 4096 colors at the time.

Re:huh? (1)

clickclickdrone (964164) | more than 7 years ago | (#18236452)

you could make one of the coprocessors interrupt when the electron beam of the monitor was at a certain position
The Atari 800 could do that easily at the scan line level with Display List Interrupts, and somewhat harder with cycle counting at points across the line. And that was 1978 technology...

Re:huh? (1)

badfish99 (826052) | more than 7 years ago | (#18236800)

On the original IBM PC with a CGA adapter, you had to wait until the vertical flyback interval before updating the video memory. Otherwise the hardware couldn't keep up with sending data to the monitor (or something), and the monitor displayed snow.
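
For anyone who wants to see what that looked like, the trick was to poll the CGA status port at 0x3DA and only touch B800:0000 during the retrace. A rough sketch, assuming a DOS-era compiler that provides inp() (details from memory):

#include <conio.h>               /* inp() on old DOS compilers           */

#define CGA_STATUS 0x3DA         /* bit 3 = vertical retrace in progress */

/* Spin until a fresh vertical retrace begins; writes to video memory made
   during this window don't produce snow on a stock CGA card. */
void wait_for_vretrace(void)
{
    while (inp(CGA_STATUS) & 0x08)      /* let any current retrace finish */
        ;
    while (!(inp(CGA_STATUS) & 0x08))   /* then wait for the next one     */
        ;
}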

Re:huh? (1)

tomstdenis (446163) | more than 7 years ago | (#18236974)

Um, you'll see noise on any array display if you write to the memory while it is drawing. It may not always show up as noise; it could show up as unsynced portions of the display (read: looking sucktastic).

Tom

Re:huh? (1)

cHALiTO (101461) | more than 7 years ago | (#18236512)

I agree the Amiga was a great piece of hardware, but the palette was 4096 colors; it could actually use 32 of them simultaneously on screen (at least the Amiga 500; the Amiga 2000 could go up to 4096 colors on-screen in HAM6).
EGA displays could use 16 out of 64 if I'm not mistaken.

Ahh those were the days :)

Re:huh? (3, Informative)

thygrrr (765730) | more than 7 years ago | (#18236918)

Nope, the A500 also had 4096 colours in HAM mode. They were basically the same hardware, except the A2000 had different - and more - expansion slots and was a desktop machine while the A500 was a typical home computer/console kind of thingy.

Re:huh? (5, Interesting)

walt-sjc (145127) | more than 7 years ago | (#18236572)

Ahh - the Amiga. My favorite machine during that era. I got my A1000 the first day it was available. Modern OSes could still learn a lot from that 20-year-old OS. Why oh why are we still using "DOS Compatible" hardware????

Amiga had 4096 colors at the time.

Better put "4096" with a "*" qualifier. You couldn't assign each pixel an exact color - the scheme got you more colors by being able to set a bit that said that the next pixel modifies the previous pixel by "x". In this way, they could get more colors using less memory than traditional X bits per color per pixel schemes (Amiga was a bitplane architecture.)

Anyway, back on topic, I wish that the CPU manufacturers could finally come up with a "generational" standard socket. A well-designed module socket should last as long as an expansion slot standard (ISA, PCI, PCIe) and not change for damn near every model of chip. I should be able to go out and get a 1-, 2-, 4-, or 8-socket motherboard, and stick any CPU / GPU / DSP module into it I want. Can we please finally shitcan the 1980's motherboard designs?

Re:huh? (1)

Ngarrang (1023425) | more than 7 years ago | (#18237640)

Why oh why are we still using "DOS Compatible" hardware????
Because DOS is still commonly used, far more than most would want to believe. Many boot diskettes are still DOS. A lot of manufacturing equipment still uses DOS. In a world where it seems like all computers are multi-GHz, there are embedded computers in the machines that run our world that are not.

Re:huh? (1)

CastrTroy (595695) | more than 7 years ago | (#18237844)

The problem is that there's a lot of other stuff that has to understand the CPU for the computer to work properly. You'd need to be able to snap in different chipsets so that the CPU could actually work. You'd probably want to be able to plug in new RAM too. As processors get faster and faster, they require more pins. No more 80-pin CPUs for us. If they designed a universal CPU slot today, it would either have twice as many pins as we need, or we'd run out of pins within 2 years.

Re:huh? (1)

tmach (886393) | more than 7 years ago | (#18238044)

Sounds like we're going BACK to 1980's motherboard designs. Heck, before the Amiga had this setup there was the good ol' C=64. Can we name the other chips SID and VIC just for old times' sake? Now we just need to put the OS on a flashable chip with its own dedicated swap RAM and we'd really be back to the days of "instant on". That'd be sweet!

Re:huh? (2, Interesting)

rbanffy (584143) | more than 7 years ago | (#18238202)

OK... Let's rephrase that:

Folks with 16-bit PCs were bragging about their 16-out-of-64-color EGA cards and single-tasking OSs when even the simplest of the Amigas had 32-bit processors, 32 out of 4096 colors, PCM audio and a fully multi-tasking OS coupled with a GUI.

As for the "processor socket", there are people selling computers that go into passive backplanes. If you put the CPU and memory in a card, there is little reason why you would have to upgrade the rest of the computer when you change the CPU (you would have to scrap the card, anyway, but processors are intimately related to chipsets, so, it is to be expected.

There are some SoC (system on a chip) solutions out there too. Those incorporate the chipset (or most of it) into the CPU, so it would be easier to build a trans-generational socket.

Re:huh? (3, Interesting)

evilviper (135110) | more than 7 years ago | (#18237458)

"If you want to create a better computer, you'll you'll end up with an Amiga". It's more or less what they're describing here.

That's what he's describing, but I don't believe for a second that's what it's going to be...

I don't believe for a second practically ANYONE is going to buy an expensive, multi-socket motherboard, just so they can have higher-speed access to their soundcard... Ditto for a "physics" unit.

This exists solely because CPUs are terrible at the same kinds of calculations ASICs/FPGAs are incredible at. That will be the only killer app here.

Video cards are a good example on their own. CPUs are so bad, and GPUs are so good, that transferring huge amounts of raw data over a slow bus (AGP/PCIe) still puts you far ahead of trying to get the CPU to process it directly. And it works so well, the video card companies are making it easier to write programs to run on the GPU.

And GPUs aren't remotely the only case of this. MPEG capture/compression cards, Crypto cards, etc. have been popular for a very long time, because ASICs are extremely fast with those operations, which are extremely slow on CPUs.
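
The programming model for all of these looks about the same, which is why a standard socket is attractive: copy the data across, kick the device, wait for it to finish. Everything in this sketch (the register layout, the names) is made up just to show the shape of it, not any real card's interface:

#include <stdint.h>
#include <string.h>

struct accel_regs {                  /* hypothetical memory-mapped device */
    volatile uint32_t doorbell;
    volatile uint32_t status;        /* bit 0 = job done                  */
};

void offload_job(struct accel_regs *regs, void *dev_buf,
                 const void *src, size_t len)
{
    memcpy(dev_buf, src, len);       /* 1. push the raw data over the bus  */
    regs->doorbell = 1;              /* 2. kick the ASIC/FPGA              */
    while (!(regs->status & 1))      /* 3. poll for completion (a real     */
        ;                            /*    driver would take an interrupt) */
}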

The situation is much more like x87 math co-processors of years past, than it is like the Amiga, with independent processors for everything.

It is likely that, in time, integrating a popular subset of ASIC functions into the CPU will become practical, and then our high-end video cards will be simple $10 boards, just grabbing the already-processed data sent by the chip, and outputting it to whatever display.

Then maybe AMD and Intel will finally focus on the problem of interrupts...

Re:huh? (4, Interesting)

TheThiefMaster (992038) | more than 7 years ago | (#18236380)

It's a cost and feasibility thing. The original FPUs were separate because they were expensive, not everyone needed them, and it was impractical to integrate them into the cpu because it would make the die too large and result in large numbers of failed chips. They became part of the chip later once the design was refined and scaled down.

The same applies to trying to integrate GPUs into the CPU: at the moment a top-end GPU is too large and expensive to integrate, and not everyone needs one. The move to having a GPU in a CPU socket should cut a lot of cost, because the GPU manufacturers won't have to create an add-in card to go with the GPU; they can just design the chip to plug straight into a standardised socket.

At the same time, low-end GPUs are small and cheap enough that they are being integrated into motherboards, so integrating a basic GPU into the CPU seems like a good next move, and the major CPU manufacturers seem to agree. IIRC Via's smallest boards integrate a basic CPU, northbridge and GPU into one chip? AMD are definitely planning it with their aptly named "Fusion". *Checks Wikipedia* Yeah, Via's is called "CoreFusion".

Still, you are right, all-in-one cpus are the future, we're just not quite there yet.

Re:huh? (1)

Jeff DeMaagd (2015) | more than 7 years ago | (#18236652)

I think all-in-one / system-on-a-chip designs have been around for a long time, but they just weren't popular because that meant a significant performance hit. They may become more common as the performance becomes "good enough" for most common tasks where a desktop or notebook computer would be unnecessary and overpowered. It hasn't been a very popular idea yet, I think in part because the cost difference wasn't much. The next mainstream computer platform just might be a phone though; I understand that a lot of people in SE Asia have been doing this for several years already.

Re:huh? (1)

walt-sjc (145127) | more than 7 years ago | (#18236816)

The next mainstream computer platform just might be a phone though

Smartphones will (IMHO) evolve into a wireless portable computing device that "oh yeah, it can make phone calls too," but the problem is still that the screen is WAY too small, and user input still sucks. Maybe they will finally be able to make LCD-like glasses that really are high-resolution, and maybe they will come up with a neural interface so we can ditch the keyboard / mouse... But I don't see those things being practical within the next 10 years. I also see the CPU speed / memory / storage requirements continuing to increase. We may be able to get everything we will want in a hand-held in my lifetime, but I doubt it.

Re:huh? (0)

Anonymous Coward | more than 7 years ago | (#18237092)

Docking stations may then come back.

They were rather silly with laptops, since those have large screens and keyboards, but if a smartphone could become as good as a standard desktop of today, then having a dock with a screen and keyboard attached would be an ideal model, since one of my biggest complaints with multiple computers is knowing where files are.

I have a phone dock for that already (1)

DrSkwid (118965) | more than 7 years ago | (#18237708)

It's called the LAN.
I turn on the WiFi and my phone is part of the internet.
It serves its flash/ram/rom/sd via the 9P protocol.
My terminal can boot from it if I wanted it to; yours could too, if you were in my authentication server.
I can store encrypted data on it that is useless without TCP access to the dock.

Re:huh? (1)

metalcoat (918779) | more than 7 years ago | (#18236790)

My only question is: will these need more heatsinks or fans? It seems like if you spread it out, only a small heatsink would be required, but I am not an engineer.

Re:huh? (1)

TheThiefMaster (992038) | more than 7 years ago | (#18237260)

The way I see it is that the hybrid cpu+gpu chips would be about the same size and thermal output as a modern dual-core chip, and the gpu-in-a-cpu-socket would be about the same thermal power as a normal cpu for that socket, so would take the same heatsink.

So one is replacing one core of a dual-core cpu with a gpu and the other is replacing one cpu of a dual-cpu machine with a bigger gpu, with little change in power or cooling requirements in either case.

Re: huh? (5, Insightful)

Dolda2000 (759023) | more than 7 years ago | (#18237248)

Still, you are right, all-in-one cpus are the future, we're just not quite there yet.

Actually, no thank you. I've had enough problems ever since they started to integrate more and more peripherals on the motherboard. I'd be troubled if I'd have to choose between either a VMX-less, DDR3-capable chip with the GPU I wanted, a VMX- and DDR3-capable chip with a bad GPU, a VMX-capable but DDR2 chip with a good GPU, or a chip that has all three but an IO-APIC that isn't supported by Linux, or a chip that I could actually use but costs $500.


Instead of gaining those last 10% of performance, I'd prefer a modular architecture, thank you. Whatever is so terribly wrong with PCI-Express anyway?

Re:huh? (2, Informative)

mikael (484) | more than 7 years ago | (#18236722)

Weren't the first co-processors FPUs? Aren't they now integrated into the CPU?

The Intel 8086 had the Intel 8087 [wikipedia.org].
A whole collection of Intel FPUs is at Intel FPUs [cpu-collection.de].

TI's TMS34020 (a programmable 2D rasterisation chip) had the TMS34082 coprocessor (capable of vector/matrix operations).
Some pictures are here [amiga-hardware.com]. Up to four coprocessors could be used.

Now, both of these form the basis of a current day CPU and GPU (vertex/geometry/pixel shader units).

Re:huh? (1)

sconeu (64226) | more than 7 years ago | (#18237776)

Intel also had the 8089, which was a coprocessor for I/O. It's described (along with the 8087) in my vintage July 1981 8086 manual.

Re:huh? (2, Insightful)

Tim C (15259) | more than 7 years ago | (#18236742)

I think all-in-one multi-core chips are the future, if you ask me.

Great, so now instead of spending a couple of hundred to upgrade just my CPU or just my GPU, I'll need to spend four, five, six hundred to upgrade both at once, along with a "S[ound]PU", physics chip, etc?

Never happen. Corporations aren't going to want to have to spend hundreds of pounds more on machines with built-in high-end stuff they don't want or need. At home, I want loads of RAM, processing power and a strong GPU. At work, I absolutely do not require the GPU - anything that can do 1600x1200 @ 32bpp and 60Hz for 2D is perfectly adequate.

Likewise, the chip builders aren't going to want to have to release these all-in-one chips in a myriad of options, for low/medium/high spec CPU/GPU/PPU/SPU/$fooPU, it simply won't be cost-effective.

It's lose-lose imho; you're either stuck buying things you don't want, or have a mind-boggling number of options to choose from (consumers/business) and support (manufacturers/OEMs/IT depts).

Re:huh? (0)

Anonymous Coward | more than 7 years ago | (#18237246)

The x87 FPU architecture is kind of interesting. I don't know what Intel was thinking, exactly; I can see three possibilities: a) 16-bit FP is kind of useless, so make it an extra; b) FP isn't used that often (even on current chips there isn't a ton of FP operations in most cases), so make it an extra; c) I want to have two classes of users, cheap mass users and high-dollar specialty users, and this costly extra will differentiate them.


When Intel finally did integrate the x87, they still had "SX" and "DX" parts, basically the same part at different price points with different artificial limits. Things change, of course, but now we have the x87 FP, MMX, SSE, SSE2 and SSE3 instruction sets, all of which have very limited usefulness in general, so I'm inclined to think Intel doesn't mind useless stuff so much as they want two price points, Core and Xeon (and Itanium, unless it's dead).


It's not a bad idea to have coprocessors, but it's not a great one either. You have to code for them, and basically anything you code to is going to get much more limited use. I see only two markets for them right now, graphics and physics; all the DSP-type stuff can be done much faster and more cheaply with SSE-like instructions. I kind of see graphics petering out in the next 3 years: 3D will reach some state where everybody is more or less satisfied and the quality is more or less the same (similar to 2D acceleration), and at that point just integrating it into the CPU makes a lot of sense. Physics engines, on the other hand, are a much more limited and specialized area. Maybe some kind of pro-grade audio component and maybe some sort of specialized video decompression component, but it's still hard to see it working. Seems to me like another tool to price parts at different price points.
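
And "coding for them" starts with just finding out whether the unit is there at all, which is half the annoyance. A minimal sketch using GCC's <cpuid.h> (leaf 1, EDX bit 26 is the SSE2 flag):

#include <cpuid.h>

/* Returns 1 if the CPU advertises SSE2, 0 otherwise. Every optional unit,
   from the original x87 to whatever physics chip comes next, needs some
   check like this, plus a fallback path for when the answer is no. */
static int has_sse2(void)
{
    unsigned int eax, ebx, ecx, edx;
    if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx))
        return 0;                    /* CPUID leaf 1 not supported */
    return (edx >> 26) & 1;          /* SSE2 feature bit           */
}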

Re:huh? (1)

Ngarrang (1023425) | more than 7 years ago | (#18237476)

What is new is old, and what is old is new. For a while, the mainframe was declared extinct. Now, the mainframe is running Linux and is all the rage again. Same for boards. What is integrated becomes separate, becomes integrated again.

For low-end computers, a board that integrates the CPU, GPU, ATA, etc, makes sense for that segment. But, there is a market segment that wants to be able to upgrade their boards more easily. I would very much prefer a board with a separate GPU socket, if I knew that it was a standard being followed by the dominant video makers. Upgrades would be easier. Power and cooling planning would be simpler.

My vision of the perfect PC has very little integrated onto the board. Give me a board with 8 to 12 PCIe slots. Let ME determine what USB and Firewire will be installed. Let ME determine if my system will have a DB9 serial port or not. Let ME choose SATA or SAS. I do not like having to pay for things I do not need and will not use. For that matter, put my CPU and RAM on a PCIe card, as well. Then, I am not tied to AMD's or Intel's ever-changing socket plans.

HTX (1)

Joe The Dragon (967727) | more than 7 years ago | (#18237676)

HTX slots are better than PCIe ones, and right now AMD can make their desktop CPUs drive two of them; with the 4x4 system you can have two chipset links and two HTX slots.

OMG not again (0)

Anonymous Coward | more than 7 years ago | (#18238090)

Just what everyone needs: another spin-off of CSI.

CSI? (5, Funny)

BigBadBus (653823) | more than 7 years ago | (#18236208)

CSI? De-centralized CPU? Where will they be located; Miami, New York or Las Vegas?

Re:CSI? (0)

Anonymous Coward | more than 7 years ago | (#18236404)

They'll be located in all three locations, of course.

Re:CSI? (1)

BigBadBus (653823) | more than 7 years ago | (#18236456)

Maybe it's the CSI Exclusion Principle ...

Re:CSI? (5, Funny)

91degrees (207121) | more than 7 years ago | (#18236430)

CSI? De-centralized CPU? Where will they be located; Miami, New York or Las Vegas?

Well, clearly, they won't. They're decentralised.

New on NBC, "CSI: Wherever". We even have a song by The Who for the opening credits - "Anyway, Anyhow, Anywhere".

AMD competes with... (5, Funny)

Comboman (895500) | more than 7 years ago | (#18237256)

AMD will compete by releasing "Law & Order: Central Processing Unit".

Re:CSI? (0)

MarkRose (820682) | more than 7 years ago | (#18236438)

Sounds like a Crappy, Stupid Idea to me.

What goes around....... (1)

trancemission (823050) | more than 7 years ago | (#18236210)

Shrugs at the memory of 3 days attempting to install Windows 95 on a 386... finally got there after removing the co-processor. I was young... Happy days...

Previous announcements (3, Informative)

G3ckoG33k (647276) | more than 7 years ago | (#18236224)

The first details emerged half a year ago:


IBM and Intel Corporation, with support from dozens of other companies, have developed a proposal to enhance PCI Express* technology to address the performance requirements of new usage models, such as visualization and extensible markup language (XML).

The proposal, codenamed "Geneseo," outlines enhancements that will enable faster connectivity between the processor -- the computer's brain -- and application accelerators, and improve the range of design options for hardware developers.


http://www.intel.com/pressroom/archive/releases/20060927comp_a.htm [intel.com]

Re:Previous announcements (0)

Anonymous Coward | more than 7 years ago | (#18236394)

Since when is XML a new usage model requiring advances in processor design?

Are they going to redo the X86 instructions in XML? X86 XML ASM? The CTO will luv it!

<asm>
<data>
<db id="msg">
Hello World!
<br />
</db>
<equ id="len">
<value-of select="$msg" />
</equ>
</data>
<text>
<globals>
_start
</globals>
_start
<mov><ebx>0x01</ebx></mov>
<mov><ecx><value-of select="$msg" /></ecx></mov>
<mov><edx><value-of select="$len" /></edx></mov>
<mov><eax>0x04</eax></mov>
<int>0x80</int>

<mov><ebx>0x00</ebx></mov>
<mov><eax>0x01</eax></mov>
<int>0x80</int>
</text>
</asm>

Re:Previous announcements (5, Funny)

badfish99 (826052) | more than 7 years ago | (#18236870)

Since when is XML a new usage model requiring advances in processor design?

Since it became bloatware that is capable of wasting 90% of the processing power of a modern computer.
</sarcasm>

Re:Previous announcements (1)

Fordiman (689627) | more than 7 years ago | (#18237280)

Oh my GOD that needs shot in the face right now! It's a markup language, not a programming language! ...

They should use Javascript instead!

*ducks*

Boring (0)

Anonymous Coward | more than 7 years ago | (#18236250)

Give me 8 core CPUs and then strap on DSP for Audio, Graphics, Physics and AI. While you're at it, do something original and innovative that will impress me.

Re:Boring (1)

andy_t_roo (912592) | more than 7 years ago | (#18236770)

Well, the way games are going you can get the first two easily, the third if the programmers put in some effort to create a good engine, and the fifth if you get lucky, but we now have 15 years of basically non-original gameplay that says the last two probably won't happen (except in the occasional game - one or two in the next few years).

Retro-innovation (5, Informative)

Don_dumb (927108) | more than 7 years ago | (#18236252)

Here spins the Wheel Of Reincarnation http://www.catb.org/~esr/jargon/html/W/wheel-of-reincarnation.html [catb.org] watch how everything comes back and then goes away again and then comes back . . .

Amiga had all processors on the main board (1)

CdXiminez (807199) | more than 7 years ago | (#18236296)

Are we finally getting back to actually complete computers like the Amiga?
It had custom designed processors for sound and video on the motherboard.
And then it was sold together with a fitting OS, so you got computer and software as a complete functioning machine instead of many loose ends in a PC.

Re:Amiga had all processors on the main board (1)

Goaway (82658) | more than 7 years ago | (#18236382)

You want that, get a Mac.

Seriously, I did, and it's feeling just like the old days.

Re:Amiga had all processors on the main board (1)

CdXiminez (807199) | more than 7 years ago | (#18236464)

I did, at home, I'm just a bit frustrated with the PCs at work.

Re:Amiga had all processors on the main board (1)

walt-sjc (145127) | more than 7 years ago | (#18236890)

The problem is that the Macs are all built like standard PCs now. If you replaced the Apple firmware with a standard BIOS, you could boot DOS. The Mac Mini is basically a standard notebook in a different form factor, ditto for the iMac. The Mac Pro is not much different in design from a Dell desktop. Why? Cost. They get to use standard parts / software. There is NOTHING on the market like the integrated design of the Amiga. The Mac has a lot more in common with a 1982 IBM PC than with an Amiga.

Re:Amiga had all processors on the main board (0)

Anonymous Coward | more than 7 years ago | (#18236924)

Or get a 1960s-vintage IBM 360.

Re:Amiga had all processors on the main board (1)

mdwh2 (535323) | more than 7 years ago | (#18237252)

Or get a PC.

Really, if GPUs and sound chips are sufficient for a comparison to the Amiga's chipset, then PCs have been doing that for at least as long as Macs.

It's not clear to me why this article is about something more Amiga-like than what modern computers already have (especially since GPUs are fully programmable). The difference with this news is that the chips can be put on the motherboard via a standard socket - but it was never the case with the Amiga that you could plug in whatever chips you wanted; you just had the entire chipset hardwired to the motherboard, no different to a chipset on a modern PC.

Re:Amiga had all processors on the main board (1)

Yvan256 (722131) | more than 7 years ago | (#18236386)

You mean like modern Macs have become? They have a CPU, a GPU, some audio chip (probably not a DSP but still). And the OS knows how to work with both the CPUs and the GPU.

Re:Amiga had all processors on the main board (1)

CdXiminez (807199) | more than 7 years ago | (#18236494)

I did :-)
I moved from Amiga 1200 to iMac in 1999. Never had a PC in the house (except, perhaps, the bridgeboard on the A2000, which, back then, made me wonder what all the fuss of PCs was about).

Re:Amiga had all processors on the main board (1)

clickclickdrone (964164) | more than 7 years ago | (#18236428)

Heck, this goes back to the Atari 800 series.
All this is really doing is bringing a more standardised set of co-processors onto the mobo rather than any number of 3rd-party ones - it would make it much easier to keep the OS stable if you have a more controlled number of architectures to deal with.
On the downside, if these processors were DRM-hobbled, it would make life harder too.

Re:Retro-innovation (1)

Delifisek (190943) | more than 7 years ago | (#18236358)

Amiga on a chip?
After so many new boards, CPUs, operating systems...
I mean, after spending a hell of a lot of money, they found the Amiga in a CPU?

In corporate America, inventions OWN YOU

Moore's Law (0)

Anonymous Coward | more than 7 years ago | (#18236324)

is/was an over-optimistic/egotistical load of crap

Interesting (2, Interesting)

Aladrin (926209) | more than 7 years ago | (#18236330)

I find the idea of multiple Processing Unit slots on the motherboard that can each take different types of chips to be very interesting. I'm not sure how well it will work, though. The article mentions 5 types that already exist: CPU, GPU, APU, PPU and AIPU. (Okay, the last doesn't exist yet, but a company is working on it.) There are only 4 slots on the motherboard that's shown. I definitely do NOT want to see a situation where the common user is considering ripping out his AIPU for a while and using a PPU, then switching back later. I can only imagine the tech support nightmares that will cause.

So the options are to have more slots, or make something I like to call an 'interface card'. See, there'll be these slots on the motherboard that cards fit into... wait, don't we have this already?

And more slots isn't really an option, because the computer would end up being massive with all the cooling fans and memory slots. (Which are apparently separate for each PU.)

I kind of hope I get proven wrong on this one, but I don't think this is such a great idea. Just very interesting. Having 16 slots and being able to say you want 4 AIPUs, an APU, 4 GPUs, 3 PPUs, and 4 CPUs on my gaming rig and 1 GPU, 1 APU, and 14 CPUs on my work rig would be awesome.

Re:Interesting (2, Interesting)

eddy (18759) | more than 7 years ago | (#18236570)

Maybe if a motherboard featured a very large generic socket to which was attached one cooling solution, it'd work out better. Processing Units, which would be smaller so as to fit as many as possible, would be able to go anywhere in this socket (in a grid-aligned fashion). Easiest solution: the socket is an X*X square grid, and all PUs must be, say, X/2 (or hopefully X/4) squares, which can be arranged in any fashion. Plunk them in, reattach cooling over all of them, boot, and enjoy that 4-CPU, 2-GPU, 2-FPU configuration.

Separate sockets with separate cooling, which I assume is what we're about to see [more of], is going to get messy. And loud.

Maybe in the future some day we'll have "Tetris Computing", where you have to puzzle over how to fit the PUs optimally into the socket. "Oh, I'd really like that nVAMD GPU eXTReMe 2010, but it's an L-piece, and I really need an S-piece for 'tetris' in my bottom half of the socket..." :-)

Re:Interesting (5, Interesting)

Overzeetop (214511) | more than 7 years ago | (#18236828)

You are correct - sockets are just a reincarnation of slots, but less flexible because you're limited to what you can put on a single chip instead of an entire card.

Perhaps the better thing to do would be better slot designs (not that we need more with all the PCI flavors floating around right now) with integrated, defined cooling channels. If you were to make the card spec with a box design rather than a flat card, you could have a non-connector end mate with a cooling trunk and use a squirrel cage (higher volume, quieter, more efficient) fan to ventilate the cards.

Re:Interesting (1)

walt-sjc (145127) | more than 7 years ago | (#18236952)

So basically have a passive backplane like industrial computers have been doing for YEARS, except that you allow multiple CPU boards. I like it.

Re:Interesting (1)

drinkypoo (153816) | more than 7 years ago | (#18238138)

Perhaps the better thing to do would be better slot designs (not that we need more with all the PCI flavors floating around right now) with integrated, defined cooling channels.

Adding a connector means you will have more noise. Using chips means they both have a shorter path and are electrically better connected.

Most solutions need only two things: a processor and memory. Everything else you see on the card is either there for I/O (video cards have RAMDACs, for example, or whatever the chip that handles digital video is called) or there to interface to the bus. If you make the bus interface work directly with a chip, then you don't need most of that shit anyway.

Re:Interesting (1)

Joe The Dragon (967727) | more than 7 years ago | (#18237876)

look up HTX slots

Unification (1)

IPFreely (47576) | more than 7 years ago | (#18238100)

While there are a wide variety of co-processor options (or at least ideas) right now and few sockets to put them in, I suspect the solution will more likely come in the form of unified co-processors rather than multiple sockets.

Motherboard chipsets are becoming the union of a lot of functionality (disk, Ethernet, sound, USB, PCIe and graphics). Even though you can still get best-of-breed add-in cards for many of these functions, the majority of desktop systems do just fine with what the chipset offers.

These coprocessors will also become unified. AMD and nVidia are already talking about doing physics. In the end, you are likely to get a single processor that does graphics, physics, AI, advanced math, and probably Java, sound and a few other things we have not thought of yet. As a single entity, it takes a single slot. So long as all the functionality is accessible through some standardized interface (DirectX in the MS world, something else for everyone else), then the difference between the competing manufacturers will be about the same as the difference between graphics cards now.

Looks like CSI is continuation of Intel CSA bus (1, Informative)

timecop (16217) | more than 7 years ago | (#18236340)

Intel introduced something called the 'CSA' bus (http://www.intel.com/design/network/events/idf/csa.htm), which had higher bandwidth than PCI and was to be used for "streaming" devices like NICs and such. Making this 'general purpose' and user-accessible was the next logical step. Go Intel!

Amiga? (2, Insightful)

myspys (204685) | more than 7 years ago | (#18236402)

Am I the only one who thought "oh, they're reinventing the Amiga" while reading the summary?

Re:Amiga? (0)

Anonymous Coward | more than 7 years ago | (#18236506)

No, I thought it first and I already filed a patent on it.

Re:Amiga? (1)

DingerX (847589) | more than 7 years ago | (#18236520)

Yeah, you were.

The diehard Amiga fans were thinking, "This would really work well if the bus ran faster than any of the cores."

Re:Amiga? (1)

drinkypoo (153816) | more than 7 years ago | (#18238106)

Am I incorrect in believing that the original Amigas' buses ran at the processor clock rate? At least until PPC accelerators came out anyway?

No (-1)

Anonymous Coward | more than 7 years ago | (#18236552)

No Text

Re:Amiga? (1)

Easy2RememberNick (179395) | more than 7 years ago | (#18236730)

I was thinking about the Amiga too.

Re:Amiga? (1)

oh_my_080980980 (773867) | more than 7 years ago | (#18236970)

YES!!!

It's been done before, and with great success. Too bad it took 23 years for the rest of the industry to catch up!!

Thats great new's! (0, Offtopic)

dave420 (699308) | more than 7 years ago | (#18236420)

Is there an address we can send money to get /. editors a basic grammar textbook? I'm no pro, but that's just ridiculous.

Just plug in a spellchecker Co-Processor (2, Funny)

TrueKonrads (580974) | more than 7 years ago | (#18237202)

Just plug in a spellchecker Co-Processor! I think no ordinary CPU could handle such massive mistakes

Re:Thats great new's! (0, Offtopic)

Goaway (82658) | more than 7 years ago | (#18237282)

Slashdot "editors" do not "edit" the submissions. According to CmdrTaco, this makes Slashdot "more real".

Slashdot could benefit from a co-processor... (4, Funny)

Mad_Rain (674268) | more than 7 years ago | (#18236446)

that revives the concept op co-processors.

Slashdot's computers might benefit from a co-processor, the function of which is to monitor and correct spelling and grammar errors. It would serve like an editor's job, only better, because, you know, it might actually work.

(Bye-bye karma!)

Re:Slashdot could benefit from a co-processor... (1)

FreakyLefty (803946) | more than 7 years ago | (#18236856)

revives the concept op co-processors

It would also make proper use of no ops.
But they're giving us sneak peaks, so who are we to complain?

Amiga v2? (1)

BobLenon (67838) | more than 7 years ago | (#18236480)

How 'bout Agnus, Denise and Paula. :)

Where have you been, (1)

Dareth (47614) | more than 7 years ago | (#18237194)

That is Paula, Randy, and Simon...

I mean, do you ever watch American Idol, hello!

Re:Amiga v2? (1)

fellip_nectar (777092) | more than 7 years ago | (#18237492)

Or even ANTIC, CTIA and POKEY?

EOISNA (2, Insightful)

omega9 (138280) | more than 7 years ago | (#18236546)

Everything old is new again.

Re:EOISNA (1)

jojoba_oil (1071932) | more than 7 years ago | (#18237068)

Does that mean goth is the new emo? Or black is the new pink? O, what a horrible, horrible world.

Amiga anybody? (1)

plebeian (910665) | more than 7 years ago | (#18236640)

It is nice to see PC architecture has finally caught up with Amiga.

Re:Amiga anybody? (2, Funny)

Afecks (899057) | more than 7 years ago | (#18237586)

It is nice to see PC architecture has finally caught up with Amiga.

It's nice to see you've finally caught up with all the people that have made an Amiga comment.

great writing (0)

Anonymous Coward | more than 7 years ago | (#18236680)

The downside of this is that they aren't optimised for a specific task, hence their basically jack's of all trades but masters of none.

In the past the processor was the beating heart of the computer (hence the term Central Processing Unit) but with all the different developments in abovementioned areas the CPU is becoming a less determining factor for overall processing power within the PC.

That's interesting, because everyone in my parents' generation says the "CPU" is the whole box, and the chip from AMD or Intel is the "microprocessor".

CSI (1)

jlebrech (810586) | more than 7 years ago | (#18236684)

Is that the infinite image enhancement chip?

Sneak Peak? (1, Funny)

Anonymous Coward | more than 7 years ago | (#18236708)

Is this located near Stealthy Valley?

Good Idea (1)

wolff000 (447340) | more than 7 years ago | (#18236818)

I like the concept, and yes, I know it's nothing new. I hope this thing takes off; I would love to be able to just snap a single chip into place rather than have to deal with gigantic video cards. Although I suppose we would end up with more heatsinks on the mobo this way, at least my PCI slots wouldn't be so crowded.

As rumored, first adopted by the porn industry (2, Funny)

alta (1263) | more than 7 years ago | (#18237028)

Prepare to see the pornprocessor soon. I'm not going to give a lot of details here, but it's optimized for specific physics, AI and Graphics.

Cell Clusters (3, Interesting)

Doc Ruby (173196) | more than 7 years ago | (#18237072)

How about the Cell uP [wikipedia.org] (first appearing in the PlayStation 3), which embeds a Power core on silicon with a 1.6Tbps token ring connecting up to 8 (more later) "FPUs", extremely fast DSPs. IBM's got 4 of them on a single chip, connected by their "transparent, coherent" bus, a ring of token rings. One Cell can master a slave Cell, and IBM is already debugging 1024-DSP versions, transparently scalable by the compiler or the Power "ringmaster" at runtime.

These little bastards are inherently distributed computing: a microLAN of parallel processors, linkable in a microInternet.

Imagine a Beowulf cluster of those! No, really: a Beowulf cluster of Cells [google.com] .

can someone explain? (1)

ArcSecond (534786) | more than 7 years ago | (#18237108)

Is this the same as a bus-oriented system? I remember spec'ing out systems for a defence contractor back in the 90s, and there were systems designed around "daughter-card" processors, something like a modular mainframe on the cheap. It always seemed to me that a bus-centric system had a lot going for it performance-wise, rather than forcing everything in the computer to synch to the CPU.

AMIGA! (2, Insightful)

elrick_the_brave (160509) | more than 7 years ago | (#18237516)

This sounds vaguely like the Amiga platform of years past (with a fervent following today still)... how innovative to copy someone else!