
Fitting A Linux Box On A PCI Card

timothy posted more than 12 years ago | from the ultradense-to-the-desktop dept.

Hardware 137

An Anonymous Coward writes: "Running on Newsforge/Linux.com is a hardware review where Slashdot's Krow took a couple of OmniCluster's SlotServers and built a cluster configuration inside of a single host computer (and even had DB2 running on one of the cards inside the host). Could something like this be the future of computing, where for additional processing power you just keep adding additional computers inside of a host?"

137 comments

n+1 (-1, Redundant)

Anonymous Coward | more than 12 years ago | (#2516156)

n+1 th post!

Ob Beowulf comment (2)

anticypher (48312) | more than 12 years ago | (#2516161)

Imagine...

It would be cool to have completely separate processors in a box, so that as long as there is power, each card can run on its own. Then you could network them together into a Beowulf cluster, and then make clusters of clusters.

the AC

G4 processor cards (3, Interesting)

Peter Dyck (201979) | more than 12 years ago | (#2516207)

I've been wondering how expensive/difficult it would be to build a multiprocessor computer for computational physics applications based on G4 PowerPC cards [transtech-dsp.com].

I'd just love the idea of having a host PC (or a Beowulf cluster of them ;-) with all the PCI slots filled with G4 7400 boards crushing numbers...

Re:G4 processor cards (0)

Anonymous Coward | more than 12 years ago | (#2516361)

I remember that I saw some REALLY nice G3 (or was it G4?) cards... they were a little too expensive... but really cool.
This crap, then... eh... a National Geode CPU... what a sick joke... my old SPARCstation 4 performs better than that crap...

Re:G4 processor cards (3, Insightful)

statusbar (314703) | more than 12 years ago | (#2516626)

'Crushing numbers' is the right term, as the G4's AltiVec is only single precision.

But it would be cool.

--jeff

Re:Ob Beowulf comment (3, Interesting)

Khalid (31037) | more than 12 years ago | (#2516231)

There was a project a while ago whose aim was to implement a Beowulf cluster of separate StrongARM cards plugged into a box; they even managed to build some prototypes.

http://www.dnaco.net/~kragen/sa-beowulf/

Alas, the project seems to have been dead for some time now.

Hmmm (0, Redundant)

Anonymous Coward | more than 12 years ago | (#2516162)

Imagine a Beo... oh. Hang on.

Too bad... (1)

NoMoreNicksLeft (516230) | more than 12 years ago | (#2516170)

That slots are considered a bad thing nowadays. The trend is to manufacture boards with less expandability, not more. So let's see... SoundBlaster 1024 Ultra, or another CPU board... but not both. Then again, I've never been accused of buying crappy consumer motherboards...

Re:Too bad... (0)

Anonymous Coward | more than 12 years ago | (#2516237)

I propose that an arrogance filter should be developed to work in conjunction with the lameness filter, because there is truly nothing more lame than being an arrogant computer nerd.

Let me get this straight... (-1, Flamebait)

Anonymous Coward | more than 12 years ago | (#2516171)

The worst terrorist attack in recorded history occurred in September, and now we're involved in a WAR against Islam and you people have the gall to be discussing a PCI-card sized Linux box???? My *god*, people, GET SOME PRIORITIES!

The bodies of the thousands of innocent civilians who died (and will die) in these unprecedented events could give a good god damn about a PCI-card sized Linux box, your childish Lego models, your nerf toy guns and whining about the lack of a "fun" workplace, your Everquest/Diablo/D&D fixation, the latest Cowboy Bebop rerun, or any of the other ways you are "getting on with your life" (here's a hint: watching Cowboy Bebop in your jammies and eating a bowl of Shreddies is *not* "getting on with your life"). The souls of the victims are watching in horror as you people squander your finite, precious time on this earth playing video games!

You people disgust me!

Re:Let me get this straight... (0, Troll)

fshalor (133678) | more than 12 years ago | (#2516191)

1. We're NOT in a war against Islam.
2. If we think we're in a war against Islam, then we have lost.
3. We are in a war against Microsoft and almost all that she stands for.
4. This would be a major blow in the final battle of that war.
5. Any "linux" based application/hardware that takes even a miniscule share in any market is worth the time spent to develop it, as long as it's intentions are pure.
6. Without inventions like these (when can I get mine?) we regress into nothing.
7. (and last one.) The souls of the victims of every war and conflict in history would like you to get your information straight. And would like us to get on with our lives.

No one can tell someone they don't know how to get on with their life. I'm not telling you to get over it, am I?

And to struggle for that on-topic (1), think about the heat issue, folks. These cards have got to be smokkin'

-=fshalor

Re:Let me get this straight... (0)

Anonymous Coward | more than 12 years ago | (#2516339)

How does this guy do it...

YHBT. HAND.

Seen these for a long time (3, Insightful)

ackthpt (218170) | more than 12 years ago | (#2516179)

I've seen these around for ages from a variety of manufacturers, but usually they're priced significantly higher than just buying several cheap PCs. Granted, you have a fast bus between the cards/PCs, but unless you have a redundant power supply, one failure brings your whole cluster down, whereas networked mobos should be tolerant of one system failing. As for the future, eh, they've been around long enough, but I expect the use has been rather specialized.

Re:Seen these for a long time (1)

pruneau (208454) | more than 12 years ago | (#2516211)

Well, as for the single point of failure of the host computer, you _are_ right.

Another problem of course is the PCI bus speed, as someone already mentioned: if you are using some Gb/s link between the machines, that will allow you to deliver data much faster.

But... wait! If that's going through a PCI bus anyway...
Hey, can some hardware people invent a _true_ bus? Because we _are_ lacking something there.

But that kind of solution might interest people who want to do more with less space... if they are ready to pay the price.

All in all I'm not sure it's that interesting. Does someone have some benchmarks on that?

aka transputers (1)

wganz (113345) | more than 12 years ago | (#2516309)

Transputer advertisements were common in the back of the old Byte magazine. They were more popular in the UK than the US. With the newer low power consumption Transmeta & PowerPC CPUs + low RAM prices, this is more viable from a cost/power ratio now than then.

It is not like Transmeta has a shortage of Linux talent to help bring this off. If Transmeta makes such a product and puts an advertisement in something like Linux Journal with Linus's smiling face beside it, it will sell like the proverbial hot cakes. I would buy one with or without his picture.

Just a thought

Performance (0)

Anonymous Coward | more than 12 years ago | (#2516188)

I wonder what sort of performance you get out of one of these cards?
The processor is a:

x86-compatible National Semiconductor pxGeode

The SETI version (3, Insightful)

Wire Tap (61370) | more than 12 years ago | (#2516189)

Does anyone here remember a while back when that "fake" company tried to sell us SETI @ Home PCI cards? I was about to place my order, until the word came to me that they were a fraud. Kind of a funny joke at the time, though. At any rate, here is the old /. story on it:

http://slashdot.org/article.pl?sid=00/07/23/2158226&mode=thread

It would have been GREAT to have an improvement in CPU speed on a PCI card, as I always have at least two free in every system I own. What I wonder, though, is what sort of instruction speed the PCI-card "CPUs" would give us?

PCI card computers (3, Interesting)

hattig (47930) | more than 12 years ago | (#2516218)

You have to remember that, looked at from a certain angle, you can add a PCI card to a motherboard that makes the motherboard the PCI slave.

PCI === PCI === PCI === CPU === PCI === PCI
 |       |       |               |
IDE     CPU     CPU             CPU
 |       |       |
USB    PCIs    PCIs
         |       |
        IDE     ...
         |
        USB

I have left out memory controllers, northbridge, etc, and modern fancy chip interconnects because they are just fluff (no, not fluffers, that is another industry). In the above diagram, what is the host CPU? Is there actually such a thing as a host? The PCI bus is arguably the center of a modern PC, with CPUs and controllers hanging off of it.

Modern motherboards are just a restriction on what you can do in reality. Reality is a PCI backplane on a case, maybe with a couple of PCI-PCI bridges. You can then add anything into any PCI card that you want - normal PCI cards, or CPUs (NB, Memory, CPU, etc).

That is why you can configure these cards to use the 'host' IDE drive. It is just a device on every 'computer' within the case...

I can't post a diagram though, because I must use "fewer junk characters". Bloody lameness filter - affects the real users, the people it is meant to trap just work around it. Would you call this a "lame post"?

Re:PCI card computers (0)

Anonymous Coward | more than 12 years ago | (#2516373)

you betcha, lamer.

Re:PCI card computers (0)

Anonymous Coward | more than 12 years ago | (#2516397)

well, lame in a karma whoring way

Re:PCI card computers (1)

castlan (255560) | more than 12 years ago | (#2516679)

At least for AMD based multiproc systems, the Northbridge seems to be the hub around which the CPUs are gathered. AFAIK, the RAM doesn't have a direct connection to either CPU; it has a dedicated bus to the Northbridge. Why is the Northbridge "fluff"? Isn't it the closest thing to a host on PC systems? It is what makes the "Motherboard" the mainboard. Where is the BIOS located? If it is part of the Northbridge, then that would close the argument for me. If it is discrete, then it is a good candidate for the center of a modern PC, if you allow that once a system is booted, "central" functions (like "basic Input and Output") can be migrated to other parts of the system.

For me, anyway, the PCI-PCI bridge seems to be a pretty good negation of the "PCI bus as host" viewpoint. If anything, the PCI bus is just an extension to the PCI controller, which would seem to fall under the "Northbridge chipset as host" perspective.

As we migrate from a single-CPU paradigm to multiple-CPU architectures, it seems the view of a "primary CPU controlling auxiliary CPUs" is vestigial, and we will be moving away from it. This seems apparent if you follow the locking mechanisms used by Linux migrating from large per-CPU locks to finer-grained locks. It is not very useful to have a CPU-centric system when CPUs are commoditized. The chipset seems to be the lowest common denominator for the foreseeable future.

Re:PCI card computers (1)

hattig (47930) | more than 12 years ago | (#2516851)

It is fluff in the context of the fact that you can connect multiple Northbridges (with CPUs, memory, PCI-PCI bridge possibly) to a PCI bus, and the PCI bus will be fine.

The BIOS is located off an LPC device connected to the southbridge.

A modern PC is a subset of what a PC could be. As I said.

You can view a PC any way you like. But you can connect PPC computers on PCI cards to PCs, and they can access any resource on that PCI bus just like the host can. Because, it is simply another host on the PCI bus.

Hence, PCI backplanes work. PCI-PCI bridges are there so you can have more than 6 PCI slots!

Impractical (2, Troll)

atrowe (209484) | more than 12 years ago | (#2516201)

I don't see these things taking off for most uses because the PCI bus is limited to a measly 133 MB/s. Even the newer 64-bit PCI slots found in some servers have insufficient bandwidth to keep the data flowing fast enough to make full use of these things. I can see where they may come in handy for heavy number-crunching applications such as SETI, but for web serving and DB applications, the throughput between the card and the host system is simply unacceptable.

Also, I would imagine that the RF interference generated by having several of these in one box would be quite significant. PCI slots are only an inch or so apart on most motherboards, and without any sort of RF shielding between multiple cards, I can't imagine they'd function properly. It's a good idea on paper, but in reality, I'd think a few 1U rackmount servers would do the job much better. And at $499 apiece, you could get a decent single-processor rackmount server for around the same price.

Re:Impractical (2)

man_ls (248470) | more than 12 years ago | (#2516209)

The way I understood it, the host system's motherboard was just a backplane supplying power to the computer, which was contained on the PCI card. IIRC, several years back, when Pentium IIs came out, lots of people wanted a way to upgrade their Pentium I based systems. The easy answer was: make the motherboard into a holder for a very compact computer. It had, I think, a 333 Celeron, an SODIMM slot for memory, a single IDE channel and floppy controller, and onboard sound and video. Not too impressive, but the entire workings of the computer fit onto a single PCI card.

Sun or SGI also has something like this, to allow SparcStation users to run Windows applications natively. Basically, a card with a 450MHz Pentium II, some RAM, video (no sound though), and the other necessities of a computer.

I agree about the RF interference, however. I ran several computers, even in their shielded cases, in my room for a while, and it was a dead zone for our cordless phone. It would only be worse with inches, instead of feet, between the systems. Not all people have room for a rack to mount things on, however.

Re:Impractical (1, Informative)

Anonymous Coward | more than 12 years ago | (#2516221)

That's why they use ethernet for communications and just use the PCI bus for the power supply.

The PCI bus is just an outdated fancy parallel port.

Re:Impractical (0)

Anonymous Coward | more than 12 years ago | (#2516229)

The PCI bus is only constrained to 133 MB/s at 33 MHz/32-bit - not only can you widen the bus (as you mentioned), but the frequency can also be increased to 66 MHz.

Re:Impractical (0)

Anonymous Coward | more than 12 years ago | (#2516534)

Or, in the case of PCI-X, 133 MHz. Let's see, 133 MHz * 8 bytes (64 bits) = about 1 GB/s peak, before overhead.
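
A quick sanity check of those peak figures (a minimal sketch; the bus widths and clocks are the nominal ones quoted above, and real throughput is lower once arbitration and protocol overhead are counted):

    # Theoretical peak bandwidth of a parallel bus: width in bytes x clock rate.
    # Nominal figures only; overhead reduces what you actually get.
    buses = {
        "PCI 32-bit / 33 MHz":    (4, 33e6),
        "PCI 64-bit / 66 MHz":    (8, 66e6),
        "PCI-X 64-bit / 133 MHz": (8, 133e6),
    }

    for name, (width_bytes, clock_hz) in buses.items():
        peak_mb_s = width_bytes * clock_hz / 1e6
        print(f"{name}: ~{peak_mb_s:.0f} MB/s peak")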

Re:Impractical (0, Troll)

Sleuth (19262) | more than 12 years ago | (#2516236)

> -atrowe: Card-carrying Mensa member. I have
> no toleranse for stupidity.
>

Cool. Now that you've taken the Mensa test you can spend some time with a dictionary.

lol.

Re:Impractical (1)

bad-badtz-maru (119524) | more than 12 years ago | (#2516255)



I would suspect that the sig is a joke and the misspelling is the "punch line"...

maru
www.mp3.com/pixal

Re:Impractical (1)

Indomitus (578) | more than 12 years ago | (#2516346)

One would hope. You never know though. I like that you put quotes around "punch line" because it isn't much of one.

Re:Impractical (1)

Sleuth (19262) | more than 12 years ago | (#2516365)

Deliberate stupidity? At least we can assume [s]he's not ignorant, right? (I'm hoping here.)

Re:Impractical (2)

Peter Dyck (201979) | more than 12 years ago | (#2516249)

the throughput between the card and the host system is simply unacceptable.

I'd suspect a radar system requires a much higher throughput than web or DB serving. Here's an example [transtech-dsp.com] of such a system. "160Mb/sec, 32 bit parallel synchronous interface" doesn't sound that high to me.

Re:Impractical (3, Interesting)

morcheeba (260908) | more than 12 years ago | (#2516275)

RF Interference:
I don't think there will be a problem with interference. Check out these computers. [skycomputers.com] They use a similar system, but instead of being on a piddly motherboard, they use the ubiquitous VME format. They really pack in the processors -- 4 G4 PPCs per daughter card [skycomputers.com], 4 daughter cards per single 9U VME card, then 16 9U cards per chassis, and then three chassis (4*4*16*3 = 48 TFLOPS). The pitch spacing on PCI is comparable to that on VME.

Also, I wondered about the connector on the tops of these boards. It looks like another PCI card edge. I wonder if this is a duplicate of the host PCI interface (for debug purposes), if it's a new "slot" to connect to the server's internal bus, or if it's a way to connect server cards bypassing the main PCI bus (for better performance).

Re:Impractical (4, Informative)

Knobby (71829) | more than 12 years ago | (#2516428)

don't see these things taking off for most uses because the PCI bus is limited to a measly 133 MB/s. Even the newer 64-bit PCI slots found in some servers have insufficient bandwidth to keep the data flowing fast enough to make full use of these things.

You've heard of Beowulf clusters, right?

Let's imagine I'm running some large routine to model some physical phenomena. Depending on the problem, it is often possible to split the computational domain into small chunks and then pass only the elements along the interfaces between nodes. So, how does that impact this discussion? Well, let's assume I can break up an NxM grid into four subdomains. The communication from each node will consist of N+M elements (not NxM). Now, let's take a look at our options. I can either purchase 4 machines with gigabit (~1000Mb/s) ethernet or Myrinet (~200Mb/s) cards, or maybe I can use IP-over-FireWire (~400Mb/s) to communicate between machines. Gigabit ethernet has some latency problems that are answered by Myrinet, but if we just look at the bandwidth issue, then ~1000Mb/s is roughly 125MB/s. That's slower than the 133MB/s you quoted above for a 32-bit, 33MHz PCI bus. Of course there are motherboards out there that support 64-bit, 66MHz PCI cards (such as these from TotalImpact [totalimpact.com]).

You're right that the PCI bus is not as fast as the data I/O approaches used by IBM, Sun, SGI, etc. to feed their processors. BUT, if I'm deciding between one machine sitting in the corner crunching numbers, or 4 machines sitting in the corner talking slowly to each other through an expensive gigabit ethernet switch, guess which system I'm going to look at?
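
To make the boundary-versus-interior argument above concrete, here is a minimal back-of-the-envelope sketch (the grid size, link speeds, and four-way split are assumptions for illustration, not measurements):

    # For an N x M grid split into strips, only ~(N + M) boundary values per
    # subdomain cross the interconnect each step; the N*M interior stays local.
    N, M = 4000, 4000                      # grid points (assumed)
    bytes_per_value = 8                    # double precision
    payload = (N + M) * bytes_per_value    # bytes exchanged per subdomain per step

    links_mb_per_s = {                     # rough peak rates from the figures quoted above
        "32-bit/33 MHz PCI": 133,
        "Gigabit Ethernet":  125,
        "IP over FireWire":  50,
    }

    for name, rate in links_mb_per_s.items():
        t_ms = payload / (rate * 1e6) * 1e3
        print(f"{name:>20}: {payload / 1e3:.0f} kB/step -> {t_ms:.3f} ms/step")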

not that new (0)

Anonymous Coward | more than 12 years ago | (#2516214)

thats how quite a few of the "old" HPs and other servers worked... just slotted in extra CPU "cards" into the mainboard... sorta same idea as adding memory to routers, etc...

However, surely using something like a PCI slot would kill the data rate drastically... Maybe we'll see the same thing that happened to ISA ports happening to PCI as AGP gains new uses... Who knows...

You don't understand... (1)

Ron Harwood (136613) | more than 12 years ago | (#2516235)

This doesn't increase the speed of your existing computer... it adds another computer on-board...

Re:You don't understand... (2)

krow (129804) | more than 12 years ago | (#2516519)

Right. And the cool part (which makes them a bit different from the typical solution) is the loopback ethernet on them.

How original (0)

Anonymous Coward | more than 12 years ago | (#2516220)

"Could something like this be the future of computing where for additional processing power you just kept adding additional computers inside of a host?"

Yeah - because Sun never thought of that. Can you say S/390?

CPU Speed (1)

CBoy (129544) | more than 12 years ago | (#2516227)

I read through the site and I could not find ANYTHING vs. relative x86 cpu speed. Anyone find anything? Sure it's great to have a PC, but at least give us some hint of how it performs compared to an x86 cpu.

Re:CPU Speed (0, Interesting)

Anonymous Coward | more than 12 years ago | (#2516283)

It is an x86 CPU, an NS Geode, which I believe is based on a 486. I got all excited until I realised this thing has practically no use as it doesn't have enough grunt. Even if you put 6 cards into one machine and each card had 266MHz, that's still only 1596MHz, and remember the IPC of a 486 is nowhere near the IPC of an Athlon/P3 or any other modern processor.

Linux on PCI cards is the way forward. (1)

Anton Anatopopov (529711) | more than 12 years ago | (#2516228)

Perhaps this is the way to get past the anti-Linux brigade when they say 'Linux is difficult to install'.

Just hand them a PCI card and let them get on with it. I can't help thinking it would be better on a USB device though. Then you wouldn't even need to open the case !

Re:Linux on PCI cards is the way forward. (2, Interesting)

Ron Harwood (136613) | more than 12 years ago | (#2516240)

Actually, in my LUG we've given the newbies an eval copy of VMware and a pre-installed Linux image... lets them play for a month before they have to think about installing.

Re:Linux on PCI cards is the way forward. (1)

Anton Anatopopov (529711) | more than 12 years ago | (#2516269)

VMware is another way, but it's a bit expensive. I would rather spend my $300 and get some hardware to show for it, and effectively 2 PCs, than spend it on VMware, because it will run slower.

I did have a copy of VMware which I paid for, but I lost interest when they went all 'enterprise' on it and the prices got stupid.

Still, there's always plex86, but I want to run it under Windows ME :-(

Re:Linux on PCI cards is the way forward. (0)

Anonymous Coward | more than 12 years ago | (#2517085)

Just hand them a PCI card and let them get on with it. I can't help thinking it would be better on a USB device though. Then you wouldn't even need to open the case !

Just use a Knoppix CD.

Isn't that the course we've been on? (4, Funny)

JAZ (13084) | more than 12 years ago | (#2516247)

Follow me here:
A computer used to take up a room.
Then, computers were large cabinets in a computer room.
Now, they are boxes in a computer cabinet in a computer room.
So we can extrapolate the next step for computers is to be cards in a computer box in a computer cabinet in a computer room.

It's a natural (obvious) progression really.

Re:Isn't that the course we've been on? (0)

Anonymous Coward | more than 12 years ago | (#2516490)

Yup. Indeedy. The step after that is:

Chips on a card in a box in a cabinet in a room.

And when I say chip, I mean a chip literally... this is probably at least 10 years away though :)

Not that News (0)

Anonymous Coward | more than 12 years ago | (#2516262)

I worked at Commodore on the Amiga Bridgeboard over 10 years ago, and they had a whole Crawl^h^H^H^H^HBlazingly fast 80286 on the board.
This approach is decidedly not new; the real problem is they don't advertise the speed limit of the processor (if they have a GHz or faster processor on those boards I'll be VERY impressed), nor do they indicate if there is an upgrade path.

Still, this is kind of cool; if the boards can be brought up to normal processor speeds, you could make a small, (relatively) cheap and fast 6- or 8-way SMP node if you had available slots.

future? yes, but it's here today... (1)

mysticbob (21980) | more than 12 years ago | (#2516267)

SGI has been doing this for a long time. Their newest systems are almost exactly this, but instead of slow, thin PCI, they use large, fast interconnects:

http://www.sgi.com/origin/300/

Wait! What about Beowulf? (1)

friedmud (512466) | more than 12 years ago | (#2516271)

He didn't even try to do any parallel processing with these things! That was the first thought that came into my head.

Here we have four or five CPUs all in one machine, talking to each other over a native PCI bus. It seems to me this would be a great way to run a Beowulf cluster in a machine.

Anyone care to comment on why he might not have done this?

If you do have storage issues... (1)

pvera (250260) | more than 12 years ago | (#2516289)

Commercial rent is expensive, so the less space you need to dedicate out of your office to storing servers, the more cost-effective they are.

These cards have been around for ages with various degrees of complexity. There used to be (don't know if they are still around) some of these cards that were designed to plug into a Mac so the card would do all the hard work if you wanted to emulate a PC.

I don't see the value for the home user. I can't see why a true home user (not the very small percentage of hardcore enthusiasts or people that run a business from home) would need so much power that the solution is to get a box, plug in a few of these babies and cluster them.

Still, its not so hard to come up with a home scenario:

1. Send your broadband connection to the basement of your house and spread it to all the rooms in the house with a $80 broadband router, cheap switches and hubs.

2. Put a box in a closet in the basement with different PCI cards, each serving a specific purpose. For my own personal needs (I am a Microsoft dot whore, sorry) I would have an Exchange server, one dedicated as a network file server, a SQL server and an IIS server. A person of the Unix persuasion would have a card with sendmail and some kind of POP server, a file server, MySQL or Postgres, and Apache.

With just a little bit of money the house now packs as much punch inside of that box in a basement closet as what it takes my company a row of bulky servers to do. Add in a blackbox switch and a cheap 14-inch monitor, keyboard and mouse and you are set. Of course Unix people would use some kind of secure shell and save themselves the trip to the basement, and us lazy Microsoft whores will just have to rely on Terminal Services or pcAnywhere.

In a corporate environment the space saving actually pays off (you don't pay your apartment rent or home mortgage by the square foot like most businesses do) as soon as you recover some of the space wasted by the server room. Right now I can see how I could take ours, gut it out, put a couple boxes full of these PCI cards in a good closet with the proper ventilation, and then turn the old equipment room into a telecommuter's lounge.

The home solution would rock because my wife will not bother me anymore about all those weird boxes sitting under my desk in my home office. All the clutter goes away and I just keep my tower case.

Geode? (2)

tcc (140386) | more than 12 years ago | (#2516295)

266 MHz max. Their target audience is the firewall/network application.

Too bad a dual Athlon-based solution (on a full-length PCI card) would suck too much juice... at least from the current PCI specs... AMD needs to make a move like Intel did with their low-wattage PIII. I'd love to see a 12-processor (5 PCI slots plus host) renderfarm in a single box for a decent price. Not only would it be space saving, but imagine that in a plexiglass case :) a geek's dream.

Re:Geode? (0)

Anonymous Coward | more than 12 years ago | (#2516301)

You could always supply the additional power via a powersupply connector. I'm pretty sure there was some monster 3DFX graphics card that needed additional power and got it this way.

Wait just a minute... (1)

Zenjive (247697) | more than 12 years ago | (#2516298)

this better not be another Krasnoconv with that hoax SETI-accelerator card!
I don't know if I can take another disappointment like that.

P3 on a PCI card (0)

Anonymous Coward | more than 12 years ago | (#2516320)

Advantech [advantech.com] carries similar "PC on a PCI" products, many of which are much more powerful (P3 1GHz anyone?) than the one referenced in the article.

Imagine a new kind of bus (2, Insightful)

Anonymous Coward | more than 12 years ago | (#2516323)

Imagine if all the devices in your computer were attached to each other with 100 GB optical cable.

Essentially there would be a switch that allowed about 32 devices to be attached.

The devices could be storage devices, processors, audio/video devices, or communication devices.

Storage devices would be things like memory, hard drives, cdroms and the like.

This bus would allow multiple processors to access the same device at the same time and would allow devices to communicate directly to each other, like allowing a program to be loaded directly from a hard drive into memory, or from a video capture device directly onto a hard drive.

No motherboard, just slots that held different form factor devices with power and optical wires attached.

A networking device would allow the internal protocol to be wrapped in IP and allow the internal network to be bridged onto ethernet. This would allow the busses on separate computers to work like a single computer. The processors on all the machines could easily network together, memory could be shared seamlessly, hard drive storage would be shared and kept backed up in real time. Any device in any machine could communicate directly with any other device in any other machine. Security allowing.

Want 20 processors in your machine? Install them.

Want 6 memory devices with 1GB each? Add them.

Want 100 desktop devices with only a network device, display device and input/output device that use the processor and storage out of an application server? No problem.

Want a box that seamlessly runs 20 different OSes, each in a virtual machine, run across 10 boxes in a redundant failover system? No problem, it's all done in hardware.

Want the hard drives in all the desktop machines to act like one giant RAID 5 to store all the company's data on? No problem. (1000 machines with 10 GB each is 10 TB of storage)

This is the future of computing.

Re:Imagine a new kind of bus (3, Interesting)

MikeFM (12491) | more than 12 years ago | (#2516474)

I think the basic form to use is some simplified base system designed to be upgraded to the extreme. No built-in crap on the motherboard to speak of.. just lots of PCI slots. If they could share harddrive and RAM and provide a keyboard/mouse/monitor switching method similar to KVM switches but all in one box it'd be great. So rather than replacing older computers we could just add to them. Maybe perfect something like MOSIX and drop the whole stupid SMP idea. I've always imagined computers would someday be like legos where you could buy a CPU lego, a RAM lego, a harddrive lego, etc and just plug them together in any order to add to a hot system. No reboot and no case to open. If one burned out just toss it and put a new one in.

Re:Imagine a new kind of bus (1)

bernz (181095) | more than 12 years ago | (#2516631)

it exists already, sweetie. check out the infiniband spec somewhere.

SUN has a similar product.. (5, Interesting)

Phizzy (56929) | more than 12 years ago | (#2516331)

I am actually typing this comment on a Sun Microsystems SunPCI card.. It's a Celeron, I believe a 466 MHz or so, w/ 128 MB of RAM. It has onboard video if you want to use an external monitor, or it can use the Sun's native card if you want to run it windowed, ditto w/ ethernet. I've been using the card for about 3 months now, and other than some instability w/ SunOS 2.6 (which disappeared in 2.8), I haven't had problems with it.. you can copy/paste between the Sun window and the 'PC' window, which is very helpful.. and though we are running Win2000 on it (ok.. so shoot me) I don't see any reason why you couldn't run Linux on it if you really wanted to.. All in all, the card is pretty badass..

//Phizzy

Re:SUN has a similar product.. (1)

Peter Dyck (201979) | more than 12 years ago | (#2516379)

I agree.

I'm posting this with Konqueror on a Sun Blade 100. Next to the Konq window I have a SunPCI window with W2K/Office2K. As nice as Sun's StarOffice is, it still doesn't import/export clients' office documents properly.

Re:SUN has a similar product.. (2)

Phizzy (56929) | more than 12 years ago | (#2516523)

yeah.. that, and StarOffice eats all your ram, not to mention your desktop when you run it. ;)

The test I've run of SunPCI has convinced our management to do away w/ separate NT/2000 systems when we move to a new building in april, and just outfit everyone w/ Ultra 5s, SunPCI cards and dual-head monitors..

//Phizzy

Re:SUN has a similar product.. (0)

Anonymous Coward | more than 12 years ago | (#2516498)

Apple had the Houdini cards, and Orange Micro had PC cards that were essentially 166 MHz Pentium PCs that shared resources with the Mac. Prior to that, Radius had the Radius Rocket, a 68040 processor that was essentially a Mac within a Mac and could run in parallel to accelerate processor-intensive tasks.

Re:SUN has a similar product.. (2)

11223 (201561) | more than 12 years ago | (#2517138)

Actually, Sun is making them now with 733 mhz Celeries in them.

Definitely an awesome product.

Already happening... (0)

Anonymous Coward | more than 12 years ago | (#2516333)

Most high-end unix servers are already just variations on this theme. Sun's E10k, for example, is just 16 briefcase sized E450's with a domain interconnect.

hmmm.. (1)

ZaneMcAuley (266747) | more than 12 years ago | (#2516363)

Brings back memories of Transputer cards :D

How does sharing of the disk between each machine on a card affect the performance ?

Audio Apps -- Digidesigns DSP Farms (2)

namespan (225296) | more than 12 years ago | (#2516396)

I haven't been paying attention to the market... I guess things like this aren't all that rare. Apparently there's a G4 PPC computer-on-a-card as well.

But anyway, it reminds me quite a bit of what Avid/Digidesign do for their high-end systems.
You see people who've got 6-slot PCI systems and 4 of those slots are filled with extra computing cards (sometimes more... some people get expansion chassis). You can rely on your computer's processor if you're not doing too many complex effects on a track of audio, but at some point (not too hard to reach... throw in a tube amp emulator and a reverb) you run out of CPU. So they have PCI cards which have a couple of DSP chips (Motorola 56xxx series, I think) on them, and the more of these you add, the more audio processing you can do simultaneously.

At some point, perhaps people will think: hey, why add a specialized card? Why not just more general purpose computing power?

Re:Audio Apps -- Digidesigns DSP Farms (2)

RFC959 (121594) | more than 12 years ago | (#2516531)

At some point, perhaps people will think: hey, why add a specialized card? Why not just more general purpose computing power?
Thank you for contributing to the turning of the Wheel of Reincarnation [tuxedo.org].

Supporting other Host OSses? (0)

Anonymous Coward | more than 12 years ago | (#2516423)

I wonder if this Ethernet simulation and drive sharing is supported for other host operating systems, so that you can have a Linux box-on-a-board in a computer running something evil, like Windows? (Just in case you need to run mainly Windows on your production box since your boss requires you to, but you want a Linux ghost-in-the-machine for fun and/or increased productivity... And - yes! - I read in the review that the card itself can run Windoze, but that's not my point...)

Thx

teq

Re:Supporting other Host OSses? (0)

Anonymous Coward | more than 12 years ago | (#2516957)

the site says you can run linux, bsd and windows all in the same box

No serial connection? (0)

Anonymous Coward | more than 12 years ago | (#2516431)

I don't mean USB, but I would have liked to see the option of a serial connection, so as to have a tty on which to read the system console directly and check the system status without having to leave a telnet/ssh service running...

my problem is price (1)

Eugene (6671) | more than 12 years ago | (#2516451)

I'm fairly interested in those devices, but right now the cost for those boards is not cheap enough for me to get one. At ~$500 a pop, I could put together a cheap system with better specs (not on a board, of course). I know it's targeted for server/commercial applications, but if they are willing to lower the price some, I'm sure there'll be a lot of takers.

My ideal setup would be using a CF card with a CF-IDE adapter as the boot drive (which eliminates the dependency on the host OS at powerup, and no actual HD is required).

uhnnn,yeah, thats it... (0)

Anonymous Coward | more than 12 years ago | (#2516476)

doesn't SUN already make this? machine of the future indeed....

up to 104 processors in a box, add as you need...

even HP/paq has PA-RISC based machines that are billed like utility payments....four (or x) processors built in, only three turned on...ooops, need more, crank up the meter...activate another processor at a higher payment level...

how far back..? (0)

Anonymous Coward | more than 12 years ago | (#2516478)

I remember my Apple IIe had a Microsoft card with a Zilog processor so the Apple could run CP/M. Nice mix, eh? So what is the earliest version of this sort of thing?

G4 PCI cpu (1)

starman97 (29863) | more than 12 years ago | (#2516515)

Here's a G4 card that plugs into a PC or anything with a PCI slot for $400
http://www.sonnettech.com/product/crescendo_7200.html#pricing

The Catch: You have to write the device driver for the Motorola MPC107 PCI bridge chip.

Switched Bus, Multipurpose cards (2)

swb (14022) | more than 12 years ago | (#2516535)

I'd like to see a bus that was little more than a switch, with a minimum of logic for management.

For cards, it'd be great if each card had its own CPU and RAM. Ideally the cards would have a few universal connectors, each of which could accommodate an I/O module which would contain just the discrete electronics necessary to drive a specific device or medium (eg, video, audio, disk, network).

Bus-Switch modules would be interconnectable to accommodate more cards, and would have switch-like management features for segmentation, isolation and failover type features.

The CPU cards themselves ought to be less complicated than motherboards since there's no bus logic, just interconnect logic to the Switch-Bus and the I/O modules, and RAM.

Since each board has its own RAM and CPU it ought to improve system performance, because the OS could offload much more processing to CPU boards dedicated to specific tasks. Instead of the kernel bothering with lower-level filesystem tasks and driving the hardware, a "driver" for filesystems and devices could be loaded on a CPU board dedicated to I/O.

The same could be true of user interfaces -- run the UI on the board dedicated to video, audio and USB. The kernel could run applications or other jobs on the "processing" CPU board(s).

Networking? Offload the entire IP stack to the networking CPU board.

Re:Switched Bus, Multipurpose cards (1)

dy_dx (131159) | more than 12 years ago | (#2516684)

it doesn't seem like this would work so well...the PCI bus would get very hosed with I/O requests from the RAM on each chip. Instead of limiting the hosage of physical memory access to the IDE or SCSI bus, you take the PCI bus down with you too (making the bandwidth of the PCI bus limit scalability of how many procs you can use).

also, having multiple memories accessing the same data in a distributed program adds plenty of overhead to make sure all the memories maintain validity and access control of the data. thus the chips wouldn't be as simple as CPU, RAM, interface.

Could be cool for games (0)

Anonymous Coward | more than 12 years ago | (#2516550)

Hm.

Over the past few years, more and more CPU time in games has been allocated to AI processing. We've got video and audio hardware cards to offload the significant demands of those tasks, but AI would seem to need a more flexible solution than a hw implementation of 'DirectAI' or somesuch. One of these might work well as a programmable AI subproc. That, or perhaps an engine could be tweaked to offload physics calculations to one o' these puppies. Of course, keeping state information consistent would be a major issue, so the whole undertaking would be subject to an overhead/payoff analysis.

i dunno about these (1)

mikus (222126) | more than 12 years ago | (#2516560)

Seems the PCI bus would be a horrific limiting factor, considering you now have a series of processors accessing resources across the PCI bus as well as on-board. As a firewall, unless the NICs are something decent that can handle hardware trunking to a real switch, 1 interface isn't going to take you far. If they could do dot1q trunking, it would definitely be nice. Beyond that, I can't see something commercial like Checkpoint FW-1 running efficiently on the daughter cards with no more than 266MHz and 256MB of RAM. Might work ok for something such as ipf, pf, or ipchains on a stripped-down Linux kernel, but no freakin' way on Win2K. I say stick to a cluster of 1Us if space is that big of an issue. I doubt it's any less complicated to set up a cluster of 1Us than to get these things to work *correctly*.

Read the article... (2)

YuppieScum (1096) | more than 12 years ago | (#2516877)

These cards have TWO NICs, one that talks across the PCI bus and one physical RJ45 10/100...

I'm only going to say this once, but I could copy/paste the same response to 20 or 30 posts on here...

Re:i dunno about these (0)

Anonymous Coward | more than 12 years ago | (#2516920)

It has a 300 MHz CPU and up to 512 MB of RAM and will run Checkpoint FW-1. It has 2 MACs: one 10/100 external, and one internal GbE on the PCI bus.

ClearCube has another one (1)

studboy (64792) | more than 12 years ago | (#2516601)

(disclaimer: my bro's wife usta work for this company)

rackmounted PCs with video, etc. They're intended for offices: you run cables to each person's monitor/keyboard/mouse, manage all the actual hardware in one place ~~ ClearCube [clearcube.com]

Linux on PCI a year old, dudes! (2, Insightful)

scaryjohn (120394) | more than 12 years ago | (#2516615)

I looked at this and said... wait a minute, hasn't this already been sorta done [slashdot.org]? Despite not being a full-featured box, Firecard [merilus.com] is a PCI card running Linux... for the purposes of supporting a firewall (as you could have guessed from the name if you'd not read the story -- Nov 14 2001)... but it's cool that they've taken it to the next level.

Nothing particularly interesting about this (0)

Anonymous Coward | more than 12 years ago | (#2516622)

How is this anything interesting at all?

You just have a bunch of PCs talking IP over a bus. Who gives a shit. There's nothing that makes it any harder to talk IP over PCI versus Ethernet.

Other companies, especially those in the CompactPCI space, have been doing this for years.

Typical of Slashdot readers to jump up and start screaming any time anyone does anything.

The voices of the dead cry out for vengeance in the form of bowls of hot grits.

Radius Rocket (1)

beerits (87148) | more than 12 years ago | (#2516711)

This sounds like the old Radius Rocket for the Mac. Each Rocket was essentially a Quadra on a NuBus card.
http://lowendmac.com/radius/rocket.shtml

Single card servers (1)

Capacitor (241918) | more than 12 years ago | (#2516755)

Those of you wondering about expandable servers should have a look at Transmeta's homepage (http://www.transmeta.com) - among their featured products is a server using something called ServerBlade - single-board servers. I think you can fit 24 of them in a 3U rack enclosure.

I want some ! (1)

jooniqzb1tch (246498) | more than 12 years ago | (#2516769)

This is really a great system! Of course it's a bit expensive, but still, having two more Linux boxes inside your system to play around with, without the extra cases/cables/noise, sounds awesome. It would also rock to host several small servers... I'll definitely try to get some.

This idea is actually the standard (1)

clustermonkey (320537) | more than 12 years ago | (#2516847)

In the communications industry, this isn't anything new. The idea of having multiple computers on blades in a chassis is a (the) standard called 'PICMG', based on CompactPCI technology. It's been around a long while. I validate systems like this at work.

You can get 4, 8, 16, or even 24 SBCs (Single Board Computers) in a chassis, and link these chassis together via switches. Each chassis has a switch that links all the SBCs in the backplane together and has external ports to hook it up to the outside world.

Check this out:
http://www.picmg.org/compactpci.stm
and this:
http://www.intel.com/network/csp/products/cpci_index.htm

Haven't Sun been doing this for a while? (1)

parryr (67836) | more than 12 years ago | (#2516956)

Sun Microsystems have had "PC cards" for a while now. There was a whitepaper they published some time ago on using a small Sun server (say, an E450) and populating it with PC cards.

They demonstrated how an entire Windows NT cluster could be built using this technology, chucked in some Terminal Services under Windows, ran Exchange, and then did all the important stuff (mail, DNS, whatever) on the Sun box itself.

Granted, it's not Linux, and granted, the cost of a Sun box is quite high - but the PC cards are significantly cheaper over here for Sun hardware, and Sun architecture seems to be a bit more robust and scalable than PC stuff.

who cares? (2)

adolf (21054) | more than 12 years ago | (#2517072)

Stuff like this started in the PC world, IIRC, with 386s on 16-bit ISA cards.

Nobody cared then.

Why would anyone care now?

Please explain your point using no more than 100 words.

-

The Briq (0)

Anonymous Coward | more than 12 years ago | (#2517077)

Something similar to this has been done, though it uses a 5 1/4 inch bay instead of a PCI slot. It's called the Briq; it's a self-contained PPC-based machine created by Yellow Dog (http://www.yellowdoglinux.com). It was engineered to run Yellow Dog... so no, it's not a Mac, though you can run Mac-on-Linux.