
PCI SIG Releases PCIe 2.0

kdawson posted more than 7 years ago

Upgrades (113 comments)

symbolset notes that The Register is reporting that PCI SIG has released version 2.0 of the PCI Express base specification: "The new release doubles the signaling rate from 2.5Gbps to 5Gbps. The upshot: a x16 connector can transfer data at up to around 16GBps." The PCI-SIG release also says that the electromechanical specification is due to be released shortly.


Yay (4, Funny)

Anonymous Coward | more than 7 years ago | (#17625610)

Now I can play games at 600 fps. I've so been needing the boost; 200 fps just doesn't cut it.

But seriously, the data acquisition and video rendering markets should benefit from this. Cool.

Re:Yay (0)

Anonymous Coward | more than 7 years ago | (#17625716)

You can start laughing at all those idiots who spent $500 on a proprietary NVidia SLI rig to find out it's now completely obsolete.

Re:Yay (2, Funny)

killa62 (828317) | more than 7 years ago | (#17625814)

Yes, but will it run Duke Nukem Forever?

Re:Yay (2, Funny)

Anonymous Coward | more than 7 years ago | (#17625836)

Well, this [jjman.org] card might be up to the task.

Re:Yay (1)

Anal Cock (1016533) | more than 7 years ago | (#17625912)

Shame about the fillrate and lack of unified shaders :( Quake 7 is so going to rape it.

Re:Yay (0)

Anonymous Coward | more than 7 years ago | (#17626946)

Pah, I'll wait for the new Bitboys card to be released in 2039.

Re:Yay (1)

RzUpAnmsCwrds (262647) | more than 7 years ago | (#17627038)

No, you need an Infinium Labs Phantom to do that. It's a launch title.

Re:Yay (1)

jozeph78 (895503) | more than 7 years ago | (#17629592)

Yes, but will it run Duke Nukem Forever?


Yes, provided there are no ill side effects from the vapor.

Re:Yay (1)

HTH NE1 (675604) | more than 7 years ago | (#17630110)

It will, though the game will have to be delayed a bit more so that it can be rewritten to take advantage of this advance... and the next one... and the next one... and....

Re:Yay (1)

operagost (62405) | more than 7 years ago | (#17630676)

Not forever, but for at least a few minutes before it crashes.

Re:Yay (1)

KillzoneNET (958068) | more than 7 years ago | (#17626138)

I think you mean about another 5-10 fps in Oblivion and Neverwinter Nights 2... Now that is an upgrade!

Re:Yay (1)

itlurksbeneath (952654) | more than 7 years ago | (#17627442)

I never understood the need for 180 fps in a game. Anything higher than the monitor's refresh rate is a waste. I mean, if you're running an LCD monitor (which typically refreshes 60 times per second) and you've got a video card that's rendering 120 frames per second, where's that every other frame going? The monitor can only display 60/s.

Now, if they offloaded the physics to the GPU to use the extra capacity...

Re:Yay (3, Interesting)

jandrese (485) | more than 7 years ago | (#17627922)

The thing about 120FPS is that when someone is quoting you that number, they're talking about the framerate when you're sitting in the entrance room after first booting up the game. In complex environments where you have lots of monsters and particle effects on the screen, that number can quickly drop down into the 30-60 range. While this is still more than playable, if you'd only started at 60 or 45FPS the game would bog down in the difficult sections (and those sections are typically where you need that extra accuracy and quicker response time).

Re:Yay (1)

afidel (530433) | more than 7 years ago | (#17631968)

Which is why you need MINIMUM framerate numbers, not averages. In NWN2 I get around 40 fps with fairly minimal settings, but in difficult scenes I bog down as low as 8 fps and average around 11 fps; to me that is unplayable.

Re:Yay (1)

PrescriptionWarning (932687) | more than 7 years ago | (#17627904)

Don't worry, the Unreal 3 Engine running at 2560x1600 can take full advantage of this stuff... just wait until Vanguard: Saga of Heroes goes Unreal 3 :)

Outstanding (1, Funny)

cmdtower (1051808) | more than 7 years ago | (#17625620)

After many years of reading slashdot.org (since 1998 or so), this is my first post. Boy am I glad I'm waiting for the next-gen motherboard. 16 GB/s. Yikes!

Re:Outstanding (1)

DemoFish (1051816) | more than 7 years ago | (#17625932)

After 5 minutes of being registered I'm already testing the new discussion system. PCIe 2.0 ftw!

Re:Outstanding (1)

GFree (853379) | more than 7 years ago | (#17626042)

Make sure to provide at least one pro-Linux comment during your time here, or you will be forever considered a pariah of the Slashdot community.

Extra karma for one good "Soviet Russia" joke.

Re:Outstanding (1)

PSXer (854386) | more than 7 years ago | (#17626190)

It's easier to just put it in your sig. That way you won't forget.

Re:Outstanding (1)

pipatron (966506) | more than 7 years ago | (#17626212)

In Soviet Russia, Slashdot comments YOU!!

oh wait... a good Soviet Russia joke you say? I'll be back...

Re:Outstanding (1)

geminidomino (614729) | more than 7 years ago | (#17627164)

In Soviet Russia, Slashdot comments YOU!!

oh wait... a good Soviet Russia joke you say? I'll be back...

No, no you won't.

Graphics get easier and easier to render (3, Insightful)

scwizard (941758) | more than 7 years ago | (#17625646)

Slower than they get easier to create.

Re:Graphics get easier and easier to render (1, Funny)

Anonymous Coward | more than 7 years ago | (#17625676)

I'd rather put a network card in that 16 Gbps slot. Imagine how fast one could download porn!

Re:Graphics get easier and easier to render (0)

Anonymous Coward | more than 7 years ago | (#17625918)

I download porn so I don't have to imagine things, you insensitive clod!

Re:Graphics get easier and easier to render (1)

bky1701 (979071) | more than 7 years ago | (#17626998)

Not really. As systems become more capable, the need to optimize (a major part of modeling) starts to go down. Also, major improvements can be made to things like lighting that don't take a whole lot of work but make the whole model look better. Real-time ray-traced lighting? I wish, but hopefully it's not that much longer off.

Confusing article texts... (4, Informative)

RuBLed (995686) | more than 7 years ago | (#17625694)

The signalling rates are measured in GT/s not Gbps (correct me if I'm wrong). The new release doubles the current 2.5 GT/s to 5 GT/s. As a comparison, 2.5 GT/s is about 500 MB/s of bandwidth per lane, thus 16 GB/s in a 32-lane configuration.

I tried to do the math but I just can't get it right with Gbps instead of GT/s.

http://www.intel.com/technology/itj/2005/volume09issue01/art02_pcix_mobile/p01_abstract.htm [intel.com]

Re:Confusing article texts... (4, Informative)

Kjella (173770) | more than 7 years ago | (#17625738)

It's 2.5 and 5.0 Gbps, but with 10 bits used to encode 1 byte (8 bits), so net 250 MB/s to 500 MB/s per lane, which works out to 16 GB/s in a 32-lane config. The article's claim that "a x16 connector can transfer data at up to around 16GBps" is simply wrong.
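For readers who want to check these figures, here is a rough sketch of the arithmetic: take the line rate, strip the 8b/10b encoding overhead, and multiply by the lane count. The helper name and the printed cases are illustrative only, and real links lose a bit more to packet and protocol overhead.

```python
# Rough PCIe payload bandwidth: line rate minus 8b/10b encoding overhead.
# Packet/protocol overhead is ignored, so real-world numbers come in lower.

def pcie_payload_gb_s(line_rate_gbps, lanes, both_directions=False):
    """Approximate payload bandwidth in GB/s for one PCIe link."""
    bytes_per_sec_per_lane = line_rate_gbps * 1e9 / 10   # 10 line bits per payload byte
    total = bytes_per_sec_per_lane * lanes
    if both_directions:                                   # the "marketing" aggregate figure
        total *= 2
    return total / 1e9

for gen, rate in (("PCIe 1.x", 2.5), ("PCIe 2.0", 5.0)):
    print(f"{gen}: x1 = {pcie_payload_gb_s(rate, 1):.2f} GB/s/direction, "
          f"x16 = {pcie_payload_gb_s(rate, 16):.1f} GB/s/direction, "
          f"x16 both directions = {pcie_payload_gb_s(rate, 16, True):.1f} GB/s")
# PCIe 2.0: 0.5 GB/s per lane, 8 GB/s per direction on x16,
# and 16 GB/s only if you count both directions together.
```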

Re:Confusing article texts... (0)

Anonymous Coward | more than 7 years ago | (#17625788)

I have no idea why they'd do it, but they're probably adding the upstream and downstream bandwidth to get the 16GB/s figure.

Re:Confusing article texts... (2, Insightful)

SQL Error (16383) | more than 7 years ago | (#17625998)

I have no idea why they'd do it, but they're probably adding the upstream and downstream bandwidth to get the 16GB/s figure.
Because it gives a bigger number.

Re:Confusing article texts... (2, Informative)

grindcorefan (959282) | more than 7 years ago | (#17626308)

Well, one 16x link can transmit 8 Gibibytes per second. As PCIe is full duplex, stupid salesdroids and marketingdwarves can be expected to simply add both directions together and use that figure instead. But I agree with you, it is misleading.

PCIe 1.0 does 2,500,000,000 transfers per second per lane in each direction. Each transfer transmits one bit of data.
It uses an 8b/10b encoding, therefore you need 10 transfers in order to transmit 8 bits of payload data.
Disregarding further protocol overhead, the best rate you can get is 250,000,000 bytes of payload data per second per lane.
16 * 250 * 10^6 bytes/s = 4 * 10^9 bytes/s = 4 Gibibytes/s on a 16x link in each direction.

With PCIe 2.0 the data rate doubles, therefore the max transfer rate is 8 Gibibytes per second on a 16x link in each direction, disregarding protocol overhead.

Re:Confusing article texts... (0)

Anonymous Coward | more than 7 years ago | (#17630320)

Argh, not only are you using those funky metric units, but you're using them wrong!

GT/s ? (1)

HTH NE1 (675604) | more than 7 years ago | (#17630608)

The signalling rates are measured in GT/s not Gbps (correct me if I'm wrong).

I'd have to know what a GT/s was first. Gross Tonnes per second? Gigatonnes per second? Gigatexels per second? Gran Turismos per second?

Re:GT/s ? (1)

onemorechip (816444) | more than 7 years ago | (#17631616)

Gigatransfers/second. I think somebody on the committee thought Gigabits/second was underselling the bandwidth, since it only represents the bandwidth of one lane in a multilane link. So a "transfer" is the number of bits that can be moved simultaneously over all the lanes in a UI (unit interval, which is 0.2 ns at the new rate); consequently the size of a transfer will vary with the link width.
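To make the terminology concrete, the small sketch below (function names are purely illustrative) restates the relationship described above between the transfer rate, the unit interval, and the per-transfer width.

```python
# A "transfer" moves one bit per lane, once per unit interval (UI).
# GT/s therefore counts symbol clocks, not payload bits.

def unit_interval_ns(rate_gt_s):
    return 1.0 / rate_gt_s          # 1 / (GT/s) = nanoseconds per transfer

def bits_per_transfer(link_width):
    return link_width               # one bit per lane per UI

print(unit_interval_ns(2.5), "ns per UI at 2.5 GT/s")   # 0.4 ns
print(unit_interval_ns(5.0), "ns per UI at 5.0 GT/s")   # 0.2 ns (the new rate)
for width in (1, 4, 8, 16, 32):
    print(f"x{width} link: {bits_per_transfer(width)} bits per transfer")
```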

This will increase my productivity! (1)

haakondahl (893488) | more than 7 years ago | (#17625700)

Now I can type at 16GBpS! Imagine how fat and poorly-designed our operating systems can be now! Th1s r0cks[shift one]

mo3 do3n (-1, Troll)

Anonymous Coward | more than 7 years ago | (#17625708)

starteD work on [goat.cx]

Sigh (3, Interesting)

Umbral Blot (737704) | more than 7 years ago | (#17625746)

I know this is news, and actually relevant to /. (for once), but I find it hard to care. Sure the specification is out, but it will take a long time I suspect to find its way into computers (since the existing version is so entrenched), and even longer for cards to be made that take full advantage of it. Is there something I am missing that will make this new standard magically find its way into computers in the next few months? Do I have to turn in my geek card now?

Re:Sigh (1, Interesting)

Anonymous Coward | more than 7 years ago | (#17625846)

It took long enough for the first version of PCIe to start displacing PCMCIA. At least now the slot's the correct size... It might be comparable to going from PS/2 to USB 1.x to USB 2.0.

Re:Sigh (5, Insightful)

Indy1 (99447) | more than 7 years ago | (#17625858)

It's a good thing. By getting the standard approved well before it's needed, it gives everyone (hardware OEMs, OS developers, etc.) plenty of time to integrate support for it. Rushing critical standards like this leads to nasty problems (think VLB) or outright non-adoption.

When the ATA standards (33, 66, 100, etc.) were adopted, everyone was saying the same thing: why in the hell is it needed? But by getting it adopted and published before it was needed, it gave all the chipset and motherboard vendors time to build it into their products. Result: in the past 10 years hard drives have NOT been bottlenecked transferring data between the drive and the motherboard. You can get a screaming fast hard drive, stick it in an older motherboard (say, within 2-3 years of the hard drive's date), and it almost always works without issues.

PCIe 1.0 took too long to come out. The PCI bus has been overwhelmed by modern video cards (which led to the AGP hack, which fortunately worked fairly well), SCSI and RAID controllers, Ethernet cards (PCI can't even give a single gigabit NIC enough bandwidth), USB 2.0, FireWire 400 and 800, etc., etc. PCI-X was complex, expensive, and not widely available. It also ate up too much of the motherboard real estate.

By getting on the ball with PCIe 2.0, we won't see the same problem again for a while. Now if only FireWire 800 and eSATA could be more common...

Why 'PCI'? (1)

Ed Avis (5917) | more than 7 years ago | (#17626338)

Can someone explain why it's called 'PCI' Express when it doesn't have much to do with PCI, and the slots aren't backwards compatible? Is it a variant of the marketing rule that any successful network transport must be called Ethernet?

Re:Why 'PCI'? (1)

vadim_t (324782) | more than 7 years ago | (#17626474)

It's backwards compatible as far as the OS is concerned.

Any OS that supported PCI automatically supports PCI Express without any modifications.

Re:Why 'PCI'? (1)

scumbaguk (918201) | more than 7 years ago | (#17627084)

Because PCI stands for "Peripheral Component Interconnect", and PCI Express still connects peripheral components to the computer.

So the name makes perfect sense, doesn't it?

Re:Why 'PCI'? (5, Informative)

Andy Dodd (701) | more than 7 years ago | (#17627334)

It has more to do with PCI than you think.

While the electrical interface has changed significantly, the basics of the protocol have not changed much at all, at least at a certain layer.

The end result is that at some layer of abstraction, a PCI-Express system appears identical to a PCI system to the operating system (as another poster mentioned). BTW, with a few small exceptions (such as the GART), AGP was the same way. Also, (in theory) the migration path from PCI to PCI Express for a peripheral vendor is simple - A PCI chipset can be interfaced with a PCI Express bus with some "one size fits all" glue logic, although of course that peripheral will suffer a bandwidth penalty compared to being native PCIe.

Kind of similar to PATA vs. SATA - Vastly different signaling schemes, but with enough protocol similarities that most initial SATA implementations involved PATA-to-SATA bridges.

Re:Why 'PCI'? (1)

Micah (278) | more than 7 years ago | (#17628686)

Hi,

since you seem to know what you're talking about ... :)

I've been trying to figure out whether an ExpressCard eSATA interface would work with Linux and, if so, whether it would work with just the in-kernel SATA driver or require something additional.

Since ExpressCard supposedly uses PCIe internally, my strong hunch is that it would work just fine with the in-tree driver, but even extensive Googling did not come up with anyone who had actually tried it.

Just trying to find some confirmation before plunking down hundreds of dollars for a multi-hundred-gigabyte lightning fast external laptop drive subsystem. :)
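One rough way to check this empirically, once the card is in the slot, is to look for a SATA-class controller among the PCI devices the kernel enumerates. The sketch below is an assumption-laden illustration rather than a definitive answer: it assumes a Linux system with sysfs mounted at /sys and that the card exposes the standard SATA class code (0x0106). If such a device shows up, the stock in-kernel SATA driver is likely to bind to it.

```python
# Sketch: list PCI devices via sysfs and flag SATA-class controllers (0x0106xx).
# Assumes Linux with sysfs at /sys; run it after inserting the ExpressCard.
import glob
import os

def read(path):
    with open(path) as f:
        return f.read().strip()

for dev in sorted(glob.glob("/sys/bus/pci/devices/*")):
    cls = read(os.path.join(dev, "class"))        # e.g. "0x010601" = SATA controller, AHCI prog-if
    vendor = read(os.path.join(dev, "vendor"))
    device = read(os.path.join(dev, "device"))
    tag = "  <-- SATA controller" if cls.startswith("0x0106") else ""
    print(f"{os.path.basename(dev)}  class={cls}  id={vendor}:{device}{tag}")
```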

Re:Why 'PCI'? (1)

onemorechip (816444) | more than 7 years ago | (#17631768)

Several people have answered this already, but I'll just add a few small points. Configuration space in PCI Express is a superset of PCI's configuration space. So a BIOS or OS that can talk to config registers on a PCI device can talk to config registers on a PCIe device. Also, the transaction layer mode (split transactions) is based on the PCI-X (not conventional PCI) model. You can think of the relationship between PCI and PCIe as similar to that between parallel SCSI and the serial interfaces that implement the SCSI command sets (SAS, iSCSI, and FC).

Re:Sigh (1)

virtual_mps (62997) | more than 7 years ago | (#17627572)

The PCI bus has been overwhelmed by modern video cards (which led to the AGP hack, which fortunately worked fairly well), SCSI and RAID controllers, Ethernet cards (PCI can't even give a single gigabit NIC enough bandwidth), USB 2.0, FireWire 400 and 800, etc., etc. PCI-X was complex, expensive, and not widely available. It also ate up too much of the motherboard real estate.

PCI-X is PCI, just clocked a little higher, and it is fairly common on server hardware where the increased bandwidth is actually useful. Actually, one of the reasons PCIe has taken so long to get adopted is that there aren't that many things that need it. On the desktop side, only graphics do (and that's what's really pushing adoption now). On the server side (where you'll find 64-bit slots) PCI hasn't been the bottleneck for any of the things you mention--especially since decent servers have multiple buses. 1GbE is only 125MB/s, which is far less than even the practical limits for PCI. USB 2.0 is 60MB/s and FW800 is only 100MB/s. PCI's base bandwidth is 133MB/s, 66MHz PCI is 266MB/s, 66MHz/64-bit PCI is 533MB/s, etc.--the stuff you named isn't bottlenecked on PCI (and is often on a dedicated channel integrated into the motherboard chipset and isn't even competing with add-in cards). 10GbE is where PCI really starts to hit the wall, as well as some of the newest RAID controllers (that have multiple fast disk channels on a single card).
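As a quick sanity check on those numbers: peak parallel-PCI bandwidth is just bus width times clock rate. The figures below are theoretical peaks only (and the device bandwidths are the rough ones quoted above); a shared bus sustains noticeably less, as the reply below points out.

```python
# Peak parallel-PCI bandwidth is just bus width (in bytes) times clock (MHz).
# Theoretical peaks only; a shared bus sustains noticeably less.

def pci_peak_mb_s(width_bits, clock_mhz):
    return width_bits / 8 * clock_mhz

buses = {
    "PCI 32-bit / 33 MHz":    pci_peak_mb_s(32, 33),    # ~133 MB/s
    "PCI 32-bit / 66 MHz":    pci_peak_mb_s(32, 66),    # ~266 MB/s
    "PCI 64-bit / 66 MHz":    pci_peak_mb_s(64, 66),    # ~533 MB/s
    "PCI-X 64-bit / 133 MHz": pci_peak_mb_s(64, 133),   # ~1066 MB/s
}
device_needs_mb_s = {"Gigabit Ethernet": 125, "USB 2.0": 60, "FireWire 800": 100, "10GbE": 1250}

for name, bw in buses.items():
    print(f"{name}: {bw:.0f} MB/s peak")
for name, need in device_needs_mb_s.items():
    print(f"{name}: needs roughly {need} MB/s")
```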

Re:Sigh (1)

Indy1 (99447) | more than 7 years ago | (#17629294)

You forget one thing: PCI is a shared bus. So you end up having USB 2.0, Ethernet, etc., all fighting for that same bandwidth. Also, the 133 MB/s is peak bandwidth. In reality, the sustained bandwidth is quite a bit less, depending on how well the southbridge chipset is designed (early VIA chipsets often had problems here).

PCIe gives each slot dedicated bandwidth, which is the biggest advantage of the technology.

Re:Sigh (1)

Joe The Dragon (967727) | more than 7 years ago | (#17629962)

There is an AM2 board with SLI and PCI-X.

Re:Sigh (1)

operagost (62405) | more than 7 years ago | (#17630736)

When the ATA standards (33, 66, 100, etc.) were adopted, everyone was saying the same thing: why in the hell is it needed?
Meanwhile, "everyone else" knew why it was needed and had been using SCSI for several years because of the performance advantages.

The PCI bus has been overwhelmed by modern video cards (which led to the AGP hack, which fortunately worked fairly well),
Probably because it wasn't a hack, but a well-documented and planned industry standard.

Re:Sigh (2, Insightful)

evanbd (210358) | more than 7 years ago | (#17625876)

Well, it's backward compatible with PCIe 1.0 and 1.1, so aside from price there's no disadvantage to including it. Whether there's demand for it is another question, but I'm sure the graphics card makers will find something to do with it. Think back to AGP 2x/4x -- those made it onto cards and motherboards fairly quickly, IIRC.

Re:Sigh (0)

Anonymous Coward | more than 7 years ago | (#17625894)

Is there something I am missing that will make this new standard magically find its way into computers in the next few months?
Backward compatibility?

One of the nice things about PCI Express is that it's a serial architecture, with negotiated speeds and lane configurations, ports rather than buses, and so on. So it's actually quite easy to extend the specification to higher speeds and wider lane configurations, unlike PCI, which we were stuck with at 32-bit/33 MHz forever, because wider/faster broke compatibility, and since PCI was a bus, you couldn't selectively run some slots faster.

To use analogies from recent hardware transitions, PCI to PCI Express was like moving from DDR to DDR2 SDRAM, and PCI Express 1.0 to PCI Express 2.0 is like moving from SATA 1.0 to SATA 2.0. DDR to DDR2 required completely new connectors, electrical specifications, etc., and you couldn't easily implement DDR and DDR2 together. SATA 1.0 and SATA 2.0, like PCI Express, use the same connectors, and both devices and controllers are backwards compatible (a SATA 2 drive can plug into a SATA 1 port, and vice versa, and you can mix SATA 1 and SATA 2).

At least in my experience, SATA 2 was much easier to adopt than changing all the RAM in my system. I just bought a new motherboard with a SATA 2 controller in it, and plugged in all my existing drives. Before I made the upgrade, I also had the option to buy SATA 2 drives which would upgrade in speed when I got my new motherboard. I've avoided transitioning from DDR to DDR2, though, because I'd need to replace all my gigabytes of RAM at the same time. I may skip a generation and go straight to DDR3, or at least AM3 which will support DDR2 and DDR3.

So yes, I think PCI Express 2.0 will be quickly adopted, at least at the enthusiast level. However, the situation may be much like SATA 2.0, where devices don't really take advantage of all the extra bandwidth. Even now, I believe PCI-E 16x still has a lot of headroom left in it, so we're not exactly crunched for bandwidth yet. Once PCI-E 2.0 goes into chipsets, though (and the chipset designers will probably put it immediately into their next generation sets), motherboard manufacturers won't have any reason not to support it, just like with SATA 2 ports.

Re:Sigh (4, Insightful)

mabinogi (74033) | more than 7 years ago | (#17625966)

I'd assume it'd be backwards compatible, similar to the AGP standards - in most cases you could stick any AGP card in an AGP 8x slot (as long as the motherboard still supported the voltages used by the older AGP versions, which was true in most cases).

If that's the case, then there's no barrier to adoption and manufacturers can just start cranking them out as soon as they're ready. It's only when a technology requires a completely new platform at multiple levels that adoption is slow, and that was why PCIe took so long.

Re:Sigh (1)

be-fan (61476) | more than 7 years ago | (#17626002)

Since PCIe 2.0 is electrically compatible, the transition is going to be more like the one between AGP versions (which happened quite quickly), rather than the transition from PCI to PCIe.

But I want it now! (1)

Per Abrahamsen (1397) | more than 7 years ago | (#17626122)

I think you just canceled your geek card.

Some interest in the next generation of technology, rather than just what you can buy in the local store today, is required for membership.

Next few months? How about Q3? (2, Informative)

kinema (630983) | more than 7 years ago | (#17626314)

Intel is scheduled to start shipping their X38 (aka "Bearlake") chipsets in Q3 of this year. The final v2 spec may have just been released, but it's been in development for some time, allowing engineers to at least rough out designs. Also, much of the logic from previous v1.x chipsets can be reused, as v2 is an evolution, not a completely new interconnect standard.

Re:Sigh (1)

ozbird (127571) | more than 7 years ago | (#17627386)

Dammit! I was about to upgrade my computer, now I have to wait for PCIe 2.0...

Will somebody please think of the procrastinators?

Pci-e cable (0)

Anonymous Coward | more than 7 years ago | (#17627402)

I like the idea of the PCIe cable; the workgroup is working on that, at PCIe 1.1 speed, as a bonus part of the PCIe 2.0 spec. Why replace the expensive double dual DVI port with another single-task port when you can run a 1.5 meter PCIe cable to the big-screen displays of tomorrow? If the display had a framebuffer, that is all you would need. Remember that AMD is working on bringing a GPU into the CPU, which could do some other math when not used as a GPU.

Now what if you want some added 3D power on such a compact PCIe-2.0-only system? Well, stick a proper 3D card on the back of your display and plug the PCIe cable into it.

--
Dennis Pennekamp

Re:Sigh (1)

BandoMcHando (85123) | more than 7 years ago | (#17627476)

Sure the specification is out, but it will take a long time I suspect to find its way into computers
Actually, the article states that:

Intel is expected to release its first PCIe 2.0 supporting chipsets, members of the 'Bearlake' family, next quarter.

Re:Sigh (1)

Civil_Disobedient (261825) | more than 7 years ago | (#17628118)

Is there something I am missing that will make this new standard magically find its way into computers in the next few months?

Multi-channel RAID adapters like those made by 3ware could benefit from the larger bandwidth.

Re:Sigh (1)

archen (447353) | more than 7 years ago | (#17628544)

That depends. For a simple mirror configuration I believe I worked it out that a one-lane channel was enough bandwidth by itself. For more aggressive RAID 5 setups you will need more than one lane. That's why you could get a four-lane card from a vendor like 3ware.

Here's the hitch, though: many mainboard manufacturers are cutting corners and giving you a four-lane slot with one lane's worth of bandwidth. So in a way this could help; because of the crap the Chinese give us, the upgrade to PCIe 2.0 will at least force that one lane to be twice as fast as it is now.
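As a rough illustration of that lane math: the per-drive sustained throughput below is an assumed figure for drives of this era, not a measurement, and the lane bandwidths are the net (post-8b/10b) per-direction numbers.

```python
# Rough check: how many PCIe lanes can a small disk array actually use?
# The 60 MB/s sustained-per-drive figure is an assumption, not a spec.
import math

LANE_MB_S = {"PCIe 1.x": 250, "PCIe 2.0": 500}   # net per-lane, per-direction, after 8b/10b
DRIVE_MB_S = 60                                   # assumed sustained rate of one drive

def lanes_needed(drives, lane_mb_s):
    return math.ceil(drives * DRIVE_MB_S / lane_mb_s)

for drives in (2, 4, 8):
    needs = ", ".join(f"{gen}: {lanes_needed(drives, bw)} lane(s)"
                      for gen, bw in LANE_MB_S.items())
    print(f"{drives} drives -> {needs}")
```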

Re:Sigh (1)

SirLoadALot (991302) | more than 7 years ago | (#17628382)

Actually, Intel will have chipsets supporting it out next quarter (Q2). The preliminary spec has been pretty stable, so most companies should be well on their way to having the chips ready. I don't know how long before companies like nVidia sell cards for it, mind you -- but since it's backwards compatible, having the slots on your next motherboard has no downside.

Re:Sigh (1)

ossie (14803) | more than 7 years ago | (#17630484)

Probably not in the next few months, but vendors should have products out in 2008.

I know the /. crowd is primarily concerned with video performance, but there is a lot more to PCIe than just video. The new speeds will probably be more beneficial for switch-to-switch PCIe connections.

There is a lot of cool stuff going on in the PCI-SIG; the SR and MR (single-root and multi-root) specifications for I/O virtualization are especially cool. SR allows an endpoint (a PCIe device) to export virtual functions to a host running a hypervisor. So, for example, this means that Xen could use SR to give each guest physical access to a Fibre Channel HBA, and the guest would load the native driver for the HBA.

MR is really cool too - it allows multiple hosts to share the same PCIe device. So think of a blade chassis where you have PCIe slots in the back: all the blades could share a single 10 Gbps NIC or HBA or whatever. This also has the added advantage of separating the I/O from the blade (currently any expansion devices have to go directly on the blade).

A small blurb on SR and MR can be found here:

http://www.pcisig.com/specifications/iov/ [pcisig.com]

http://www.techworld.com/opsys/features/index.cfm?featureid=2728&pagtype=samecatsamechan [techworld.com]

To answer your question... (1)

jd (1658) | more than 7 years ago | (#17630508)

Yes, no and maybe, but not necessarily in that order.

I attended the PCI SIG conference on virtualization for the new spec. There are two forms of virtualization that will (eventually) be supported - multiple operating systems on the same machine having access to their own private virtual PCI bus, and multi-mastered PCI busses where you can have multiple mainboards driving a virtual PCI bus that spans multiple machines.

The latter is a godsend for cluster builders - why bother with having tightly-coupled NICs on the far side of the PCI bus, when you can simply have the PCI bus carry everything directly to the endpoint? There's less work, less conversion, so less latency and fewer possibilities of data errors. Since it's 5 Gb/s per lane, you can also get speeds far in excess of those offered by most NIC vendors, if you're careful about how the bus divides up the bandwidth between nodes.

So, yes, I imagine the high-end HPC market will have machines that can do PCIe 2 in fairly short order.

Will anyone else see the benefits? Oh, I imagine high-end data centers will be interested. They can now double the width of striped disk arrays and not worry about bandwidth. Microsoft and VMWare will be urging rapid adoption, because of the virtualization abilities - it'll be faster than software virtualization and Microsoft gets to blame someone else if something goes wrong.

Intel will almost certainly be pushing the technology, as the spec allows for proprietary extensions. This means that they can build controllers that work only with vendors they approve of (and AMD is unlikely to be one of them). It doesn't matter whether Intel would or would not do that; what matters is that they can, which means if they're first to implement, they get a BIG stick to ward off rivals.

Nothing beats GPU in the CPU (3, Interesting)

suv4x4 (956391) | more than 7 years ago | (#17625754)

It'll be interesting to compare the performance of the built-in GPU unit in the new Fusion AMD processors, and the latest PCIe.

That said, of course PCIe has more applications than hosting a GPU card.

Re:Nothing beats GPU in the CPU (0)

Anonymous Coward | more than 7 years ago | (#17626192)

It'll be interesting to compare the performance of the built-in GPU unit in the new Fusion AMD processors, and the latest PCIe.

Not really. AMD's primarily targeting the mobile market with Fusion [hardocp.com] .

Re:Nothing beats GPU in the CPU (2, Insightful)

suv4x4 (956391) | more than 7 years ago | (#17626266)

Not really. AMD's primarily targeting the mobile market with Fusion

And...? I use a laptop, for example.

Also, multiple CPUs were primarily targeting the server market... but look at the processors now: two fully functional CPU cores in one processor, even for non-pro desktop machines.

Re:Nothing beats GPU in the CPU (0)

Anonymous Coward | more than 7 years ago | (#17626482)

High-end GPUs by themselves are gigantic [vr-zone.com]. There's no way they'd be able to put a top-of-the-line GPU and CPU on the same die. They even said it isn't going to be for high-end graphics in this slide [tinypic.com].

Re:Nothing beats GPU in the CPU (1)

Andy Dodd (701) | more than 7 years ago | (#17627366)

Except that with multiprocessor systems, you had to have matched processors in almost all cases. The result was that if you upgraded one processor you were basically forced to upgrade all (with a few possible rare exceptions).

So moving them into a single package had no disadvantage - you couldn't upgrade them individually anyway.

Integrated graphics (GPU + CPU) is a different story, two components with a LONG history of being upgraded independently of each other.

Heck, even motherboard-integrated graphics (GPU + chipset) never fared well outside of the mobile, server, and super-lowend markets.

Re:Nothing beats GPU in the CPU (1)

TheRaven64 (641858) | more than 7 years ago | (#17627574)

Integrated graphics (GPU + CPU) is a different story, two components with a LONG history of being upgraded independently of each other.
I keep hearing this, but I've never actually seen it. I bought a PCI Voodoo2, but my next graphics card needed AGP, so I had to upgrade my motherboard and CPU as well. When I next came to upgrade, I really needed an AGP 4x motherboard for the card to run at full speed. Then I started using laptops exclusively and had to upgrade the whole machine at once, but if I wanted to upgrade my old desktop's GPU, it would need a PCIe motherboard.

Re:Nothing beats GPU in the CPU (1)

Andy Dodd (701) | more than 7 years ago | (#17629538)

Interface upgrades don't happen too often.

How long was AGP around? At least 6 years in the mainstream (starting around 1998 when I built my first system for school, ending 1-2 years ago). Yes, in many cases you would get better performance when upgrading GPU and CPU at once, but you could make a system last MUCH longer by swapping out only the video card and not the CPU.

I think my desktop's first incarnation went through 3 different graphics card iterations before a CPU upgrade; my current desktop incarnation will be going through its second graphics upgrade without a CPU change.

Graphics chipsets have typically had a release cycle of 6 months or so between major revisions (I think it's longer nowadays), as compared to 6+ YEARS for major compatibility-breaking bus changes, and maybe 1-2 years for "minor" bus improvements that don't necessarily warrant a "full blown" upgrade (e.g. AGP 4x vs. 8x, or PCIe 1.1 vs. 2.0).

R600/G80 can beat it (1)

Wesley Felter (138342) | more than 7 years ago | (#17630366)

High-end GPUs are so large there's no room to fit a processor on the same die. So Fusion will inevitably have lower performance than the best discrete GPUs.

hmm (1)

mastershake_phd (1050150) | more than 7 years ago | (#17625934)

the electromechanical specification is due to be released shortly.

I hope this means they will release the specification to the public unlike the AGP spec.

Re:public availability of spec (1)

sunderland56 (621843) | more than 7 years ago | (#17630338)

No. PCI specs are only available to members of PCI SIG; to get them or to develop hardware you need to be a member. If you thought AGP was secret, wait until you try to get technical details on PCIe.


Even then it isn't easy - my company is a member but it's easier for me to go to the store and buy a copy of the Anderson book on PCIe [amazon.com] than to get the official spec.

Who cares! (2, Interesting)

winchester (265873) | more than 7 years ago | (#17625958)

2.5 to 5 Gbps is still "only" 250 to 500 MB/s (roughly). My SGI Octanes could do that 7 years ago! (And still do, regularly, for the record.) So what's the fuss?

Re:Who cares! (4, Funny)

prefect42 (141309) | more than 7 years ago | (#17626232)

Anyone who pretends Octanes are high performance (or even *were*) needs help. And I've got a pair in the cupboard.

Re:Who cares! (0, Offtopic)

Coucho (1039182) | more than 7 years ago | (#17627634)

And I've got a pair in the cupboard.
Oh yeah? Well I have a pair... in my pants!

Re:Who cares! (1)

cnettel (836611) | more than 7 years ago | (#17626892)

That's for one lane... 4x is quite normal for slots not intended for graphics, and these days you can find motherboards with 2 true 16x slots and one 16x physical/8x electrical slot.

Re:Who cares! (1)

TheRaven64 (641858) | more than 7 years ago | (#17627600)

Each slot may have up to 32 lanes. This means the maximum speed is going from 8000MB/s to 16000MB/s. Does this sound a bit more impressive?

Re:Who cares! (0)

Anonymous Coward | more than 7 years ago | (#17628436)

That's 500MB/s per lane. An x16 slot has 8GB/s per direction (16GB/s total). I don't think your Octane had that.

Re:Who cares! (0)

Anonymous Coward | more than 7 years ago | (#17629644)

Because SGI no longer exists (except in bankruptcy proceedings).

Slightly off topic.. (1)

Nazlfrag (1035012) | more than 7 years ago | (#17625968)

Radio-frequency, Bluetooth-style bus architectures with decent range, set up in a fashion to share resources with nearby devices, creating mainframe-style computers for all to use. This should be the new standard for bus architectures; whaddaya think?

Re:Slightly off topic.. (1)

DragonTHC (208439) | more than 7 years ago | (#17626022)

The What-If Machine says it's a horrible idea.

Re:Slightly off topic.. (1)

gbjbaanb (229885) | more than 7 years ago | (#17626418)

You cannot get the bandwidth out of it - if you could, wireless monitors would be all the rage at the moment, but unless you want to run 640x480 it isn't going to happen. Also, interference is a problem, and while errors can be corrected out at low data transmission rates, if you try to pump massively more data through it, you will get problems.

Otherwise, I think it's a great idea. I think it's perfect for cars: get rid of the wiring loom and replace much of it with cabling only for critical parts, and all the non-critical bits could run off a wireless network, which would reduce the weight of the vehicle.

Re:Slightly off topic.. (1)

TheRaven64 (641858) | more than 7 years ago | (#17627682)

You can get 300MB/s over 802.11n. This is more than enough for a lot of peripherals. My monitor runs at 1920x1200. In 32-bit colour, this is 9216000 bytes per frame, or 8MB. I would need 480MB/s to run the monitor at 60Hz (standard for TFTs) if the data is not compressed at all. Even doing some simple lossless compression it would be quite easy to get the data rate down to under 300MB/s.

Re:Slightly off topic.. (1)

gbjbaanb (229885) | more than 7 years ago | (#17629084)

Remember the difference between 300 megaBITS per second over 802.11n, and 480 megaBYTES that your calculations show you need for your display.

I'm not sure you can get 300 Mbps over 802.11n; all the web pages I've just googled say 100 Mbps, possibly up to 200 Mbps in real-world situations. But if we assume your monitor isn't that far away from the transmitter and you can get 200 Mbps, you're still quite a way under what you need (640x480x16 @ 60 Hz is roughly 295 Mbps).

While these new wireless standards appear to offer incredible speeds, they're still some way from the 8 Gigabits per second a 16x PCIe bus offers, and DDR3 memory at 18Gbps bandwidth.

There might be more promise in 'intelligent' displays, where the graphics card is built into it, and drawing primitives are transmitted over air, but I doubt we'll ever get the display transmitted directly, not even for normal TV quality pictures (now everything is HD).
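The uncompressed-display arithmetic in this exchange is easy to redo exactly; the figures quoted above are approximate, and the bits-versus-bytes distinction is where the confusion usually creeps in. A small sketch (the helper name is illustrative):

```python
# Uncompressed display bandwidth = width * height * bytes-per-pixel * refresh rate.
# Exact values for the two modes discussed above; note bits vs. bytes.

def display_bandwidth(width, height, bits_per_pixel, hz):
    bytes_per_frame = width * height * bits_per_pixel // 8
    bytes_per_sec = bytes_per_frame * hz
    return bytes_per_sec, bytes_per_sec * 8          # (bytes/s, bits/s)

for w, h, bpp, hz in ((1920, 1200, 32, 60), (640, 480, 16, 60)):
    b, bits = display_bandwidth(w, h, bpp, hz)
    print(f"{w}x{h}x{bpp} @ {hz} Hz: {b / 1e6:.0f} MB/s = {bits / 1e6:.0f} Mbit/s uncompressed")
```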

Another device that will support Vista (1)

DemoFish (1051816) | more than 7 years ago | (#17626072)

This is just another device where us Linux users will have to pay the extra fee for useless Vista compatibility.

Re:Another device that will support Vista (1)

dave420 (699308) | more than 7 years ago | (#17626398)

Yes. That's what happens when you are in a niche market. Capitalism is a bitch, huh?

It aint capitalism (1, Insightful)

Anonymous Coward | more than 7 years ago | (#17626684)

until you get rid of copyrights, patents and incorporation.

Oh, and laws like the DMCA and the newer laws on mandatory DRM.

They aren't capitalist. They're socialist.

Re:It aint capitalism (1)

dave420 (699308) | more than 7 years ago | (#17631206)

No, if it costs too much to port an application to another platform (development costs > predicted market taking), it isn't ported. The decision has nothing to do with socialism, copyright or DRM - the only factor is the price. If it's too expensive, it doesn't happen. Linux's market share is TINY compared to windows, so if it costs similar amounts of money to develop an application for Windows and Linux, and Windows's market share is 10x that of Linux, Linux gets ignored. I don't know how you can attribute it to anything else, but please enlighten me.

Re:Another device that will support Vista (0)

Anonymous Coward | more than 7 years ago | (#17627030)

Why didn't you just admit to yourself that you don't know what PCIe is, and save yourself the hassle of posting such a stupid comment?

Re:Another device that will support Vista (1)

Andy Dodd (701) | more than 7 years ago | (#17627388)

How is that?

Other than being a "bump" of PCI Express, it is no different from PCI Express. It is most definitely no different in terms of licensing and implementation in the OS.

Actually, the nice thing is that even PCI Express was no different from PCI at the OS level. To an operating system, PCI Express peripherals just appear as really fast PCI peripherals - at that level of abstraction they are the same.

PCIe 1.0 and 1.1 are perfectly supported under Linux, why would 2.0 be any different?

Re:Another device that will support Vista (0)

Anonymous Coward | more than 7 years ago | (#17628306)

I entirely agree with you on this one, and there was a great cost analysis published earlier this year by Peter Gutmann, which I will quote:

... a graphics chip is integrated directly into the motherboard and there's no easy access to the device bus then the need for bus encryption (see "Unnecessary CPU Resource Consumption" below) is removed. Because the encryption requirement is so onerous, it's quite possible that this means of providing graphics capabilities will suddenly become more popular after the release of Vista. However, this leads to a problem: It's no longer possible to tell if a graphics chip is situated on a plug-in card or attached to the motherboard, since as far as the system is concerned they're both just devices sitting on the AGP/PCIe bus. The solution to this problem is to make the two deliberately incompatible, so that HFS can detect a chip on a plug-in card vs. one on the motherboard. Again, this does nothing more than increase costs and driver complexity ...

http://www.cs.auckland.ac.nz/~pgut001/pubs/vista_cost.txt [auckland.ac.nz]

When will it be enough?

Fantastic news!! (4, Funny)

waynemcdougall (631415) | more than 7 years ago | (#17626104)

This is a firmware upgrade, right?

What Dept. (1)

Reverse Gear (891207) | more than 7 years ago | (#17626584)

Why does this post not come from any department?

How am I able to see how trustworthy posts like this are when I don't know where they are from? ;)

Wrong units (1)

reking2 (813728) | more than 7 years ago | (#17627488)

In your article, you say "The upshot: a x16 connector can transfer data at up to around 16GBps." (GigaBytes per second). Everyone else is reporting it as 16gbps (Gigabits per second). I realize it is only a factor of eight off, but I expect better from /.

Re:Wrong units (1)

onemorechip (816444) | more than 7 years ago | (#17631938)

16 Gigabits per second (Gb/s or Gbps, not gbps) on an x16 would be 1 Gb/s per lane. This spec goes to 5 Gb/s per lane (per direction), so your figure is off by a factor of 5 (or 10, if you consider the case of both directions simultaneously saturated).

The reason it's 16 (for full duplex) and not 20 is that 8b10b encoding requires 10 bits on the serial link to encode 1 byte of data.

So what does this mean, actually? (1)

hesaigo999ca (786966) | more than 7 years ago | (#17627776)

I did not get the following from the article:

Does this mean it will work on my computer now if I get a firmware upgrade, or do I need to replace the part on the motherboard with a newer one to get this new speed?

If I need a firmware upgrade, will I get it from Windows, from the motherboard manufacturer, or will just any site do?

If I do need to buy it, how long before any cards are made, and what price can we expect to pay?

I can get a used P4, with all the bells and whistles, for about $200 CAD, minus the DVD burner (another $50)... if this is going to cost more than $100, I won't bother.
I don't need the speed enough to go faster at reading my emails, or
checking the latest p00rn websites.

Math??? (0)

Anonymous Coward | more than 7 years ago | (#17629466)

How does an x16 get 16 Gigabytes/s with a rate of 5 Gigabits/s? That seems to me to be 16 * 5 Gb/s = 80 Gb/s = 10 GB/s, since we still have to cram 8 bits into a byte.

Re:Math??? (2, Informative)

onemorechip (816444) | more than 7 years ago | (#17632192)

80 Gb/s would be the half-duplex bandwidth. Full duplex is 160 Gb/s (if you can find an application to utilize all of both directions). PCIe uses an encoding of 10 bits to the byte, for numerous technical reasons but primarily to maintain DC balance (50% ones, 50% zeros) and to limit run lengths so that the clock (embedded in the serial stream) can be recovered at the receiving end. 160/10 = 16 GB/s.

cards on the HyperTransport is better (1)

Joe The Dragon (967727) | more than 7 years ago | (#17629904)

How long until we start to see HTX video and other cards?

Re:cards on the HyperTransport is better (1)

Wesley Felter (138342) | more than 7 years ago | (#17630464)

Never. The marginal benefit of HTX is outweighed by the cost of making two different variants of the GPU.

HTX should really be renamed to the PathScale slot, since they're the only ones who use it (and probably the only ones who ever will).