
271 comments


OMG (-1, Redundant)

Anonymous Coward | more than 9 years ago | (#11734084)

The requested URL (articles/05/02/21/0427230.shtml?tid=137&tid=118&tid=95) was not found.

If you feel like it, mail the url, and where ya came from to pater@slashdot.org.

Were you born a lazy sack of shit? (-1, Troll)

Anonymous Coward | more than 9 years ago | (#11734085)

Or did you have to work on it?

Re:Were you born a lazy sack of shit? (0, Funny)

Anonymous Coward | more than 9 years ago | (#11734346)

I accelerate TCP/IP stacks...

with my ASS!

Great (0, Offtopic)

g8way (547878) | more than 9 years ago | (#11734096)

Yet another processor that requires liquid nitrogen.

Good stuff! (5, Interesting)

kernelistic (160323) | more than 9 years ago | (#11734099)

First checksum offloading, now this... It is nice to see that hardware vendors are realizing that 10 Gbit/s+ speeds aren't currently realistic without additional computational support from the underlying network interface hardware.

This is Good News.

Re:Good stuff! (5, Informative)

RatRagout (756522) | more than 9 years ago | (#11734202)

Yes. Checksum was one of the problems. The other problem is the memory-to-memory copying of data due to the semantics of the TCP/UDP send() call. These semantics require that the data in the memory location at the time send() is called is the data that gets sent. If the application changes the data immediately after the send() call, this must not affect what is sent. This means that the OS has to copy the data into kernel memory and then, at some later time, copy it onto the NIC. This memory-to-memory copying becomes a severe problem as traffic and bandwidth increase.
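
A minimal C sketch of why those semantics force a copy (illustrative only; the socket setup and buffer size are assumptions):

#include <string.h>
#include <sys/socket.h>

void sender(int sock)
{
    char buf[1500];

    memset(buf, 'A', sizeof(buf));
    send(sock, buf, sizeof(buf), 0);   /* kernel snapshots buf into its own buffers */

    /* Perfectly legal: the application may reuse the buffer right away.
     * The bytes already queued must still go out as 'A', which is why the
     * kernel-side copy (and its memory-bandwidth cost) is unavoidable
     * under classic BSD socket semantics. */
    memset(buf, 'B', sizeof(buf));
}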

Re:Good stuff! (5, Informative)

kernelistic (160323) | more than 9 years ago | (#11734227)

There have been multiple fixes to address the inefficiencies of the original design of the BSD TCP/IP stack.

FreeBSD, for example, has a kernel option called ZERO_COPY_SOCKETS, which dramatically increases the network throughput of syscalls such as sendfile(2). With this option enabled, as the name implies, data is no longer copied from userland into kernel space and then passed onto the network card's ring buffers; it goes across in one pass!
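
For reference, a minimal sketch of the sendfile(2) call mentioned above, using the FreeBSD signature (the descriptors are assumed to be an already-open file and a connected socket; error handling is mostly elided):

#include <sys/types.h>
#include <sys/socket.h>
#include <sys/uio.h>

/* Queue an entire already-open file onto a connected socket. The kernel
 * hands file pages to the NIC without a round trip through a userland
 * buffer, which is the point of the zero-copy path. */
int send_whole_file(int fd, int sock, off_t length)
{
    off_t sent = 0;

    /* FreeBSD: sendfile(fd, s, offset, nbytes, hdtr, &sbytes, flags) */
    if (sendfile(fd, sock, 0, (size_t)length, NULL, &sent, 0) == -1)
        return -1;

    return (int)sent;   /* bytes queued for transmission */
}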

Re:Good stuff! (2, Interesting)

RatRagout (756522) | more than 9 years ago | (#11734268)

For sending files I'm sure this has increased performance greatly, since without it you might have to first read the file into userland, copy it into the kernel, and then copy it onto the NIC. Reading directly from disk to a TOE would of course be the real overhead-killer. Zero-copy techniques are also used by newer APIs like uDAPL for RDMA operations (over InfiniBand or similar).

Re:Good stuff! (1)

should_be_linear (779431) | more than 9 years ago | (#11734233)

I've also noticed that many enterprise servers spend most of their CPU power parsing XML. I wonder why nobody (Intel, AMD) has hardware aid for this. It would also have huge PR benefits in the enterprise/SMB market. I guess it would be UTF-8 encoding only, but that's not much of a limitation, is it?

A good thing (-1, Troll)

qwerty55 (858835) | more than 9 years ago | (#11734105)

With the ever growing popularity of networked and distributed computing, I think a technology like this may yield some real benefits.

Re:A good thing (1)

SinaSa (709393) | more than 9 years ago | (#11734180)

With the ever growing popularity of fluff statements like this one, I think a statement like the parent may yield no real benefits to this discussion.

Re:A good thing (0, Offtopic)

Jugalator (259273) | more than 9 years ago | (#11734219)

With the ever growing wish by some to get first posts, I think the little time spent writing a post may yield exactly that kind of quality.

Re:A good thing (5, Funny)

Quobobo (709437) | more than 9 years ago | (#11734187)

Newly discovered, a simple and easy karma-gaining method! Amaze your friends, and become more eligible to moderate!

1. Refresh your browser constantly until there's a new story on Slashdot, to post before everyone else.

2. Post something similar to "This is good/bad, for INSERT_OBVIOUS_REASON_HERE. And fuck the INSERT_RIAA-LIKE_ORGANIZATION_HERE." (second sentence is optional)

Attn MODS. (1, Redundant)

DAldredge (2353) | more than 9 years ago | (#11734213)

I will do this slowly so you can understand.

HE
DIDN'T
SAY
A
DAMN
THING!

Re:A good thing (-1, Offtopic)

Anonymous Coward | more than 9 years ago | (#11734337)

Reading that post made me stupider. Oh no! Look what you've done! My brain is coming out of my head! [penny-arcade.com]

Now, the Pentium V .. (-1, Redundant)

Anonymous Coward | more than 9 years ago | (#11734106)

REALLY accelerates the internet :D

Re:Now, the Pentium V .. (-1, Troll)

Anonymous Coward | more than 9 years ago | (#11734141)

In Soviet Russia, the internet accelerates Intel.

finally... (5, Funny)

N5 (804512) | more than 9 years ago | (#11734109)

Intel is working on something worthwhile: a cure for the common slashdotting

and they say the drug companies are miracle workers ;)

White elephant? (5, Interesting)

Toby The Economist (811138) | more than 9 years ago | (#11734112)

I think in Tannenbaum's book there's a reference which states that offloading network processing normally isn't useful, because the CPU the work is offloaded to is always less powerful than the main CPU, and the main CPU is normally blocked in its task until the network processing has completed.

--
Toby

Re:White elephant? (0)

Anonymous Coward | more than 9 years ago | (#11734133)

Of course, all CPUs/ICs/ASICs are equally as powerful as one another, specific instructions/clock speed/whatever the fuck be damned, right? :P

Re:White elephant? (1, Insightful)

Anonymous Coward | more than 9 years ago | (#11734134)

Doesn't matter. Intel is eyeing AMD's success at courting the ricer community and trying to horn in on that action.

Re:White elephant? (2, Informative)

Uhlek (71945) | more than 9 years ago | (#11734139)

That all depends on how it's done. Simply offloading the processing won't work, but replacing the TCP/IP drivers with simple hooks into a hardware-based I/O system can.

Re:White elephant? (5, Informative)

Toby The Economist (811138) | more than 9 years ago | (#11734168)

You're implying that the hardware implementation will be faster than the main CPU, which it almost certainly won't be, because if you've just spent 300 USD on your P4 CPU, what are you doing spending the same amount again - or more - just on your network subsystem?

Also remember that a well implemented TCP/IP stack runs at about 90% of the speed of a memcpy() (Tannenbaum's book again).

For hardware TCP/IP processing to be useful, you need to be, say, 2x the speed of the CPU's memcpy()!

Given that the main performance bottleneck is memory access, since you're basically copying buffers around and so caching isn't going to help you, I don't see how any sort of super-duper hardware is going to give you anything like a 2x speedup, let alone at an economic price.
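
A rough way to put a number on that memcpy() ceiling for a given machine (illustrative only; the buffer size and iteration count are arbitrary assumptions):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

/* Measure approximate memcpy() bandwidth, the ceiling the parent post
 * argues a well-written software TCP/IP stack already runs close to. */
int main(void)
{
    const size_t len = 64 * 1024 * 1024;          /* 64 MiB per copy */
    const int iters = 32;
    char *src = malloc(len), *dst = malloc(len);

    if (!src || !dst)
        return 1;
    memset(src, 0xAB, len);

    clock_t start = clock();
    for (int i = 0; i < iters; i++)
        memcpy(dst, src, len);
    double secs = (double)(clock() - start) / CLOCKS_PER_SEC;

    printf("memcpy bandwidth: ~%.0f MB/s\n",
           (double)len * iters / (1024.0 * 1024.0) / secs);
    free(src);
    free(dst);
    return 0;
}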

--
Toby

many white elephants (1)

Joseph_Daniel_Zukige (807773) | more than 9 years ago | (#11734197)

Think 80186, ergo, "I/O coprocessing instructions". ;-)

Re:White elephant? (1)

MatthewNewberg (519685) | more than 9 years ago | (#11734205)

Is there any way they could reduce the amount of data in the stack?

Re:White elephant? (5, Informative)

Uhlek (71945) | more than 9 years ago | (#11734312)

Hardware implementation will most definitely be leaps and bounds faster than the general CPU. Can a Linux router route 720Gbps of traffic through hundreds of interfaces at once? No. But a Cisco 6500 can, because of hardware designed especially for the task.

Simply put, software on general purpose processors sucks for doing heavy computational work. Hardware tuned especially for a task always has been, and always will be, where it's at. However, the costs involved in creating ICs specific to a task usually mean that ASICs are only created where there is a need. Modern graphics cards are a great example. The on-board graphics processors are designed especially to create graphics, something that, if offloaded onto the general-purpose CPU, would crush even the highest of the high end.

Also, offloading the TCP/IP stack on a normal workstation probably isn't going to be a huge performance boost. Where this will be useful is in situations where there is a need for high-throughput, low-latency network I/O processing.

Re:White elephant? (-1, Troll)

Anonymous Coward | more than 9 years ago | (#11734148)

Tannenbaum? The guy who sells kitchen gear for $9.99 on the ad channel at 3am?

Re:White elephant? (0)

Anonymous Coward | more than 9 years ago | (#11734169)

The CPU is never blocked on I/O in a modern operating system. The I/O is scheduled and the completion is managed asynchronously.
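
A minimal sketch of what that looks like from userland, using a non-blocking socket and select() (socket setup assumed; error handling elided):

#include <fcntl.h>
#include <sys/types.h>
#include <sys/select.h>
#include <sys/socket.h>

/* Issue a read without tying up the CPU: if no data is ready, the kernel
 * notes our interest and the CPU is free to run other threads until
 * select() reports the descriptor readable. */
ssize_t read_when_ready(int sock, char *buf, size_t len)
{
    fd_set rfds;

    fcntl(sock, F_SETFL, O_NONBLOCK);   /* never block inside recv() */

    FD_ZERO(&rfds);
    FD_SET(sock, &rfds);

    /* This thread sleeps here, but only this thread; completion is
     * delivered as readiness and the scheduler runs something else. */
    if (select(sock + 1, &rfds, NULL, NULL, NULL) <= 0)
        return -1;

    return recv(sock, buf, len, 0);
}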

Re:White elephant? (2, Informative)

Toby The Economist (811138) | more than 9 years ago | (#11734182)

Any given thread which needs network I/O cannot continue until that I/O is complete. The fact the CPU can switch elsewhere makes no difference to the thread which requires the network packet to be processed before it has the information it requires to continue, and if that processing is offloaded to a slower network processor, the performance of that thread is degraded.

--
Toby

Re:White elephant? (0)

Anonymous Coward | more than 9 years ago | (#11734410)

My operating system is not DOS and I have multiple threads running on the CPU. So clearly, offloading network I/O to extra hardware frees up my CPU to do something more useful, for example providing dynamic web content, or scanning incoming email for spam, or, hell, just redrawing damaged window regions on my desktop.

I'm not really picking on you, I just thought I'd point out that this idea that "You can't offload IO to hardware because the process is blocked on that IO anyway!" is very outdated. It may have made some sense in 1983 when Minix was written but it doesn't make any sense today.

Re:White elephant? (4, Interesting)

mr_zorg (259994) | more than 9 years ago | (#11734203)

I think in Tannenbaum's book there's a reference which states that offloading network processing normally isn't useful, because the CPU the work is offloaded to is always less powerful than the main CPU and the main CPU is normally blocked in its task until the network processing has completed.

I think in xyz's book there's a reference which states that offloading graphics processing normally isn't useful, because the CPU the work is offloaded to is always less powerful than the main CPU and the main CPU is normally blocked in its task until the graphics processing has completed.

See how silly that sounds when you substitute network with graphics? We all know that offloading graphics processing is a good thing. Why? Because it's optimized for the task. Why couldn't the same be done for networking?

Re:White elephant? (3, Interesting)

Joseph_Daniel_Zukige (807773) | more than 9 years ago | (#11734245)

See how silly that sounds when you substitute network with graphics?

Well, does waiting 3 milliseconds at 3 GHz outrun waiting 3 milliseconds at 300 MHz?

The only advantage I can see to this is that it's often nice to have I/O handled in a separate process/thread running on a separate processor. But, as many have already noted, unless the I/O processor is tuned for this you've either got another expensive processor or you're running the I/O thread on a slower processor.

If the processor _is_ tuned for this purpose, it's already been done. Most Ethernet i/f cards have a fair amount of intelligence on them already, and complete stacks have been available on cards for about as long as I've been aware of ethernet. (twenty years?)

Re:White elephant? (5, Interesting)

Jeff DeMaagd (2015) | more than 9 years ago | (#11734248)

Graphics and networking are two very different things. Networking isn't compute intensive, it is I/O intensive. I don't think the Intel hardware network offload is for much more than basic computation.

Besides, GPUs are more powerful than CPUs at the task of rendering polygons.

Very often ASICs are better at a task than general purpose CPUs; it's just that consideration must be given to whether the performance gain is worth the cost difference.

Re:White elephant? (4, Informative)

Toby The Economist (811138) | more than 9 years ago | (#11734307)

You can accelerate graphics to a very large degree because the problem is very subject to parallelism.

You cannot accelerate networking very much because the problem is highly serial.

It is improper to compare the two because they are fundamentally different problems.

You can throw tons of hardware at 3D graphics and get good results, because just by having more and more pipelines, you go faster and faster.

Processing a network packet is quite different; the data goes through a series of serial steps and eventually reaches the application layer. The only way you can really make it go faster is to up the clock rate, and you find it's uneconomic to try to beat the main CPU, which remember has *already* been paid for. You have all that CPU for free; to then spend the kind of money you'd need to outpace the CPU makes no sense, let alone the money you'd need to spend to outpace the CPU by a decent margin.

--
Toby

Lots of people agree, including AC and DM (4, Informative)

Anonymous Coward | more than 9 years ago | (#11734211)

AC being Alan Cox, DM being Dave Miller.

Read Alan's opinion here [theaimsgroup.com] .

Read Dave's opinion here [theaimsgroup.com] .

There has been discussion of this specific Intel announcement here [theaimsgroup.com] .

Re:White elephant? (1)

UniverseIsADoughnut (170909) | more than 9 years ago | (#11734241)

Not that some of the things Intel does aren't marketing driven, but I doubt they would go about doing this if they didn't have good reason to.

It's not like this would be an easy thing to sell in a way that people would really understand very well. But regardless, they aren't going to develop a whole new piece of hardware that is worthless. Making a design decision that pushes something down a bad path, like clock speed, is a whole different issue. I'm pretty sure the Intel guys would think this one out before spending a ton of cash working on it.

I think Intel is realizing that the future is much brighter in delivering hardware that distributes the work of the computer out to many parts instead of doing everything in software on the CPU. Not only does it mean their CPUs don't have to work as hard, thus less heat and power draw, but it also means they get to sell more chips. And specialized hardware will always be faster than software. The CPU should be used for stuff that can't easily be done in hardware, or for emerging things. Once something is well defined, it should get moved out to its own chip.

I just picture them moving towards more Centrino-type families.

Re:White elephant? (1)

JollyFinn (267972) | more than 9 years ago | (#11734250)

What the heck. A few factoids:
The main CPU runs multiple things.
The cost of network traffic is cache flushes and context switches. And so on.
A general purpose CPU is much weaker than a special purpose CPU, if you can parallelize at all.
And MFG costs, my ass. These things should be relatively small.

Think of the following scenario:
Network interrupt -> context switch -> move a lot of data around and compute somewhat -> context switch.
Then finish what I was doing, and then compute the thing that was just put in the queue (unless some other processor does it first ;)
VS
I finish the previous thing, check if there is anything new for me to crunch on; if not, I yield the processor voluntarily in case some other thread needs it. But there is something, so I can continue running my code from the trace cache that would have been flushed by a context switch...
I can see an order of magnitude difference between those two approaches. Remember, a TLB miss is REALLY expensive, as are instruction cache misses and fetches from main memory.

Re:White elephant? (0)

Anonymous Coward | more than 9 years ago | (#11734274)

Consider the fact that you can get extremely high performance for a highly specialized task using an FPGA. It's conceivable that they could create hardware designed for these specific tasks that could outperform the main CPU (for these specific tasks) without requiring transistor densities as great as those needed for top-of-the-line general purpose processors.

For some tasks a general purpose processor can be at a disadvantage, since it carries "dead weight" unnecessary for the task.

Re:White elephant? (2, Insightful)

Trogre (513942) | more than 9 years ago | (#11734279)

Try telling that to Amiga fans in 1989-1992.

Those little boxes were masters at multi-processing, and they did it right - one processor for pretty much every major peripheral task (disk, graphics, sound, something else I can't remember).

As long as these Intel coprocessors are going to be an open standard (which they almost certainly won't be), I'd welcome this addition to the PC architecture.

And the CPU doesn't have other things to do? (3, Insightful)

Moderation abuser (184013) | more than 9 years ago | (#11734323)

My boxes all run tens to hundreds of processes for tens to hundreds of people. Offloading the processing to a networking subsystem isn't going to hurt, especially with gig and 10gig.

Not that this is a new idea. It's been done for donkey's years.

Is that the same Tannenbaum that said.... (1)

droopycom (470921) | more than 9 years ago | (#11734408)

... that Linux was an obsolete design ?

If so, I will beware of any bold predictions he makes.

He might be right in theory, I guess... but in practice?

Fastest network card EVAR (4, Funny)

Anonymous Coward | more than 9 years ago | (#11734116)

I was one of the lucky few who beta tested this. The plus side is you can overclock your network card to download faster than the remote server bandwidth. I did not try it, but I would be able to slashdot the slashdot.org website just by browsing it.

Security updates (4, Funny)

KiloByte (825081) | more than 9 years ago | (#11734119)

As we all know damn well, shit happens all the time.

So... how exactly are they going to ship patches in the case of a security issue?

Re:Security updates (3, Informative)

TheRagingTowel (724266) | more than 9 years ago | (#11734347)

Flash memory. It's done all the time.

the good, the bad, the ugly? (1, Interesting)

Interfacer (560564) | more than 9 years ago | (#11734124)

It seems such an obvious thing: make a tcp/ip processor, put it on a NIC and give it a high level interface, instead of just a low level IP interface.

makes you wonder why nobody has done it before...

maybe this is some plan of Intel's to control the internet: add some secret DRM capability to it, wait until everyone is using it, and then take over the world.
Or - door number 2 - sell your services to the NSA.

Re:the good, the bad, the ugly? (0)

Anonymous Coward | more than 9 years ago | (#11734170)

It has been done before.

Re:the good, the bad, the ugly? (2, Interesting)

DietCoke (139072) | more than 9 years ago | (#11734206)

The problem is that you're still dealing with a bottleneck at the system bus, AFAIK. I installed a CAT-6 network at home today and had to do quite a bit of reading to determine whether it was worth doing. I read in numerous places that with a gigabit network you essentially need a 1 GHz processor just to keep up with the data coming in. Now, placing that processor on the NIC might make sense, but it would seem to me that it'd still have to be at least equal to the main processor to be able to handle the data in a steady stream.

I can't claim to be an expert in this subject, but that's the situation as I've understood it.

Re:the good, the bad, the ugly? (3, Insightful)

pc486 (86611) | more than 9 years ago | (#11734223)

I can't believe the parent got modded up. This kind of thing has been done before (RTFA. Yeah yeah, I know. I must be new here...). It's called TOE (TCP Offload Engine) and many networking companies have done TOE. However, most cards are expensive and don't have much support across platforms.

What's new here is that Intel wants to put this in their chipsets everywhere and not just in $700+ NICs. Already this has been happening with checksum offloading, TCP fragmentation, smart interrupts, and so on in most GigE chips.

So yes, people have done this before and have been since at least 2000.

As far as DRM is concerned, look at the NIC market and look at the TCP/IP spec. TCP/IP? Standard, and anything non-standard won't work with stuff that's out there. Weird NICs? I've been getting Linux source-code drivers for even the cheapest of cheap NICs for years now. There's too much competition to sneak in something restrictive.

Re:the good, the bad, the ugly? (1)

rf0 (159958) | more than 9 years ago | (#11734375)

The more things are abstracted from the user, the less we know about what is going on. Of course I'm being totally paranoid, but it does open the way to easier Ethernet tapping, I suppose.

Rus

Re:the good, the bad, the ugly? (2, Interesting)

igb (28052) | more than 9 years ago | (#11734385)

It's been done many times before. A company called CMC made a 3U VME board which provided full TCP offload to System V machines --- I ported it into an SVR3 system and ported Lachman's NFS product to run over it. Sun shipped an Omniserve (or somesuch name) product as the NC400 and NC600 for the 4/4X0 and 4/6X0 range which offloaded quite a lot of NFS and XDR protocol overhead, as well as some of TCP. Neither of these products was unique.

Less generically, the original Auspex NFS servers had distinct boards for Ethernet, Network and File processing, which managed to do TCP offload _and_ zero copy.

With the exception of the Auspex example, most of these cards were rapidly obsolete because the overhead of copying the network traffic to and from the offload card is greater than the work involved in doing the processing. You can't do a zero-copy without a huge amount of scaffolding in the OS.

Anyway, 3Com had a card which did this a couple of years ago. It sank without trace.

ian

Ethernet controllers (3, Interesting)

Anonymous Coward | more than 9 years ago | (#11734127)

What is needed more is a high-speed bus for network interfaces, as gigabit ethernet becomes more common. Even if a gigabit adapter had a whole 32-bit PCI bus to itself, it could still easily saturate it.

It seems like most common denominator board manufacturers have put off 64-bit PCI support for too long. It's going to bite them in the ass if it doesn't become standard very soon.

Re:Ethernet controllers (1)

kernelistic (160323) | more than 9 years ago | (#11734163)

This has already come to pass in the server space. Select servers (usually mid-level and higher) from the likes of Dell have had 64-bit PCI slots in them for at least 4 years now.

It is becoming more common to see onboard Ethernet controllers in user systems, as it frees up a PCI slot. There isn't any reason (cost aside) why these controllers could not be interfaced to existing 133 MHz PCI-X bridges.

Remember that a 64-bit bus alone does not give you extra throughput. Transferring data at higher clock rates (and on edge as well as level) will. There are even 64-bit/33 MHz slots around, and they offer very little advantage over 32-bit/33 MHz ones...

Re:Ethernet controllers (5, Insightful)

afidel (530433) | more than 9 years ago | (#11734167)

No, a gigabit adapter can't saturate a PCI bus by itself: 32-bit 33 MHz PCI is 133 MB/s, and gigabit is 100 MB/s. Then there is 32-bit 66 MHz PCI, and if you wanted you could run a 32-bit card at 133 MHz, as the standard supports it (though I've never heard of such a card; if you need 133 MHz you generally also need 64-bit, but I assume an ADC could use the faster speed without needing the wider word size). The fastest current implementation of the local slot bus is 16-lane PCI Express, which could handle 4 10-gigabit adapters. The problem would be coming up with enough data to keep those pipes full: no disk subsystem is fast enough, and any meaningful SQL transactions are going to be CPU limited on even the biggest of servers, so why would you need a bus with more bandwidth than that? Add to this the fact that servers which actually need more throughput have long had the faster PCI slots, and you realize that it's not a problem in the real world.

Re:Ethernet controllers (2, Informative)

Anonymous Coward | more than 9 years ago | (#11734232)

You got the PCI bandwidth correct, but your gigabit bandwidth is a hair off. Depending on how you define "giga" (base 10 or base 2), you get the following numbers:

a) Gigabit/sec = 1000 Mbit/sec = 125MByte/sec
b) Gigabit/sec = 1024 Mbit/sec = 128MByte/sec

True, even these speeds don't completely saturate the PCI bus, though because of how the PCI bus is shared (each device gets a few clock cycles to do its thing before passing control off to the next device), no single device could anyway unless it's the ONLY thing on the PCI bus. It certainly will saturate the bus (or come dang close to it) when it has its moment of control, though.
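
A quick worked check of those numbers, using the figures quoted in the posts above (a minimal sketch; the PCI figure is the 32-bit/33 MHz theoretical peak, ignoring arbitration overhead):

#include <stdio.h>

/* Back-of-the-envelope comparison of gigabit Ethernet line rate against
 * a 32-bit/33 MHz PCI bus, using the numbers quoted in this thread. */
int main(void)
{
    double pci_mbps    = 33.33e6 * 4 / 1e6;   /* 33.33 MHz x 4 bytes ~= 133 MB/s */
    double gige_base10 = 1000.0 / 8;          /* 1000 Mbit/s = 125 MB/s          */
    double gige_base2  = 1024.0 / 8;          /* 1024 Mbit/s = 128 MB/s          */

    printf("PCI 32-bit/33 MHz peak : %.0f MB/s\n", pci_mbps);
    printf("GigE (base 10)         : %.0f MB/s\n", gige_base10);
    printf("GigE (base 2)          : %.0f MB/s\n", gige_base2);
    return 0;
}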

Re:Ethernet controllers (0)

Anonymous Coward | more than 9 years ago | (#11734300)

And if the majority of the ethernet adapter's bus traffic is data, you can bet that whatever's left will be taken up by instructions.

Re:Ethernet controllers (0)

Anonymous Coward | more than 9 years ago | (#11734306)

My understanding of the PCI bus is limited at best, but don't most net adapters have bus mastering capability? It'll just slap the PCI bus controller in the face when it says "hey you, time's up!"

Re:Ethernet controllers (0)

Anonymous Coward | more than 9 years ago | (#11734364)

They are talking about network interfaces; in the networking world, prefixes are base 10.

Re:Ethernet controllers (0)

Anonymous Coward | more than 9 years ago | (#11734378)


No, a gigabit adapter can't saturate a PCI bus by itself


And what about full duplex mode...?

Re:Ethernet controllers (1)

lachlan76 (770870) | more than 9 years ago | (#11734387)

There is more than one device on the PCI bus...

Re:Ethernet controllers (1)

zackeller (653801) | more than 9 years ago | (#11734172)

I see it being bypassed by PCI-E. Even PCI-E 1x is fast enough for a gigabit interface, and it's already on almost all new motherboards. We'll see how well it does once cards actually come out for it.

Re:Ethernet controllers (0)

Anonymous Coward | more than 9 years ago | (#11734246)

Most PCI-E Motherboards will have Gigabit onboard.

I'm not sure about the quality (speed/reliability) of these built-in interfaces, though.

Re:Ethernet controllers (1)

Jeff DeMaagd (2015) | more than 9 years ago | (#11734262)

Most currently sold chipsets integrate a network interface right into the chipset as its own port, bypassing the PCI bus. The same is done with on-board IDE/ATA/SATA controllers, audio, USB, FireWire and such.

nvidia (5, Interesting)

Ecio (824876) | more than 9 years ago | (#11734130)

Isn't Nvidia doing the same with its new nForce series motherboards? Lowering CPU usage by adding network management code and an SPI firewall inside the chipset?

Re:nvidia (1)

Intocabile (532593) | more than 9 years ago | (#11734149)

I was going to say the same thing.

Re:nvidia (0, Offtopic)

bersl2 (689221) | more than 9 years ago | (#11734160)

From what I've heard, nVidia's implementation is sucking major ass.

Re:nvidia (2, Interesting)

MatthewNewberg (519685) | more than 9 years ago | (#11734166)

I've used both Nvidia and 3Com, and switched back and forth many times (I had both onboard until the board fried). It doesn't seem to affect anything at all (including CPU usage). Then again, I wasn't pushing more than 10 Mbit/s across the network or using a lot of connections.

Interesting (4, Insightful)

miyako (632510) | more than 9 years ago | (#11734132)

This seems interesting, though given Intel's track record I wonder if it will really be as useful as they are speculating, since the article has no real technical information.
Granted, I've never administered a server that was under anywhere remotely near the types of loads we are talking about for this to be useful, but I have a hard time imagining that dealing with the TCP/IP stack would be more intensive than running applications (as the article claims).
So, for all you people out there much more qualified to discuss this than I am: will having some part of the processor dedicated to handling TCP/IP really speed things up, or is this primarily a marketing technology?

Re:Interesting (2, Insightful)

AutumnLeaf (50333) | more than 9 years ago | (#11734228)

I've seen extremely beefy NFS file servers go into a crash-reboot-crash cycle after the first crash because all of the hosts trying to remount the filesystem completely crush the machine before it is fully up to speed. We've had to unplug the network cables on the server to prevent the mount storm from killing the server again.

Note, this is enterprise-grade hardware hooked up to million-dollar disk arrays.

Now, is that entirely from dealing with the networking stack? No. Not quite. However, consider this: it takes time to checksum headers and data. It takes time to unwrap packets. If you have a ton of clients raining requests for data on your server, it's not hard to see that dealing with the networking bookkeeping could impact the throughput of requests. Database servers and web servers are two things that come to mind here in addition to file servers.
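
For a sense of the per-packet arithmetic involved, a minimal sketch of the standard Internet checksum (RFC 1071 style) that a software stack computes over headers and, for TCP/UDP, the payload (byte-order details glossed over):

#include <stddef.h>
#include <stdint.h>

/* One's-complement sum over 16-bit words, as used for IP/TCP/UDP checksums.
 * Every inbound and outbound packet pays this cost when it is done in software. */
uint16_t inet_checksum(const void *data, size_t len)
{
    const uint16_t *p = data;
    uint32_t sum = 0;

    while (len > 1) {
        sum += *p++;
        len -= 2;
    }
    if (len)                              /* odd trailing byte */
        sum += *(const uint8_t *)p;

    while (sum >> 16)                     /* fold carries back into 16 bits */
        sum = (sum & 0xFFFF) + (sum >> 16);

    return (uint16_t)~sum;
}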

By the way, note that this is another part of the "platform" initiative/orientation. While Intel's track record has not been great in many respects, they do have a good track record of success with "platforms"; e.g., Centrino was a "platform."

Re:Interesting (1, Insightful)

Anonymous Coward | more than 9 years ago | (#11734377)

Patch your OS, it should not crash due to high load, ever.

Qlogic TOE cards (5, Informative)

jsimon12 (207119) | more than 9 years ago | (#11734151)

Uh, this isn't new; Qlogic has been doing it for some time now in their TOE (TCP Offload Engine) cards [qlogic.com]. The cards are smoking, especially on Solaris, because Sun's TCP stack is crappy.

Re:Qlogic TOE cards (1, Insightful)

Anonymous Coward | more than 9 years ago | (#11734242)

I'm guessing from sweeping comments such as "Sun's TCP stack is crappy" that you've extensively tested Solaris 10? Nice to know there are people giving expert opinions on cutting-edge software so that people like me don't have to form factually based opinions.

Re:Qlogic TOE cards (0)

Anonymous Coward | more than 9 years ago | (#11734326)

And Linux's TCP stack is made of gold? I guess that's why it gets torn out and replaced every two years.

Re:Qlogic TOE cards (2, Informative)

incubuz1980 (450713) | more than 9 years ago | (#11734357)

The Solaris TCP/IP stack has been greatly improved in Solaris 10. There really is a BIG difference compared to older versions of Solaris.

Re:Qlogic TOE cards (0)

Anonymous Coward | more than 9 years ago | (#11734366)

I wasn't arguing against Solaris. Solaris has had a great TCP/IP stack for years, despite its shortcomings in the past *cough* sequence number generation.

Hunter S. Thompson is dead. (-1, Offtopic)

Anonymous Coward | more than 9 years ago | (#11734156)

I just heard some sad news on talk radio - author Hunter S. Thompson was found dead in his Colorado home this morning. There weren't any more details yet. I'm sure we'll all miss him, even if you weren't a fan of his work there's no denying his contribution to popular culture. Truly an American icon.

Re:Hunter S. Thompson is dead. (-1, Offtopic)

RyuuzakiTetsuya (195424) | more than 9 years ago | (#11734278)

I wonder if he shot himself trying to fight off a swarm of imaginary bats.

Gidget Goes to Heaven: Sandra Dee is dead (-1, Offtopic)

Anonymous Coward | more than 9 years ago | (#11734322)

Sad news keeps on coming. Tammy tell me true. God, I loved her.

Bonnie Raitt's pop passed on too. Broadway star John Raitt just bought the farm.

Nothing to see here (-1, Flamebait)

Anonymous Coward | more than 9 years ago | (#11734164)

This is a complete waste of time. My latency isn't due to processing in my CPU. It's due to cable modem technology and all the switches the shit has to go through before it gets to me. Honestly, I don't see this having any benefit whatsoever except for special machines that deal with an extraordinary amount of network processing. Even then, it's likely to be near worthless, since the server will undoubtedly be far more held up by disk access and other significantly slower operations.

I'll take any speed boosts Intel wants to throw my way but I think their efforts would be better spent elsewhere.

Deja vu? (-1, Redundant)

Anonymous Coward | more than 9 years ago | (#11734173)

Heh, sounds like an ethernet version of a Winmodem.

Re:Deja vu? - EXACTLY! (0)

Anonymous Coward | more than 9 years ago | (#11734253)

Marketese is all.

yeah great (5, Funny)

Anonymous Coward | more than 9 years ago | (#11734186)

soon it will be a dedicated processor and RAM to deal with TCP, then a dedicated processor for keyboard input, then a dedicated processor for the fans, and a special dedicated processor on a 12" PCI-X card for the extremely computationally intensive MOUSE. Actually, this will have its own special dedicated path called 'AMP', or Accelerated Mouse Port. Mice of the future will need much more bandwidth than today - about 16 GB of I/O - so they need their own data paths.

And then there will be other enhancements like the TCP/IP one.

For instance, a special accelerator card for Word and Internet Explorer will be developed.

Furious Linux users will demand their own technology, so one manufacturer will come up with a special card for running GNOME apps. This card will have 4 dual-core 6 GHz processors and allow GNOME to run at normal speeds.

Re:yeah great (1)

burns210 (572621) | more than 9 years ago | (#11734257)

I always liked the idea of having components offloaded to their own cards (the way OS X offloads video to the video card): network offload to the NIC, sound to the sound card, etc. Why not? Given that 100 MHz+ processors are becoming dirt cheap, their ability to take on processing load only makes sense, freeing time for the system CPU to move on to better things.

Re:yeah great (1)

myspys (204685) | more than 9 years ago | (#11734390)

you know the end of this story, don't you?

the amiga, of course!

dead it might be, but it was still a beautiful design!

Will it support IPv6? (4, Interesting)

arc.light (125142) | more than 9 years ago | (#11734193)

The article doesn't say, and I'd hate to be "stuck" with a card that only does IPv4. Yeah, I know, hardly anyone uses IPv6 today, but the nations of China and Japan, as well as the US DoD, are starting to roll out IPv6 networks in a big way.

finally! (-1, Troll)

Anonymous Coward | more than 9 years ago | (#11734198)

I can run that OC192 through my main computer!

overclock :D (0, Redundant)

Thesi (853426) | more than 9 years ago | (#11734212)

can I overclock it?

So, now hackers will target your BIOS rather than (3, Interesting)

ABeowulfCluster (854634) | more than 9 years ago | (#11734240)

targeting the OS. I can see this technology being useful on servers which have multiple network cards and heavy traffic, but not for the average Joe PC user.

FUCK HST DEAD (-1, Offtopic)

Anonymous Coward | more than 9 years ago | (#11734244)

there is TRULY nothing to fucking live for now.

FUCK me lightly with a chainsaw.

FUCK FUCK FUCK FUCK FUCK FUCK

Mi8us 1, troll) (-1, Offtopic)

Anonymous Coward | more than 9 years ago | (#11734247)

So finally! (5, Funny)

Trogre (513942) | more than 9 years ago | (#11734252)

buying Intel really will make the internet go faster!

Open Source Drivers (1)

BarrettVS (801425) | more than 9 years ago | (#11734275)

But will the technical details of this be available for OSS or will it be like OpenBSD's experience with Intel's cryptographic hardware? [openbsd.org]

Old news (4, Informative)

obeythefist (719316) | more than 9 years ago | (#11734281)

Intel has been wanting to do this for years! I remember reading old articles on The Register about it, and how they were pulling back because Microsoft didn't like the idea of Intel taking away things that Microsoft were running with their software, including things like managing networking instead of having the OS do it.

Of course it couldn't last; what with nVidia doing firewalls and NICs and all sorts of other things, Intel is a big company and they know when they need to compete. MS has also lost a bit of their clout when it comes to things like pressuring the bigger companies (Intel, HP, Dell).

cpu? e-net controller? (0)

Anonymous Coward | more than 9 years ago | (#11734298)

which allows the CPU, the mobo chipset and the ethernet controller to help deal with TCP/IP overhead

As opposed to right now, where all that TCP/IP stuff is handled by the floppy drive and the mouse?

If the point isn't obvious now, I'm trying to say the CPU, the motherboard chipset, and the ethernet controller were already intimately involved in the whole network stack thing.

Re:cpu? e-net controller? (2, Funny)

mabinogi (74033) | more than 9 years ago | (#11734327)

didn't you know?

The secret to faster downloads is to keep wiggling the mouse, that way it pushes the data through faster.

Re:cpu? e-net controller? (1)

RyuuzakiTetsuya (195424) | more than 9 years ago | (#11734394)

Troll, but I'll bite.

I said TCP/IP data. Typically, the Ethernet controller, mobo chipset, and CPU don't care what kind of data they're processing, just that they're processing data. Now they'll be sensitive to TCP/IP overhead and have special ways to process it.

Re:cpu? e-net controller? (0)

Anonymous Coward | more than 9 years ago | (#11734396)

And the most that your average ethernet controller does in hardware is what? TCP checksumming? Oh, thanks Mr. Controller, that helps a lot.

if i were to make wildly unsubstantiated guesses... (2, Interesting)

evilmousse (798341) | more than 9 years ago | (#11734336)


I'd guess the TCP/IP stack implementations available to Intel are pretty solid. Still, I'd hope it'd be flashable just in case. I can imagine only once in a blue moon would you find someone with libpcap and the patience to find holes in some of the most trusted code on the net.

Re:if i were to make wildly unsubstantiated guesses (0)

Anonymous Coward | more than 9 years ago | (#11734402)

I know you're probably on to something, but really, I have no idea what you're talking about... TCP/IP stack in flash memory? Huh?

Most people define Acronyms second (0, Redundant)

PickyH3D (680158) | more than 9 years ago | (#11734374)

I/OAT, or I/O Acceleration Technology
Should be
I/O Acceleration Technology, or I/OAT
It only makes sense.

It's like programming with a variable that has yet to be defined.

Re:Most people define Acronyms second (0, Offtopic)

RyuuzakiTetsuya (195424) | more than 9 years ago | (#11734406)

when I was actively doing programming, I got into the bad habit of doing:

int x = 0;
