
FastTCP Commercialized Into An FTP Appliance

Zonk posted more than 6 years ago | from the speedy-delivery dept.

Communications 156

prostoalex writes "FastTCP technology, developed by researchers at Caltech, is being commercialized. A company called FastSoft has introduced a hardware appliance that delivers 15x-20x faster FTP transmissions than those delivered via regular TCP. Says eWeek: 'The algorithm implemented in the Aria appliance senses congestion by continuously measuring the round-trip time for the TCP acknowledgment and then monitoring how that measurement changes from moment to moment.'"


156 comments

After more than 14 years... (1)

Anonymous Coward | more than 6 years ago | (#19708595)

Of research in the field, that is. The problem, still, is getting Microsoft to support it. If that bridge has been crossed already, then it's great.

Mainly, SCTP, XCP, etc. are all great protocols, but if nobody supports them (Microsoft in particular), they're not going to work in the long term.

Re:After more than 14 years... (2, Interesting)

Idbar (1034346) | more than 6 years ago | (#19708771)

They should, in fact, start supporting ECN instead of changing TCP. Implementing ECN together with a well-tuned AQM can reduce packet losses and differentiate losses due to congestion from those due to the medium (now more important with so many wireless networks). Then they can start looking at TCP versions that react to congestion differently from the way they react to packet losses.

But that's just my thought.
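To make that concrete, here is a toy Python sketch of my own (not any vendor's code): a RED-style AQM that marks ECN-capable packets instead of dropping them once the average queue length crosses a threshold, so senders learn about congestion without losing data.

    import random

    MIN_THRESH, MAX_THRESH, MAX_PROB = 5, 15, 0.1   # illustrative settings

    class EcnRedQueue:
        def __init__(self, capacity=30):
            self.queue, self.capacity = [], capacity
            self.avg, self.weight = 0.0, 0.2        # EWMA of queue length

        def enqueue(self, packet):
            self.avg = (1 - self.weight) * self.avg + self.weight * len(self.queue)
            if len(self.queue) >= self.capacity:
                return False                        # queue full: tail drop
            if self.avg >= MAX_THRESH:
                if not packet.get("ecn_capable"):
                    return False                    # severe congestion, legacy flow: drop
                packet["ce"] = True                 # ECN flow: mark Congestion Experienced
            elif self.avg >= MIN_THRESH:
                p = MAX_PROB * (self.avg - MIN_THRESH) / (MAX_THRESH - MIN_THRESH)
                if random.random() < p:
                    if not packet.get("ecn_capable"):
                        return False                # legacy flow: early drop
                    packet["ce"] = True             # ECN flow: early mark
            self.queue.append(packet)
            return True

A sender that sees the CE mark echoed back can slow down exactly as if the packet had been lost, except nothing has to be retransmitted.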

Re:After more than 14 years... (1)

KPU (118762) | more than 6 years ago | (#19710717)

If, for whatever bizarre reason, you have a Windows server able to serve quickly, Fastsoft sells a BSD-based proxy product intended to sit between it and the Internet.

15-20x faster? I call BS (0)

Anonymous Coward | more than 6 years ago | (#19708597)

Saying transfers will go 15-20 times faster makes me think it'll go at 14-19 times line speed; sounds like stupid marketing BS.

Frosty (-1, Troll)

Anonymous Coward | more than 6 years ago | (#19708599)

Piss

Re:Frosty (-1, Troll)

Anonymous Coward | more than 6 years ago | (#19708663)

Drink it, bitch. Oh yeah. You know you want to.

--
You are invited to drink my Frosty Piss [slashdot.org] .

Ok. (0)

Anonymous Coward | more than 6 years ago | (#19708623)

Ya... but what about FastSCP?

Re:Ok. (2, Insightful)

aliquis (678370) | more than 6 years ago | (#19708921)

It was the speed of TCP that would improve, not only FTP, I guess; therefore whatever else uses TCP should get faster as well, shouldn't it?

hmm (3, Interesting)

biscon (942763) | more than 6 years ago | (#19708627)

Wouldn't you need to have FastTCP routers along the whole route in order to reach the speeds mentioned?
And if so, why bother using the old FTP protocol instead of just making a new and more modern protocol?

Re:hmm (4, Informative)

GWLlosa (800011) | more than 6 years ago | (#19708729)

No, basically it's just an optimization of packet rerouting and timing in hardware instead of software. So it's the same 'old' protocol, but with bits of it implemented in chips for speed, specifically the 'hey, should I reroute now, and is it OK to send a packet right now' bits.

Re:hmm (1)

Idbar (1034346) | more than 6 years ago | (#19708793)

In fact, although many routers and switches have layer 4 (transport) capabilities, they are still layer 3 (routing) and layer 2 (data link) equipment, and this should be transparent to them.

Re:hmm (1)

acil (916155) | more than 6 years ago | (#19708821)

Exactly. The only reason most routers even touch the layer 4 header is to look at port numbers for blocking purposes. This is a change in the end-to-end communication: how the end devices detect congestion and limit their bandwidth usage within a single "conversation".

Re:hmm (1)

profplump (309017) | more than 6 years ago | (#19708813)

Most routers -- particularly those on the public Internet -- route only at the IP level and not at the TCP level. As such they would not notice anything different about these packets.

Re:hmm (5, Informative)

stud9920 (236753) | more than 6 years ago | (#19708829)

No. TCP is end to end, the nodes in between could not care less (except for dubious filtering purposes) what layer 4 protocol is piggybacking upon IP proper.

Re:hmm (1)

biscon (942763) | more than 6 years ago | (#19709579)

Yes, you are absolutely correct, which I realized shortly after submitting my comment.

I wish Slashdot had an edit option, but I guess this will teach me to think before I type.

Re:hmm (2)

multipartmixed (163409) | more than 6 years ago | (#19710383)

No.

Al Gore didn't invent the Transmission Control, he invented the Internet.

Hence, data centers today mostly use Internet Protocol routers.

All Hail Al!

Re:hmm (1)

KPU (118762) | more than 6 years ago | (#19710655)

Routers need not be changed to support FastTCP. The IP-level packets are unchanged. It is simply a change in congestion control (i.e. the rate at which packets are sent), which is done at the sender.

FTP was mentioned in the article as an example. Any TCP-based protocol can use the box. All the box does is change the congestion control on packets passing through it.

No Way (3, Insightful)

hardburn (141468) | more than 6 years ago | (#19708661)

Regular TCP can't be more than an order of magnitude away from the Shannon Limit, can it?

Re:No Way (5, Interesting)

maharg (182366) | more than 6 years ago | (#19708835)

The problem is that "regular" TCP misinterprets a long round-trip time (aka latency) as link congestion and backs off the rate at which it is sending packets.

The bandwidth between points A and B may be rated at a high throughput, but protocols running over TCP, such as FTP, will never achieve that speed if the RTT is long. Increasing the bandwidth won't help! So a slowdown of 20-30x is not uncommon on WAN links with high latency, e.g. transcontinental or satellite.

I've looked at technologies like Digital Fountain (and its Java implementation, FileCatalyst) which use UDP and some clever mathematics to overcome latency; however, it's not clear from TFA what FastTCP is doing underneath.

Re:No Way (4, Interesting)

maharg (182366) | more than 6 years ago | (#19708985)

.. although I keep coming back to the sentence "...senses congestion by continuously measuring the round-trip time for the TCP acknowledgment and then monitoring how that measurement changes from moment to moment."

I would imagine that in the typical high-latency scenario, where regular TCP is misinterpreting long RTT as link congestion and backing off the rate, FastTCP is able to keep pushing the rate up while keeping an eye on the RTT. I mean, the RTT shouldn't increase in line with the rate unless the link actually *is* congested. So just increase the rate until the RTT increases, at which point you are genuinely maxing out the link. I think that must be how it is working.
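Something like the following, in rough Python pseudocode (my own guess at the mechanism, not FastSoft's actual algorithm; all numbers are made up):

    # Keep raising the send rate while the measured RTT stays near its
    # baseline; back off when the RTT rises, since a growing RTT means
    # packets are starting to queue at a congested link.
    def adjust_rate(rate, rtt, base_rtt, step=1.2, backoff=0.9, slack=1.1):
        if rtt <= base_rtt * slack:
            return rate * step     # RTT flat: link not congested, push harder
        return rate * backoff      # RTT rising: queues building, ease off

    rate, base_rtt = 10.0, 0.500   # Mbps, seconds (hypothetical satellite link)
    for rtt in [0.502, 0.499, 0.505, 0.510, 0.650, 0.600, 0.520]:
        rate = adjust_rate(rate, rtt, base_rtt)
        print(f"RTT {rtt * 1000:.0f} ms -> rate {rate:.1f} Mbps")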

Re:No Way (1, Informative)

Eivind (15695) | more than 6 years ago | (#19709319)

Uhm, let me guess: your knowledge of TCP is based on Trumpet Winsock for Windows 3.11?

Modern TCP stacks most certainly scale the window, and they certainly don't "misinterpret" high latency as congestion. (They do, however, interpret high packet loss as congestion, which is a reasonable guesstimate most of the time, but it *DOES* break down on links that, for example, have a constant packet loss of a few percent regardless of traffic levels.)

Re:No Way (1)

maharg (182366) | more than 6 years ago | (#19709659)

Funnily enough, no. So why exactly does a high-latency, low-packet-loss network slow down TCP-based protocols so much? Or are you saying that this doesn't happen? I'm genuinely interested in your answer.

Re:No Way (4, Interesting)

Andy Dodd (701) | more than 6 years ago | (#19710351)

It doesn't, unless your TCP implementation is from the stone age.

I love how FastSoft likes to compare themselves to Reno. 4.3BSD "Reno" was released in 1990, and the classic Reno implementation is LONG obsolete (and does indeed suck on a wide variety of connections).

I can see how it would be quite easy to achieve 10-20 times the throughput of Reno on a high-loss or high-latency connection, in fact a stock untuned Linux stack will do so in many situations. (For example, a few months ago I was doing TCP throughput tests dealing with some faulty hardware that liked to drop bursts of packets due to a shitty network driver. A machine running VxWorks 5.4, which is pretty much vanilla Reno, could only send 160 kilobytes/second over a 100Base-T LAN to that machine due to the packet loss making it throttle back. An untuned laptop with Linux 2.6.20 managed 1.7 megabytes/second over the same connection to the same destination.)

High-latency connections were a major problem for TCP prior to RFC 1323, but TCP stack authors have had 15 years to implement RFC 1323.

FastSoft's product may have been big news in the early 1990s, but if a company has to resort to making performance comparisons against the "Reno" TCP implementation, they're selling snake oil, because Reno is such an obsolete and shitty TCP congestion control implementation.

Re:No Way (2, Insightful)

644bd346996 (1012333) | more than 6 years ago | (#19710653)

Windows is still highly sensitive to high latency. Try running a bandwidth test with a nearby server and one across an ocean. You'll notice a much bigger difference with Windows than with a stock modern UNIX, which can still be tweaked quite a bit.

Re:No Way (3, Funny)

Stellian (673475) | more than 6 years ago | (#19708943)

Shannon-shmannon. How dare you!
If you've read TFA you'll know this revolutionary technology not only increases the speed by a factor of 15 to 20, but also ensures "overall client happiness". Amazing!

Porn of course! (0)

Anonymous Coward | more than 6 years ago | (#19709871)

Of course they'll be happy!

They'll be getting their porn 20x faster!

Re:No Way (1)

jollyreaper (513215) | more than 6 years ago | (#19710229)

Shannon-shmannon. How dare you!
If you've read TFA you'll know this revolutionary technology not only increases the speed by a factor of 15 to 20, but also ensures "overall client happiness". Amazing!
That's it, I'm putting another blade in the server... and another aloe strip. Beat that, asshole. :)

Re:No Way (0, Redundant)

jumpingfred (244629) | more than 6 years ago | (#19708975)

TCP does not have anything at all to do with the Shannon Limit. The Shannon limit lets you know how fast it is possible to send the bits with a given error rate over a finite bandwidth channel in the presence of noise. It does not have anything to say about higher level protocols.
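For reference, the Shannon capacity of a channel with bandwidth B and signal-to-noise ratio S/N is (in LaTeX notation):

    C = B \log_2\left(1 + \frac{S}{N}\right)

Nothing about a transport protocol appears in that formula: the Shannon limit bounds what the physical channel can carry, while TCP merely decides when to hand packets to that channel.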

Re:No Way (2, Interesting)

PDAllen (709106) | more than 6 years ago | (#19710659)

Basic TCP simply ramps up the transmission rate linearly until it starts dropping packets (timeout for receiver acknowledgement), then it halves the rate and begins to ramp up again. So that means that if there is a decent amount of capacity (i.e. the receiver can ack the packets in time) then you expect to get at least half the speed the data protocol allows (this too isn't perfect but again it's not too far from Shannon). There are fiddles to deal with low capacity channels, which are pretty standard. There are a few tricks (called 'slow start'!) to get a decent transmission rate quickly from the off (if you have a 10M connection and you use the original TCP protocol you'll spend the first couple of minutes of transmission just getting up to pace). Still not very good if you have a really fast connection for big files (say a few TB of astronomical data) when you spend 20 minutes building the rate up to the channel limit only to see it reset to half.
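In code, the loop described above looks roughly like this (a minimal sketch of my own: slow start followed by additive increase / multiplicative decrease, with an arbitrary capacity):

    # Slow start doubles the congestion window every round until the first
    # loss; after that the window oscillates, +1 per round and halved on loss.
    def simulate(capacity=100, rounds=30):
        cwnd, ssthresh = 1.0, float("inf")
        for r in range(rounds):
            if cwnd > capacity:          # link overloaded: a packet is dropped
                ssthresh = cwnd / 2
                cwnd = ssthresh          # multiplicative decrease: halve the rate
            elif cwnd < ssthresh:
                cwnd *= 2                # slow start: exponential ramp-up
            else:
                cwnd += 1                # congestion avoidance: linear ramp-up
            print(f"round {r:2d}: cwnd = {cwnd:.1f}")

    simulate()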

So, there are problems. One is that if a channel gets overloaded and starts dropping packets then probably there are several TCP links going through it, most of them lose packets, drop to half rate and the link is suddenly at about 60% capacity. Another is that if you have time-critical data (videoconferencing, say) there isn't any way to protect capacity - so your videoconference gets freezes which annoy everyone because some PFY's P2P traffic is filling the channel, even though the PFY couldn't care less if his porn takes five or ten minutes to download.

There are also things you can do - for example there is nothing to stop you fiddling your implementation of TCP to only drop to 90% on a packet loss; do that and you'll get about 40% better upload speed (obviously it'll do nothing to download speed) if you're on a reasonably direct backbone connection (i.e. not a T1 or cable or whatever). But that's antisocial, and if you send enough data for people to notice then you will be very unpopular (you'll be causing far more channel overloads, and everyone else's data rates will take a big hit).

Ultimately, though, even if you cannot in theory do better than get 2x performance over TCP (it's probably a bit less than 2, I'd guess) you're still going to find it cheaper to get 1% more performance out of TCP (which certainly is possible) than to lay another 1% of fibre.

guessing but (0)

Anonymous Coward | more than 6 years ago | (#19708687)

I think it's not altering the pipes/tubes =} ? The pipe stays the same (TCP/IP); it's just a tech to fill the pipe more smartly.

On the sender's end, normal TCP halves the bandwidth once it detects a dropped packet and then tries to go faster and faster until, again, a packet is dropped. Rinse, repeat...

My guess: this new tech is preempting the ceiling and trying to stay below it (the ceiling being where a packet gets dropped somewhere).

Doesn't sound very difficult to do... if it works like that.

FastTCP is just a fancy name for TCP Vegas? (5, Informative)

Anonymous Coward | more than 6 years ago | (#19708701)

FastTCP sounds like a fancy name for TCP Vegas (which has been around for quite some time). Window scaling and Vegas should buy you pretty much everything that FastTCP seems to be offering... Sounds like marketspeak to me.

Re:FastTCP is just a fancy name for TCP Vegas? (3, Informative)

bach (4295) | more than 6 years ago | (#19709257)

These slides http://www.fastsoft.com/downloads/Optimizing_TCP_Protocols.ppt [fastsoft.com] refer to TCP FAST as a faster version of Vegas.

Re:FastTCP is just a fancy name for TCP Vegas? (1)

nanosquid (1074949) | more than 6 years ago | (#19710089)

Note that the company only claims a 15-20x improvement for a customer (who may have been using a really bad TCP stack), not relative to TCP FAST.

Re:FastTCP is just a fancy name for TCP Vegas? (1)

jrumney (197329) | more than 6 years ago | (#19710085)

TCP Vegas sounds like quite a fancy name itself. FastTCP is far more appropriate IMHO, so if it is merely a name change, it is for the better.

Re:FastTCP is just a fancy name for TCP Vegas? (1)

Andy Dodd (701) | more than 6 years ago | (#19710375)

Not really that fancy - the 1990 4.3BSD release was codenamed "Reno" and its TCP stack was widely copied into many other OSes due to its permissive license. Even though I believe that release of BSD as a whole was considered "Reno" (in the same manner as "Feisty Fawn" for Ubuntu or "Zod" for Fedora), in general Reno is now used to refer to its TCP stack and/or TCP implementations that behave the same as Reno's stack. i.e. nearly every OS in the mid-1990s used "TCP Reno".

Reno is so obsolete that you can't even choose it as one of the many pluggable congestion control algorithms in recent Linux kernels.

Re:FastTCP is just a fancy name for TCP Vegas? (1)

beyondkaoru (1008447) | more than 6 years ago | (#19710467)

well considering that 'vegas' is both speed neutral and matches the name of the 'usual' tcp congestion control algorithm (tcp reno), and there seems to be a tradition of naming new tcp ideas after cities (say, tcp westwood), i think 'fast' is more inappropriate. fast tcp is a backronym that, well, might be obsoleted some day should we switch to something faster... which will be awkward to name if we keep calling things with faster adjectives... should we name things 'swift tcp', 'agile tcp', 'hypersonic tcp'? those all sound kind of silly. as technology advances you can easily get into some sort of arms race with naming things based on their quality. fast tcp might be faster, but it is still a relatively inappropriate name, in my opinion.

Re:FastTCP is just a fancy name for TCP Vegas? (5, Funny)

jollyreaper (513215) | more than 6 years ago | (#19710255)

FastTCP sounds like a fancy name for TCP Vegas (which has been around for quite some time). Window scaling and Vegas should buy you pretty much everything that FastTCP seems to be offering... Sounds like marketspeak to me.
You wouldn't want to use TCP Vegas, the packets are unroutable. What happens in TCP Vegas stays in TCP Vegas.

Powered by handwavium (2, Informative)

Rix (54095) | more than 6 years ago | (#19708715)

The same amazing material that makes these [wikipedia.org] so fast!

Re:Powered by handwavium (4, Informative)

bockelboy (824282) | more than 6 years ago | (#19708877)

Actually, FAST TCP is also available as a linux kernel patch. It's a well-tuned Caltech product which has been in development for years:

http://netlab.caltech.edu/FAST/ [caltech.edu]

Several highlights include:
- Caltech held the world record for data transfer for a while
- Won the bandwidth challenge at SC05

It's one of the best ways to tune a single TCP stream. Finally, the list of about 50 TCP-related publications should indicate this isn't handwavium:

http://netlab.caltech.edu/FAST/fastpub.html [caltech.edu]

Traditional TCP streams (such as what you get with FTP) top out around 10-20 Mbps. If you want to see a single stream go a couple hundred Mbps, you need TCP tweaks like FAST (however, FAST is one of many competing TCP "fixes").

Re:Powered by handwavium (1)

rduke15 (721841) | more than 6 years ago | (#19709873)

Traditional TCP streams (such as what you get with FTP) top out around 10-20 Mbps.

I recently observed 50-60 megabytes per second on a Gigabit LAN between a vanilla Linux FTP server and a Windows client, and that was about the hard disk read limit on the server. It didn't look like a "traditional TCP stream" limit at all. It was a 300 MB file filled with random bytes. If I remember correctly, I didn't even enable jumbo frames, because one of the cards couldn't do it.

Re:Powered by handwavium (1)

Andy Dodd (701) | more than 6 years ago | (#19710395)

"vanilla Linux" isn't a traditional TCP implementation for any reasonably recent kernel. In fact, if one assumes by "traditional TCP" the grandparent meant Reno, then that particular implementation is so obsolete you cannot even choose it as a congestion control algorithm for Linux any more. (Linux allows you to choose between 6-10 pluggable congestion control algorithms, the recent defaults of BIC and later CUBIC are both very nice ones.)

Re:Powered by handwavium (1)

superanonman (1116871) | more than 6 years ago | (#19708999)

They're not fast, but they are awesome and I have noticed a better connection when using these to bridge my Xbox Live connection through my comp.

Does it speed up ftp or TCP/IP (1)

the_womble (580291) | more than 6 years ago | (#19708735)

Does this speed up FTP or TCP?

If the latter can it speed up other protocols?

Re:Does it speed up ftp or TCP/IP (0)

Anonymous Coward | more than 6 years ago | (#19709309)

Any protocol implemented on top of TCP can benefit. This means protocols like HTTP and FTP are prime candidates.

Re:Does it speed up ftp or TCP/IP (1)

grub (11606) | more than 6 years ago | (#19709327)


It's supposed to speed up TCP. Ergo, because FTP rides on TCP it should be faster.

Hype (3, Informative)

Zarhan (415465) | more than 6 years ago | (#19708785)

Sounds like they just skip the TCP slow-start algorithm and stuff like that - so it's probably not faster than regular TCP after the window has stabilized. Slow-start and backoff algorithms of course cause slowdowns.

The other possibility is some sort of header compression.

Anyway, to use this safely you'd need to be *sure* you know your link characteristics. The reason TCP has the slow-start mechanism in the first place is to make sure you don't overflow the link - that's why it's known as flow control :)

Re:Hype (2, Informative)

Zarhan (415465) | more than 6 years ago | (#19709127)

Oh, after reading other comments, I guess they really are going for solving the high-bandwidth high-latency link problems. I didn't even consider that to be necessary since I thought that was pretty much solved and as such, "old news".

ftp://ftp.rfc-editor.org/in-notes/rfc3649.txt [rfc-editor.org]

ftp://ftp.rfc-editor.org/in-notes/rfc3742.txt [rfc-editor.org]

I guess this device works as some sort of wrapper so that legacy TCP implementations don't get slowdowns, but it doesn't strike me as anything revolutionary - the RFCs are from 2003.

Nonsense (1)

gweihir (88907) | more than 6 years ago | (#19708817)

Typical FTP connections get 80% or so of the available bandwidth. 15-20x faster is not possible. Maybe 1.2x if you're lucky.

Re:Nonsense (3, Interesting)

MindStalker (22827) | more than 6 years ago | (#19708951)

Only over short distances. For international and especially satellite connections you get less than 10% with normal TCP.

Re:Nonsense (1)

gweihir (88907) | more than 6 years ago | (#19708995)

Which still does not matter, since your last mile is typically a lot slower than the international connection. Anyway, I routinely get far more than 50% of my last mile (1 MBps) with FTP.

Re:Nonsense (1)

maharg (182366) | more than 6 years ago | (#19709169)

.. unless the last mile is vertically down from the satellite (at both ends of the link) ;o)

Re:Nonsense (4, Insightful)

bockelboy (824282) | more than 6 years ago | (#19709011)

Think again. I suspect that you only have tried that on a low-speed link (DSL, Cable, FIOS, etc). Try thinking about 2 orders of magnitude faster.

I transfer about 20 TB / day at work, and that wouldn't be possible with a "typical FTP connection".

If you read the papers coming out of Caltech, you'd see they were optimizing for 10 Gbps lines, not residential lines. 15-20x faster is a very fair estimate; look at Caltech's presentations at SC05 or SC07.

Re:Nonsense (2, Insightful)

Anonymous Coward | more than 6 years ago | (#19709571)

I dunno; at a previous company I built an FTP gizmo based on Windows 2000 which could saturate a 1000-mile-long OC3 line. That's 1.4 TB a day.

To transfer 20 TB/day, you need something like 1.8 Gbps sustained, not my measly 155 Mbps, but that's only (only!) an order of magnitude better. TCP has shown itself quite comfortable scaling up from 300 baud modems to GigE links (6+ orders of magnitude), so what's one more among friends? This is not to say TCP can't be improved: I've always thought using dropped packets to measure congestion was a bit hokey, but it seems to work fine. If the fine researchers at Caltech think they can do better by measuring RTT, that sounds just great.

Re:Nonsense (3, Interesting)

bockelboy (824282) | more than 6 years ago | (#19710195)

It works fine, but we actually tend to lean toward many streams as opposed to uber-fast single streams.

Truthfully, you have to tweak the system pretty hard to get decent performance over a single stream (for us, 155 Mbps isn't sufficient - I work on an LHC project), especially from Nebraska to Switzerland (CERN). FAST TCP helps out a whole lot. GridFTP is the other piece of the equation - it is basically FTP with multiple data streams.

We tend to lean on hundreds of streams a whole lot more than tweaking TCP settings, and the Caltech guys give us heck for that. They're right, however - if you're getting 100s of KBps per stream to some European site, it just takes a ridiculous number of streams to get up to 100 MBps. Right now, the storage systems are behind the network, so we haven't even been able to start playing with FAST TCP yet.

http://cmsdoc.cern.ch/cms/aprom/phedex/prod/Activity::RatePlots?graph=quantity&entity=dest&src_filter=&dest_filter=Nebraska&no_mss=true&period=l14d&upto=&.submit=Update [cmsdoc.cern.ch]

Re:Nonsense (1)

piggydoggy (804252) | more than 6 years ago | (#19709705)

I transfer about 20 TB / day at work

Does your employer know?

Re:Nonsense (1)

bockelboy (824282) | more than 6 years ago | (#19710139)

Yup,

http://cmsdoc.cern.ch/cms/aprom/phedex/prod/Activity::RatePlots?graph=quantity&entity=dest&src_filter=&dest_filter=Nebraska&no_mss=true&period=l14d&upto=&.submit=Update [cmsdoc.cern.ch]

In fact, we often talk with the Caltech folk about deploying FAST TCP; the problem is that both ends need to deploy the kernel patches. Truthfully, the limiting factor becomes the disk systems, not the network. When we start to push closer to 10 Gbps instead of 4-6 Gbps, we'll need to make smarter decisions about the TCP stacks.

Re:Nonsense (0)

Anonymous Coward | more than 6 years ago | (#19710411)

So, will any of this give me transfer speeds greater than 3KB/s on my 28.8Kbps dial-up connection?

I highly doubt it. The modulation of digital data for transmission as analog signals over phone lines is the rate limiting step for much of the world.

I will never see coax or fiber on my road, nor will I ever see DSL. It's just not profitable for anyone to do it; it will never happen.

What we need is a reinvention of the analog modem that achieves superior speeds over phone lines within the voltage limits set out by the FCC, or get the FCC to review those limits and increase them if possible.

And don't say satellite or wireless... I want a reliable connection.

New and alternative protocols mean nothing to me. Tell me about a new take on the analog modem and I might get excited.

To gain that much speed... (1)

Aladrin (926209) | more than 6 years ago | (#19708833)

To gain that much speed, your network must be really fscked up. I can max out my 7 Mbps line on any FTP server that has the bandwidth available. I've heard of people with lines much, much bigger than mine who max theirs out regularly, too. I'm not talking about short hops, either... I mean international.

The only way I could see this as being possible is if there is so much latency that it basically makes the TCP protocol think every packet is lost, and resends them... 20 times. If you are seriously on a network that is that messed up, you need to just find another network. Some silly little piece of hardware is -not- going to solve your problems.

If they had said 1.5x to 2.0x... I could believe them. It's not that hard to find network conditions that slow things that much. But 15x to 20x? No way.

Re:To gain that much speed... (1)

maharg (182366) | more than 6 years ago | (#19708905)

Not so. High (>1500 ms) latency *severely* affects protocols running over TCP, like FTP. I encounter this every day on transcontinental WANs which go over satellite. I've tried UDP-based tools such as FileCatalyst, and a 15-20x speedup is possible on some links.

Re:To gain that much speed... (1)

bockelboy (824282) | more than 6 years ago | (#19708977)

Hi -

You're thinking way too small. FAST TCP was designed with 10 Gbps links in mind - i.e., Internet2-type applications. FAST TCP streams are able to achieve several hundred Mbps. FTP streams over TCP Reno usually max out at something relatively pathetic, like 10-20 Mbps.

Caltech's SC07 presentation showed commodity servers which could transfer 2 Gbps end-to-end using their FDT tool (which is actually Java-based). The servers had 4 HDDs, dual Gigabit Ethernet connections, and ran a Linux 2.6 kernel with the FAST TCP patches.

On the other hand, GridFTP takes a different approach - parallelizing several TCP streams at a time. Why get a single stream going 10x faster using a special Linux kernel when you can just send 10 parallel TCP streams at once? (While GridFTP over TCP with lots of streams is more popular than FAST TCP with a small number of streams, imagine what happens to your RAID array when there is a large (>100) number of streams reading off different parts of the same disk... see the sketch below.)
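The multi-stream idea itself is simple. A rough Python sketch (my own illustration, with plain HTTP range requests standing in for GridFTP; the URL and sizes are hypothetical):

    import concurrent.futures
    import urllib.request

    URL = "http://example.org/big-dataset.bin"   # hypothetical server
    CHUNK = 64 * 1024 * 1024                     # 64 MB per stream

    def fetch_range(start, end):
        # Each range travels over its own TCP connection, so one slow
        # stream no longer caps overall throughput.
        req = urllib.request.Request(URL, headers={"Range": f"bytes={start}-{end}"})
        with urllib.request.urlopen(req) as resp:
            return start, resp.read()

    def parallel_download(total_size, streams=10):
        ranges = [(off, min(off + CHUNK, total_size) - 1)
                  for off in range(0, total_size, CHUNK)]
        data = bytearray(total_size)
        with concurrent.futures.ThreadPoolExecutor(max_workers=streams) as pool:
            for start, chunk in pool.map(lambda r: fetch_range(*r), ranges):
                data[start:start + len(chunk)] = chunk
        return bytes(data)

The trade-off is exactly the one noted above: every extra stream is another more-or-less random read pattern hitting the disks on both ends.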

Re:To gain that much speed... (2, Informative)

Aladrin (926209) | more than 6 years ago | (#19709067)

RTFA.

"The Aria 2000, which is due in July, supports 1G-bps links. Existing Aria appliances support 10M-bps links, 50M-bps links and 200M-bps links."

10gbps my ass. The one they haven't released only does a tenth of that. And the smallest of their products barely handles my home cable line.

For what it's worth, my initial thought was that they must be targeting truly massive lines, and that it would be a lot harder to actually saturate those. Too bad it wasn't true.

Re:To gain that much speed... (1)

xotakuotaconx (1085759) | more than 6 years ago | (#19709017)

I believe the article states 15-20%, not 15x-20x. That is a big difference.

Re:To gain that much speed... (1)

Aladrin (926209) | more than 6 years ago | (#19709037)

You should probably bother to RTFA then.

"Beta testers at The Post Group, a film post-production facility in Hollywood, Calif., found that the Aria delivered 15 to 20 times faster transmissions "and better overall client happiness," said CIO Darin Harris."

Re:To gain that much speed... (1)

g0dsp33d (849253) | more than 6 years ago | (#19709237)

Meh, what's a few powers of ten? You're like those buggy folk at NASA who care what measurement system people are using.

For single connection use of big pipes (1)

Animats (122034) | more than 6 years ago | (#19708881)

This is for single-connection use of wide-bandwidth channels with long latency. If you're synchronizing two servers across a considerable distance and have more than 1Gb/s or so available, it might be useful. For anything less, don't bother.

For local connections, you don't have many packets in flight, so you don't need this. For slower connections, you don't have the bandwidth to keep that many packets in flight, so it doesn't help there either. It's not going to help your web browsing.

HOW much speedup? (1, Insightful)

Have Blue (616) | more than 6 years ago | (#19708887)

An FTP session running over a 100Mbit LAN should see about 10MB/sec real data transfer, maxing out the line and accounting for overhead. They're claiming that their gadgets could move a file between each other at 150 megabytes per second over the same cable?

As the saying goes, this requires some very extraordinary evidence. Or there are a lot of missing qualifiers like "over a specific worst-case line that TCP doesn't come close to theoretical maximum performance on".

Re:HOW much speedup? (0)

Anonymous Coward | more than 6 years ago | (#19708941)

Perhaps they have too much of Steve Jobs' RDF generator. And they are now claiming the same "revolution" on TCP. So... the new hype is:

yeah... but does the iPhone support it?

Re:HOW much speedup? (1)

Dacelo Gigas (1077179) | more than 6 years ago | (#19709103)

An FTP session running over a 100Mbit LAN should see about 10MB/sec real data transfer, maxing out the line and accounting for overhead. They're claiming that their gadgets could move a file between each other at 150 megabytes per second over the same cable? As the saying goes, this requires some very extraordinary evidence. Or there are a lot of missing qualifiers like "over a specific worst-case line that TCP doesn't come close to theoretical maximum performance on".

Note the last line of the story for the real reason :

The Aria 2000, which is due in July, supports 1G-bps links. Existing Aria appliances support 10M-bps links, 50M-bps links and 200M-bps links.

Their previous products supported max link speeds from 50 Mbps to 200 Mbps, but the new one supports 1000 Mbps. 20 * 50 = 1000. If you upgraded from their lowest-end product to their new one, you may well achieve a 20x improvement. It seems this is just a product upgrade with over-eager marketing claims. Imagine that.

Dacelo Gigas

Re:HOW much speedup? (1)

Wesley Felter (138342) | more than 6 years ago | (#19709171)

Or there are a lot of missing qualifiers like "over a specific worst-case line that TCP doesn't come close to theoretical maximum performance on".

Yes, this is what FAST TCP [caltech.edu] is designed for.

Re:HOW much speedup? (1)

TubeSteak (669689) | more than 6 years ago | (#19709181)

An FTP session running over a 100Mbit LAN
Oh, you didn't RTFA.
No wonder you're confused.

The *first* sentence of TFA tells you everything you need to know:
This application is for WANs

http://en.wikipedia.org/wiki/Wide_area_network [wikipedia.org]
"The largest and most well-known example of a WAN is the Internet."

Now don't you wish you had skimmed TFA?
I expect better from a 3-digit UID

Re:HOW much speedup? (1)

DRJlaw (946416) | more than 6 years ago | (#19709335)

The Aria is designed primarily to optimize large file transmissions "over long distances through large pipes," Henderson said. The Aria 2000, which is due in July, supports 1G-bps links. Existing Aria appliances support 10M-bps links, 50M-bps links and 200M-bps links.

An FTP session running over a 100Mbit LAN should see about 10MB/sec real data transfer, maxing out the line and accounting for overhead. They're claiming that their gadgets could move a file between each other at 150 megabytes per second over the same cable?

No. They're claiming that their gadgets could move a file over a long distance 200Mbps link at 10-15x faster than FTP. You're claiming that you could get 10MB/sec over the same link. It's your claim that requires evidence.

FastTCP == 4 LoCs per hour (4, Funny)

maharg (182366) | more than 6 years ago | (#19708933)

Yes, you read that right - 4 Libraries of Congress per hour!!!!

See http://www.fastsoft.com/research.html [fastsoft.com]

Re:FastTCP == 4 LoCs per hour (1)

jollyreaper (513215) | more than 6 years ago | (#19710275)

Yes, you read that right - 4 Libraries of Congress per hour!!!!
And once the talibaptists and theocons finish removing the objectionable material, 50 Libraries of Congress per hour!

So where's the SlowTCP? (1)

bhmit1 (2270) | more than 6 years ago | (#19709069)

If FastTCP is great for speeding things up over high latency links, what is there for slowing down connections? Particularly when you only want to take the remaining bandwidth and not impact users. I've seen various products that do this, but they never describe how it's done. Is it sufficient to slow down the connection when you see latency increase, or are there better algorithms?

Re:So where's the SlowTCP? (2, Informative)

Andy Dodd (701) | more than 6 years ago | (#19710457)

QoS - typically implemented not in the TCP stack but in intermediary routers that prioritize packets (important stuff goes out first and is less likely to be dropped if the connection is saturated; "bulk" data like BitTorrent goes out only if the send queue is otherwise empty at the router's WAN connection and is the most likely to get dropped if a queue fills up), and that in some cases artificially throttle the connection by dropping packets if the sender transmits beyond a set limit.

If you're looking for QoS in a home environment, the easiest solution is likely to replace your router with one capable of running DD-WRT. (This assumes you have a "consumer grade" router. If your gateway to the outside world is a normal PC running Linux, it's just a matter of setting up QoS on that box... Quite a few HOWTOs exist for this.)
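The core of the prioritization idea fits in a few lines; a toy Python sketch (my own illustration, with made-up traffic classes):

    import heapq

    PRIO = {"voip": 0, "web": 1, "bulk": 2}   # lower number = higher priority

    class PriorityShaper:
        # Strict-priority scheduler: interactive traffic always leaves before
        # bulk, and bulk is the first thing dropped when the queue fills up.
        def __init__(self, max_queued=100):
            self.heap, self.seq = [], 0       # seq keeps FIFO order per class
            self.max_queued = max_queued

        def enqueue(self, packet, traffic_class):
            if len(self.heap) >= self.max_queued and traffic_class == "bulk":
                return False                  # queue full: drop bulk first
            heapq.heappush(self.heap, (PRIO[traffic_class], self.seq, packet))
            self.seq += 1
            return True

        def dequeue(self):                    # called once per transmit slot
            return heapq.heappop(self.heap)[2] if self.heap else None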

impossible to know if real from site (3, Insightful)

sentientbrendan (316150) | more than 6 years ago | (#19709101)

It's true that early implementations of TCP were very naive. Over time this has been fixed, but a number of problems remain, especially to do with packet loss on WiFi networks (which it sounds like this may address).

The primary problem with WiFi networks is that they naturally have a lot more packet loss than wired links. On other links, heavy packet loss tends to indicate congestion, so TCP decreases throughput to try to relieve it. On WiFi, that's of course unnecessary and won't solve the underlying problem.

The article is missing some important technical details and there's a little too much marketing speak, but it does clearly sound like an improved TCP implementation, probably paired with some kind of traffic-shaping hardware on one end (so that they don't have to change the networking stack on Linux and Windows, patch all their machines, etc.).

There were a couple of other posters that suggested that such a thing wouldn't work. One guy even suggested that it would require different routers end to end! This is of course nonsense.

1. TCP != IP. Routers don't have to know anything about TCP to work (although they generally do for NAT, ACL, and traffic shaping purposes).
2. TCP implementations have been changed a number of times in the past. Changing the implementation is not the same as changing the protocol. Nothing else on the network cares what TCP implementation you are using as long as you speak the same protocol.

Re:impossible to know if real from site (1)

KPU (118762) | more than 6 years ago | (#19710611)

FastTCP uses latency-based congestion control. The theory is that by comparing current and minimum round-trip time, one can deduce the router queue sizes and control congestion based solely on round trip time. Since loss is not a signal, FastTCP performs far better than BIC on high-loss networks.
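The window update published in the Caltech FAST papers has roughly this shape (a sketch with illustrative parameter values; alpha is the target number of the flow's packets queued in the network):

    # The only congestion signal is RTT: base_rtt approximates propagation
    # delay, so rtt - base_rtt is queueing delay, and the window converges
    # to where about `alpha` of this flow's packets sit in router queues.
    def fast_update(w, rtt, base_rtt, alpha=200, gamma=0.5):
        target = (base_rtt / rtt) * w + alpha
        return min(2 * w, (1 - gamma) * w + gamma * target)

    w, base_rtt = 10.0, 0.100
    for rtt in [0.100, 0.100, 0.105, 0.120, 0.150, 0.150]:
        w = fast_update(w, rtt, base_rtt)
        print(f"RTT {rtt * 1000:.0f} ms -> cwnd {w:.0f}")

A lost packet never enters the update, which is why a few percent of random wireless-style loss doesn't collapse the rate the way it does for loss-based algorithms.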

Why Not UDP (2, Funny)

maz2331 (1104901) | more than 6 years ago | (#19709129)

Maybe they should just use good old UDP instead and implement a tweak to the FTP protocol to handle retransmit and error checking. The 'Net doesn't drop very many packets anymore, and UDP can work just fine.

Those who fail to understand TCP.. (2, Informative)

Junta (36770) | more than 6 years ago | (#19709311)

Are doomed to reinvent it, poorly, to paraphrase a well-known saying. I have to roll my eyes every time I see someone recommend UDP in a circumstance where the application will not tolerate data loss. In gaming and media streaming, UDP can make sense: the receiver can gloss over the details and do something reasonable, interpolating the missing data to the extent possible, or simply showing a corrupted block, or having someone skip a little in an online game. The only places where I see UDP implemented in a context where packet loss without retry is tolerated are traditionally embedded applications (i.e. service processors with IPMI, and TFTP in Ethernet boot ROMs). Both of these protocols can start behaving very badly if packets are lost or if used over a high-latency link.

TCP is the most researched, most tweaked, and most examined reliable transport protocol, and trying to reinvent your own on top of an unreliable protocol is asking for trouble.

Re:Those who fail to understand TCP.. (1)

ShakaUVM (157947) | more than 6 years ago | (#19709923)

If you know what you're doing and know your application, you can build a better UDP based transport layer than what you get with TCP.

Re:Those who fail to understand TCP.. (1)

PDAllen (709106) | more than 6 years ago | (#19710735)

Yes - but essentially this is because TCP includes a bunch of be-nice-to-everyone stuff. It doesn't try just to optimise your personal connection, it tries to send your data in a way that will not screw over everyone else who uses the same channel.

If you just want to transfer your data as fast as possible, change a couple of parameters in your TCP implementation so it ramps up faster and drops to maybe 95% instead of all the way to 50% when it gets packet loss. That'll work about as well as completely doing your own thing with UDP. But, you will have no friends because all the channels you send data over will end up carrying your data and no-one else's (you overload the channel, it drops many packets, your rate drops to 95% but everyone else's drops to 50%, then you ramp up quicker than they do and grab most of the bandwidth they used to have, rinse and repeat).

Re:Those who fail to understand TCP.. (1)

petermgreen (876956) | more than 6 years ago | (#19709933)

TCP is a generalist. This is often good: it means it's well tested. But it can also mean it's not well suited to a specific case. It is also implemented in the network stack. Again, this can be a good thing (only one copy of the code), but it can also be a bad thing, because it means you need to modify the OS if you want to tweak any part of it.

One interesting possibility would be to do TCP over UDP. This would allow you to use the latest TCP tweaks without the need to modify the OS.
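For a flavour of what that means in practice, here is a minimal stop-and-wait reliability layer over UDP in Python (a toy of my own: real designs keep many packets in flight; the address is hypothetical and a matching receiver is assumed):

    import socket
    import struct

    def send_reliable(data, addr=("127.0.0.1", 9999), timeout=0.5):
        # Userspace reliability on top of UDP: number each chunk and resend
        # it until the receiver echoes the sequence number back as an ACK.
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.settimeout(timeout)
        for seq, off in enumerate(range(0, len(data), 1024)):
            packet = struct.pack("!I", seq) + data[off:off + 1024]
            while True:
                sock.sendto(packet, addr)
                try:
                    ack, _ = sock.recvfrom(4)
                    if struct.unpack("!I", ack)[0] == seq:
                        break              # acknowledged: move to next chunk
                except socket.timeout:
                    pass                   # packet or ACK lost: retransmit
        sock.close()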

TCP is underestimated... (2, Informative)

Junta (36770) | more than 6 years ago | (#19710183)

I've seen arguments where people say 'we don't need to worry about aspect X that TCP takes care of' and then ultimately get bitten. IPMI, to me, is a good example. It has the notion of retries (more of an afterthought), and has sequence numbers above and beyond what UDP offers. The problem is that a retry of most packets increments that sequence number, so a retry is indistinguishable from a reissuance of the same command. For some commands, this can be very undesirable.

When something with as much high-profile support as IPMI ends up with such shortcomings, it goes to show that people easily fail to understand why this aspect or that of TCP is not applicable to their use.

As to TCP over UDP, that's an example of a very bad-sounding idea. Redundant features of TCP and UDP. It's not as bad as TCP over IP over PPP over SSH, which is itself over TCP (multiple reliable protocols on top of each other), but still, if you wanted to be a better TCP than TCP, the place to implement it would be at the same layer, on top of the IP protocol, not on top of UDP.

Re:TCP is underestimated... (1)

petermgreen (876956) | more than 6 years ago | (#19710745)

As to TCP over UDP, that's an example of a very bad-sounding idea.
I disagree.

Redundant features of TCP and UDP.
All UDP provides is checksums and application multiplexing. If you really wanted, you could tweak the version of TCP you were adapting to run over UDP to remove those features, but even if you don't, they are very low overhead.

It's not as bad as TCP over IP over PPP over SSH, which is itself over TCP (multiple reliable protocols on top of each other),
Yes, multiple reliable protocols over each other are generally bad unless they are carefully designed to play nice and there is a good reason for doing it (such as one bad link in the middle of a long chain), but I don't see how that is relevant here.

but still, if you wanted to be a better TCP than TCP, the place to implement it would be at the same layer, on top of the IP protocol, not on top of UDP.
To implement a protocol directly over IP, AFAICT, requires you either to use raw sockets or to implement it as part of the OS. The former raises all sorts of privilege/security concerns, and the latter requires you to convince all your users to switch/upgrade their OS. The hard truth is that for most applications the choice comes down to either taking the TCP implementation the OS gives you, which may or may not support the latest TCP improvements, or building over UDP.

FTP obsolete ? (1)

bheading (467684) | more than 6 years ago | (#19709739)

Do any other Slashdotters feel, like me, that this device is a bit of a damp squib, given that FTP is somewhat obsolete? HTTP provides upload as well as download capabilities, and in any tests I've done I get the same download speed as with FTP. And since it doesn't have a stupid protocol, I can easily tunnel it as required.

Re:FTP obsolete ? (1)

Idbar (1034346) | more than 6 years ago | (#19710149)

FTP is used as a benchmark for long-lived connections, or connections that transmit lots of data. It's not about whether FTP is or isn't obsolete; the same applies to SCP or downloads over HTTP (like the ISO files many people download).

So, no, it's not about FTP as such, but about the transmission of lots of data during a single connection.

Congestion Control (4, Informative)

pc486 (86611) | more than 6 years ago | (#19709859)

FastTCP isn't really a full TCP replacement but rather a congestion control algorithm. There are many competitors to FastTCP, including BIC/CUBIC (common Linux default), High-Speed TCP, H-TCP, Hybla, and many others. Microsoft calls their version Compound TCP (available in Vista).

If you use Linux, have (CU)BIC loaded, correctly set up your NIC, and tune your TCP settings (rx/tx mem, queue length, and such), then there is no way for FastSoft to claim a 15-20x speedup. I've done full 10-gigabit transmissions with a 150 ms RTT using that kind of setup. FastSoft's device doesn't even support 10 gigabit, and their 1-gigabit device still isn't released.
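For reference, "tuning your TCP settings" on Linux usually means something along these lines (an illustrative /etc/sysctl.conf excerpt for a high bandwidth-delay-product path, not a recommendation):

    # Pluggable congestion control (BIC/CUBIC, as mentioned above)
    net.ipv4.tcp_congestion_control = cubic

    # Raise the socket buffer ceilings so the window can cover the
    # path's bandwidth-delay product
    net.core.rmem_max = 67108864
    net.core.wmem_max = 67108864

    # min / default / max buffer sizes TCP auto-tunes between
    net.ipv4.tcp_rmem = 4096 87380 67108864
    net.ipv4.tcp_wmem = 4096 65536 67108864

plus a longer transmit queue on the NIC (e.g. ip link set eth0 txqueuelen 10000).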

This article is nothing other than a Slashvertisement.

15x-20x faster? I smell some bullshit (1)

timmarhy (659436) | more than 6 years ago | (#19710545)

20x faster, huh? So they could make my modem go at broadband speeds? I think I've seen claims like that in annoying flashing ads on the web. Fuck them.

patent pending (0)

Anonymous Coward | more than 6 years ago | (#19710585)

So basically, after Tahoe, Reno and Vegas, FastTCP is a next-gen congestion algorithm. The benefits of FastTCP can be observed at 1 Gbps and up.

All of this is good and well, but will Linux support it? Not sure, because it's patent-pending technology.

http://www.freepatentsonline.com/20070121511.html [freepatentsonline.com]