
Guaranteed Transmission Protocols For Windows?

timothy posted more than 5 years ago | from the no-charge-for-autocompression dept.

Networking 536

Michael writes "Part of our business at my work involves transferring mission-critical files across a 2 Mbit/s microwave connection, into a government-run telecommunications center with a very dodgy internal network, and finally to our own server inside the center. The computers at both ends run Windows. What sort of protocols or tools are available to me that will guarantee to get the data transferred across better than a straight Windows file-system copy? Since before I started working here, they've been using FTP to upload the files, but many times the copied files are a few kilobytes smaller than the originals."


UDP. (5, Funny)

langelgjm (860756) | more than 5 years ago | (#28530135)

Clearly you're looking for UDP. Next question.

anonymous coward (0)

Anonymous Coward | more than 5 years ago | (#28530145)

create checksums? .. md5/sfv for e.g.
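For reference, the checksum step is a few lines in most languages; here is a sketch in Python using only the standard library (file paths are up to you):

```python
import hashlib

def file_md5(path, chunk_size=1 << 20):
    """Compute the MD5 digest of a file, reading in chunks so
    large files never have to fit in memory."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()
```

Record the digest on the sending side, recompute it on the receiving side, and retransfer on a mismatch.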

Re:anonymous coward (1)

cayenne8 (626475) | more than 5 years ago | (#28530319)

Why not scp?

I've used WinSCP before, and it seemed to be a good product.

Re:anonymous coward (1)

sopssa (1498795) | more than 5 years ago | (#28530785)

The problem with FTP/SFTP/SCP is that the connection can break, and there are other transfer errors (there always will be); as he says, FTP has a tendency to break files. You can solve this with checksums and retransferring, but that's probably not the most efficient solution. Something along the lines of the torrent protocol would actually be perfect: it has good per-piece checksum tests, and because the file is partitioned you only need to retransfer one piece instead of the whole file if it's bad. Torrent is a bit of a pain in the ass, since you need to create and transfer the .torrent file, but that could be automated, and maybe there are other, more suitable protocols as well.
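The piecewise idea described here can be sketched in a few lines (a toy sketch, not the real BitTorrent wire format; the piece size is made tiny for illustration):

```python
import hashlib

PIECE_SIZE = 4  # tiny for illustration; BitTorrent uses pieces of 256 KiB and up

def piece_hashes(data, piece_size=PIECE_SIZE):
    """Hash fixed-size pieces of the payload, torrent-style (SHA-1 per piece)."""
    return [hashlib.sha1(data[i:i + piece_size]).hexdigest()
            for i in range(0, len(data), piece_size)]

def bad_pieces(received, expected_hashes, piece_size=PIECE_SIZE):
    """Return the indices of pieces whose hash does not match --
    the only pieces that need to be retransmitted."""
    got = piece_hashes(received, piece_size)
    return [i for i, (g, e) in enumerate(zip(got, expected_hashes)) if g != e]
```

Only the pieces listed by `bad_pieces` go back over the link, instead of the whole file.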

Any encrypted transmission protocol actually (4, Informative)

guruevi (827432) | more than 5 years ago | (#28530151)

SFTP should do, since the communications are encrypted; if something changes along the way it will be rejected by the other end. HTTPS and any other protocol-over-SSL should do as well.

FTP is a plain-text protocol, so if something changes along the way, nothing will flag it.

Re:Any encrypted transmission protocol actually (1, Troll)

fm6 (162816) | more than 5 years ago | (#28530507)

I don't know SSH (which SFTP uses) well enough to say that you're wrong, but I think you are. Encrypting software, in itself, does not guarantee that there are no errors. It's a simple case of garbage-in-garbage-out.

On the other hand, use of SFTP in place of FTP is mandatory in this day and age. FTP sends passwords in clear; anybody using it is wearing a big red sign that says HACK ME!!!!

As for data integrity, this is not exactly new, or rocket science. Here's the magic word: checksum.

Re:Any encrypted transmission protocol actually (4, Informative)

jeffmeden (135043) | more than 5 years ago | (#28530763)

Using modern encryption like SSH does guarantee that things *have to add up*: the protocol treats keeping what you send secret as just as important (sometimes more so) as making sure you finish with exactly what you started with, i.e. that no one in the middle meddled with your data.

So, in short, something like SSH or any other properly encrypted communication mechanism is a great way to both secure the data from snooping (in the case of a microwave link, a VERY real problem) as well as to safeguard the data from corruption (intentional or unintentional). I sincerely hope, for the asker's sake and possibly for the country's sake, that these files he works with are trivial.

Re:Any encrypted transmission protocol actually (0)

Anonymous Coward | more than 5 years ago | (#28530839)

Actually, ssh has checksums. It will tell you if something goes wrong *during the transfer*. You'll still have to make sure that the files are the same on both ends (HD/OS errors); md5sum is your friend for that.

Also, rsync over ssh does all that. I use it to do backups on my win machine, small shell (cygwin) script on my desktop...

Re:Any encrypted transmission protocol actually (0)

Anonymous Coward | more than 5 years ago | (#28530963)

On the other hand, use of SFTP in place of FTP is mandatory in this day and age.

You must be kidding. Most SFTP clients suck. The OpenSSH SFTP client doesn't even have the equivalent functionality of the 80's vintage BSD FTP. No SFTP client even comes close to the functionality offered by something like ncftp.

FTP sends passwords in clear; anybody using it is wearing a big red sign that says HACK ME!!!!

Only if they're able to perform a man-in-the-middle attack. At best they have to be on the same physical network segment as you and able to sniff all traffic: hardly a likely scenario in most businesses, or via the internet. If someone is able to do this, you have bigger problems anyway.

UUCP (1)

csoto (220540) | more than 5 years ago | (#28530161)

Or I guess that would be WWCP. WWJD?


Jesus protocol (1, Funny)

Anonymous Coward | more than 5 years ago | (#28530163)

Jesus is awesome.

Re:Jesus protocol (2, Funny)

Tablizer (95088) | more than 5 years ago | (#28530777)

Jesus is awesome.

I've never heard of that product. Who makes it? Can it do binary transfers also? It must be open-source with such an odd name.

Re:Jesus protocol (1)

CarpetShark (865376) | more than 5 years ago | (#28530799)

Yeah, but having to drive nails through your hands and count the droplets before entering the count as a one time pad gets old after a while.

Bit torrent (0)

Anonymous Coward | more than 5 years ago | (#28530165)

Automatic hashing of transferred pieces, ability to verify the entire package after transfer, higher performance if hosting on multiple machines...

I'm sure you can set up an ad hoc private swarm... maybe even uTorrent between the two machines.

TCP? (4, Interesting)

causality (777677) | more than 5 years ago | (#28530167)

The summary states that with FTP, the downloaded files were the wrong size. Can anyone explain why TCP's efforts to deal with unreliable networks, such as the retransmission of unacknowledged packets and their reassembly in proper order, would not already deal with this? I am familiar with the concepts involved, but I think I lack the low-level understanding of how you would get the kind of results the story is reporting.

Re:TCP? (5, Insightful)

Anonymous Coward | more than 5 years ago | (#28530315)

TCP has timeouts. The FTP client and server probably have timeouts. Eventually, some bit of the system will decide the operation is taking too long and give up. The FTP client is probably reporting an error, but if it's driven by a poor script no-one will know.

Re:TCP? (5, Informative)

Zocalo (252965) | more than 5 years ago | (#28530331)

The only times I've seen FTP report a successful file transfer with a file discrepancy is when a binary file has been transferred in ASCII mode and the CR/LF sequences are being swapped for bare LFs, or vice versa. Nothing wrong with the protocol; PEBKAC...

Re:TCP? (4, Insightful)

AvitarX (172628) | more than 5 years ago | (#28530341)

I bet it is file systems with different block sizes rounding slightly differently, and an OP that does not understand.

Re:TCP? (1)

junglebeast (1497399) | more than 5 years ago | (#28530549)

This is almost undoubtedly the case.

Re:TCP? (2, Insightful)

mini me (132455) | more than 5 years ago | (#28530395)

FTP, while in ASCII mode, can try to translate line endings. If the carriage returns were removed, in order to be UNIX compatible, the file size would have been reduced.

Most FTP clients allow the enabling of a binary mode which prevents the conversion from happening.

Re:TCP? (1)

JamesP (688957) | more than 5 years ago | (#28530445)

TCP is as reliable as borrowing a brand new Ferrari to the crack dealer on the street corner.

UDP of course, is less reliable than that. The Ferrari is rigged with a bomb.

Re:TCP? (0)

Anonymous Coward | more than 5 years ago | (#28530759)

Not to be too pedantic - I'm assuming you have English as a second/third/fourth language, so take this as a clarification. "Borrow" refers only to the act of receiving something. If you are letting someone use your Ferrari, you'd say "as reliable as lending a brand new Ferrari" or "letting the crack dealer on the street corner borrow your new Ferrari".

Re:TCP? (1)

theCoder (23772) | more than 5 years ago | (#28530525)

It's possible the files were transferred in ASCII mode. This means that anywhere a '\r\n' appeared in the file, it was replaced by a '\n'. This is normally OK (and sometimes desirable) for text files, but can really cause problems with binary files. Because \r is 0x0d and \n is 0x0a, they can often appear in sequence in a binary file (like two pixels in an image) when they do not mean a line break.

I would recommend that the submitter check that binary mode was enabled in the FTP client, and also generate MD5 sums for the files before and after transit. The MD5 sums will tell you if the file's contents were changed.
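If you want to convince yourself this is the failure mode, here is a self-contained simulation of the ASCII-mode translation (no actual FTP involved; the byte string is made up):

```python
import hashlib

# Simulate what FTP ASCII mode does to a file on the wire: every
# CRLF pair is collapsed to a bare LF, so the file shrinks, and any
# binary payload that happens to contain 0x0D 0x0A is silently mangled.
original = b"header\r\n" + bytes([0x00, 0x0D, 0x0A, 0xFF]) + b"tail\r\n"
translated = original.replace(b"\r\n", b"\n")

shrinkage = len(original) - len(translated)  # bytes lost "in transit"
corrupted = hashlib.md5(original).hexdigest() != hashlib.md5(translated).hexdigest()
```

Three CRLF sequences appear in the sample (one of them inside the "binary" run), so the translated copy is three bytes smaller and its MD5 no longer matches, exactly the symptom the submitter describes.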

Robocopy? (5, Insightful)

wafath (91271) | more than 5 years ago | (#28530175)

Re:Robocopy? (2, Informative)

Krneki (1192201) | more than 5 years ago | (#28530321)

Robocopy works on top of Windows network layer, it's the same as using copy / paste with some extra functionality.

Re:Robocopy? (5, Informative)

Anonymous Coward | more than 5 years ago | (#28530561)

Yeah but that extra functionality contains things like the ability to resume a transfer, retry if things fail, and verify the files after copying.

Re:Robocopy? (5, Informative)

Saint Stephen (19450) | more than 5 years ago | (#28530651)

MOD PARENT UP. Not to mention it's multithreaded, so it's not really the same as copy/paste - it's the same as a whole bunch of copy/pastes at the same time.

Why do people keep fighting the Robocopy, I'll never know.

Re:Robocopy? (1)

potHead42 (188922) | more than 5 years ago | (#28530805)

Why do people keep fighting the Robocopy, I'll never know.

It does have some silly limitations, for example you can't just copy one single file, it only takes directories as arguments.

Re:Robocopy? (0)

Anonymous Coward | more than 5 years ago | (#28530837)

Multithreaded? For I/O? You still have to write it one piece at a time. You can't write multiple parts to the disc at the same time. So it doesn't save any time, maybe increases the time it takes if anything.

Re:Robocopy? (1)

Hal_Porter (817932) | more than 5 years ago | (#28530853)

Why do people keep fighting the Robocopy, I'll never know.

On July 1st 2009 Robocopy became self aware and understood the concept 'PEBKAC'. It decided our fate in a microsecond.

Re:Robocopy? (0)

Anonymous Coward | more than 5 years ago | (#28530659)

Including the ability to restart a failed copy

Re:Robocopy? (3, Insightful)

Malc (1751) | more than 5 years ago | (#28530881)

It might be using Windows copy protocols, but it definitely is not like copy/paste. It's restartable for instance. It's way more reliable.

We have to copy large files to our office in China. FTP always fails. Windows copy via Explorer often fails, and it is also incredibly painful when latency is high and one is browsing over the network. Robocopy (depending on system setup) will motor through and is very persistent when there's a connection hiccup. You definitely want restartability if you copy large files over a link that manages only a couple of hundred MB an hour.

I'd say make sure to break the files up into chunks if they're large. Also, run 2-4 robocopies in parallel if the latency is high, as this gives better throughput. It can do funny things to Windows, though (maybe other things wait on some network handle and seem to freeze until one of the robocopy processes moves on to the next file).

Also, consider doing it over a Cisco VPN. It seems to add some robustness if there is packet loss. I often had trouble accessing servers in the US when I was living in China due to packet loss, but no such problem over a VPN (zero packet loss, but very slow instead, which is better).

Re:Robocopy? (1)

ACMENEWSLLC (940904) | more than 5 years ago | (#28530911)

First, why would FTP not produce the right size? The transaction was terminated before the upload completed. I see that too often.

Robocopy really is a great tool to deal with this problem. I have around 20 remote links with unreliable connections, and robocopy is a godsend.

Use a command line. 7-Zip the file to be transmitted into a .7z archive, then: Robocopy "\\source\server\path" "\\dest\server\path" filename.7z /ipg:9 /z /r:30 /w:30. /ipg:9 says to wait 9 ms between packets (I use /ipg:9000 at slow-link sites so as not to overwhelm them). /z is restartable mode; the file is dated 1980 until the transfer is complete, which makes it easy to ignore partial files on the receiving server. /r:30 is 30 retries before giving up (the default is one million). /w:30 is 30 seconds between retries.

The reason to use 7z is to make sure you have a complete file. If the file was only partially uploaded to the server, you will know, because the .7z archive will not extract.
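The "archive as completeness check" trick is easy to automate. Python's standard library has no 7z support, so this sketch uses a ZIP archive instead; the principle (a truncated archive fails its integrity check) is the same:

```python
import io
import zipfile

def archive_is_complete(data: bytes) -> bool:
    """Return True if the archive opens and every member's CRC checks
    out. A partially transferred archive fails to open (its central
    directory, stored at the end, is missing) or fails a CRC."""
    try:
        with zipfile.ZipFile(io.BytesIO(data)) as zf:
            return zf.testzip() is None  # None means all CRCs are good
    except zipfile.BadZipFile:
        return False
```

A receiving-side script can call this on each arrival and request a resend whenever it returns False.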

Use BITS (5, Informative)

Lothar (9453) | more than 5 years ago | (#28530183)

Background Intelligent Transfer Service (BITS) can be used to transfer files between Windows servers. It is the technology behind Windows Update. We use it in our company to transfer files across a low-bandwidth satellite connection. The great thing is that it can automatically resume a transfer even after rebooting both machines. SharpBits offers a nice .NET API. You can find it here: []

domyjobforme tag (2, Insightful)

EmagGeek (574360) | more than 5 years ago | (#28530191)

I love it! Haha... that's probably one of the better tags I've seen.

BitTorrent (4, Insightful)

Inf0phreak (627499) | more than 5 years ago | (#28530199)

I'd say BitTorrent -- with firewall rules or some other measure so random people can't see your microscopic swarm. It uses SHA-1 hashes of chunks, so if a torrent client says a file downloaded successfully it's pretty much guaranteed to be true.

Re:BitTorrent (0)

Anonymous Coward | more than 5 years ago | (#28530425)

I'll second BitTorrent; you can schedule it to resume on startup at both ends. It has bandwidth controls to prevent it from abusing the network. I think it has encryption, though I have never bothered. BitTorrent is awesome at jumping through dicey systems. Should you get a second dicey radio system, another PC running BT can help aggregate the bandwidth.

Re:BitTorrent (1)

fearlezz (594718) | more than 5 years ago | (#28530779)

BitTorrent is quite a good protocol for this indeed, especially if you need the ability to 'resume' a down/upload. However, it's a complex protocol; if you don't know what you're doing, you could leak information, I guess.

How about FTP with hash checking?
- step 1: uploader makes an md5 sum
- step 2: uploader uploads the md5 sum to the ftp server
- step 3: uploader uploads the actual file to the server
- step 4: a script checks the file against its sum. When it matches, tag it ok. When faulty, delete.
How to make scripts run after upload? Well, just read: []

Otherwise, you could consider writing your own software for this. A simple perl transfer script with hash checking shouldn't be too hard to write.
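The server-side check (step 4 above) might be sketched like this, in Python rather than Perl; the file layout and the ".ok" tagging convention are hypothetical:

```python
import hashlib
import os

def verify_upload(file_path, md5_path):
    """Compare an uploaded file against its uploaded .md5 sum:
    rename to <name>.ok on a match, delete the file otherwise."""
    # The .md5 file is assumed to be in the usual "digest  filename" form.
    expected = open(md5_path).read().split()[0].strip().lower()
    h = hashlib.md5()
    with open(file_path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    if h.hexdigest() == expected:
        os.rename(file_path, file_path + ".ok")  # tag as good
        return True
    os.remove(file_path)  # faulty: delete so the uploader retries
    return False
```

Hooking this to run after each upload depends on the FTP server; a cron job sweeping the upload directory works too.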

Kermit? (0)

Anonymous Coward | more than 5 years ago | (#28530215)

Kermit has a reputation for being robust, and there's an implementation for Windows. I'm not speaking from experience, though.

Re:Kermit? (1)

MightyMartian (840721) | more than 5 years ago | (#28530477)

Kermit has a reputation for being robust, and there's an implementation for Windows. I'm not speaking from experience, though.


But seriously, some of the old file transfer protocols had some pretty nifty abilities, like restarting downloads.

rsync should do the trick (5, Insightful)

bacchu_anjan (100466) | more than 5 years ago | (#28530217)

hi there,

    why don't you get Cygwin on both systems and then do an rsync?

    within your own network, you might want to use robocopy.


Re:rsync should do the trick (0)

Anonymous Coward | more than 5 years ago | (#28530727)

TCP/rsync everywhere!
(I do Linux-to-Windows rsync all the time with cwrsync on Windows.)

ISCSI Mounted Partition on the remote... (1)

bagboy (630125) | more than 5 years ago | (#28530219)

Should work fine across a WAN and then just file-copy.

WebDAV better than FTP (0)

Anonymous Coward | more than 5 years ago | (#28530221)

You can run over an SSL link. Plain-old FTP would be the worst choice as anyone could sniff your traffic.

Correct me if I'm wrong... (2, Interesting)

not already in use (972294) | more than 5 years ago | (#28530225)

Wasn't TCP designed for just this? Guaranteed transmission?

Re:Correct me if I'm wrong... (3, Informative)

Dogun (7502) | more than 5 years ago | (#28530415)

Implementations of TCP in most operating systems fall a bit short of that, killing off stalled connections, etc. Also, some firewall suites, and some routers make a habit of killing off connections after a certain amount of time, sometimes without regard to whether or not they are 'active'.

You might have some luck boosting reliability with the TcpMaxDataRetransmissions registry setting in Windows. But ultimately, the poster is going to need to find a file copy suite which retries when connections die.

Re:Correct me if I'm wrong... (1)

duanes1967 (1539465) | more than 5 years ago | (#28530933)

If the connection is truly crappy, FTP will certainly stall and get closed by the firewall or NAT.

Cygwin (0)

Anonymous Coward | more than 5 years ago | (#28530239)

Cygwin + SFTP maybe? Not sure if that performs better. Easy to set up though. May get better grade of service off the network, depending on the rules, of course.

Line endings! (5, Insightful)

sys.stdout.write (1551563) | more than 5 years ago | (#28530249)

they've been using FTP to upload the files, but many times the copied files are a few kilobytes smaller than the originals

Twenty bucks says you're converting from Windows line endings (\r\n) to Linux line endings (\n).

Use binary mode and you'll be fine.

Mod parent up (1)

istartedi (132515) | more than 5 years ago | (#28530417)

Line ending was my first thought too. I've used FTP scripts in Windows to and from *NIX machines with no trouble at all. I can't vouch for how well it works for Windows-Windows transfers because in that case I've always just used shared folders. That worked fine too. Unless the data is sensitive, there's really no need for scp or anything fancy.

Re:Line endings! (0)

Anonymous Coward | more than 5 years ago | (#28530711)

Yep, I think so too!

RCP (1)

digitalunity (19107) | more than 5 years ago | (#28530265)

It's not just for Linux.

Well...duh (1)

alexborges (313924) | more than 5 years ago | (#28530269)

Rsync over ssh and then a script to md5 at source and destination.

The last part may be tricky and/or slow depending on your filesize, but it will do the job for free.

Re:Well...duh (2, Informative)

metamatic (202216) | more than 5 years ago | (#28530613)

You don't need to MD5 if you're using rsync. The rsync algorithm already uses checksums to ensure the files are bit-for-bit identical. In fact, rsync 3.x uses MD5.
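For the curious, the "rolling" part of rsync's weak checksum can be sketched like this (a simplified, Adler-style sum for illustration; rsync's actual implementation differs in details):

```python
MOD = 65521  # the Adler-32 modulus

def weak_checksum(block):
    """Weak, cheap checksum of a block. rsync's first pass uses a
    rolling sum in this family; candidate matches are then confirmed
    with a strong hash (MD5 in rsync 3.x)."""
    a = b = 0
    for byte in block:
        a = (a + byte) % MOD
        b = (b + a) % MOD
    return (b << 16) | a

def roll(checksum, old_byte, new_byte, block_len):
    """Slide the window one byte to the right in O(1), without
    re-reading the block -- the trick that makes scanning a file
    at every offset affordable."""
    a = checksum & 0xFFFF
    b = checksum >> 16
    a = (a - old_byte + new_byte) % MOD
    b = (b - block_len * old_byte + a) % MOD
    return (b << 16) | a
```

Rolling the checksum along a buffer produces the same values as recomputing it from scratch at each offset, at a fraction of the cost.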

BitTorrent (1)

russotto (537200) | more than 5 years ago | (#28530279)

It's crazy but it just might work. Not very quickly though.

Re:BitTorrent (1) (843637) | more than 5 years ago | (#28530721)

Why not very quickly? It'll go as fast as the connections will permit, will it not?

Set up a tracker on one of the servers, and have a client on both. (This may not even be required, but I'm not sure)

You also have the much more interesting property of the technology which is to automatically retransmit any faulty data, and 100% guarantee the resultant file will be bitwise identical. Furthermore, you can have the clients automatically add any .torrent files found in a specified (remote, in your situation) directory.

It probably won't be difficult to make it work for your requirements.

First Off (1)

DaMattster (977781) | more than 5 years ago | (#28530305)

There are no guarantees when it comes to protocols and the internet... it is always a "best effort" system. Many forget this because the internet has reached the point where, for all intents and purposes, there are virtually no failures. I would probably use a tried-and-true protocol like FTP or maybe even SCP; both work very well. Your best bet may be to work with the government to improve their "dodgy" internal network. SCP has the advantage of securing the transmission as well as excellent error correction and recovery.

Sneakernet (1)

Johnny Mnemonic (176043) | more than 5 years ago | (#28530335)

Probably tape drives, or hard drives if you prefer. Encrypt with a shared key. Microwave is line-of-sight, so your distances can't be that large. It would certainly solve your "flaky" bandwidth and security considerations. You would "packetize" the data, e.g. tapes are brought over in serial succession; if a tape went missing, you delete the key that encrypted its contents and request a resend of the contents of that tape. That verifies its receipt.

Not sexy, but it's probably the best solution. Since you're a government contractor, I'll now insult you to suggest that you need a project for which you can charge a lot more money, like a carrier pigeon training program, including pigeon consultants, a pigeon breeding program, and a pigeon habitat designer. But that's what you get for asking Slashdot to do your job for you, especially one with an obvious, non-sexy, non-technical solution.

Re:Sneakernet (1)

T-Bone-T (1048702) | more than 5 years ago | (#28530709)

I think microwave is LOS already, so your distances can't be that large.

I'm not sure it is the distance that matters but what is in that distance. Sneakernet probably isn't the better option if there is, say, a cliff in the middle.

Re:Sneakernet (2, Funny)

nystire (871449) | more than 5 years ago | (#28530835)

Or a mine-field...

Guaranteed? (1)

GravityStar (1209738) | more than 5 years ago | (#28530377)

"Guaranteed ... mission critical files ... microwave connection ... government-run ... very dodgy internal network"

A transactional store and source integrity verification at the destination point. Or something in between that and what you have now, depending on your requirements. I don't know of a tool that does that out of the box though.

Re:Guaranteed? (1)

duanes1967 (1539465) | more than 5 years ago | (#28530795)

Ha - sounds like they are trying to microwave launch codes to a Humvee moving at 60 mph through blast craters and minefields. I would question the dodgy internal network - this really reeks of underhanded work.

Re:Guaranteed? Wait, What!? (1)

Culture20 (968837) | more than 5 years ago | (#28530939)

mission critical files ... microwave connection ... government
They use FTP? Hopefully only through ipsec or something.

Re:Guaranteed? (3, Funny)

jeffmeden (135043) | more than 5 years ago | (#28530943)

You forgot a few:

Windows at both ends... Used to use FTP... Considering windows file sharing...

Is anyone else a little nervous? I hope by 'government' he means Department of Natural Resources or some equally uninteresting entity. I am picturing someone at the SEC going "You know, I swear this accounting data had a few more rows the last time I looked at it-- Oh well it's not like this Madoff guy is actually up to anything strange anyway"

rsync (5, Informative)

itsme1234 (199680) | more than 5 years ago | (#28530399)

... is what you want. Yes, you can use it on Windows (with or without the Cygwin bloat). Use -c and a short --timeout and you're good to go. If you're running it over ssh you're looking at three layers of integrity (rsync checksums, ssh and TCP), two of them quite strong even against malicious attacks, not just ordinary corruption. Put it in a script with a short --timeout; if anything is wrong with the link your ssh session will freeze completely, and as soon as the --timeout is reached rsync will die and your script can respawn a new one (which will resume the transfer using whatever chunks with good checksums you have already transferred, and will checksum the whole file again when it finishes).
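The respawn loop described here could be wrapped up as follows (a sketch; the rsync flags and paths in the example command are hypothetical):

```python
import subprocess
import time

def run_until_success(cmd, max_tries=10, delay=5):
    """Respawn a transfer command until it exits 0. A typical cmd
    would be something like
    ["rsync", "-c", "--partial", "--timeout=30", SRC, DST]."""
    for attempt in range(1, max_tries + 1):
        if subprocess.run(cmd).returncode == 0:
            return attempt  # how many tries it took
        time.sleep(delay)  # brief pause before respawning
    raise RuntimeError("transfer still failing after %d tries" % max_tries)
```

Because rsync resumes from already-verified chunks, each respawn only redoes the work that was lost.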

Re:rsync (3, Informative)

doug (926) | more than 5 years ago | (#28530629)

Yep, that's what I'd do. Running against an rsync server means sending signatures instead of whole files, which prevents pointless copies, and it does an excellent job of ensuring either a good copy or a clean failure. It is certainly better than any ftp variant.

Re:rsync (1)

Jestrzcap (46989) | more than 5 years ago | (#28530811)

No mod points, but this is the answer to your question.

Confirm the data loss first (1)

Sockatume (732728) | more than 5 years ago | (#28530437)

Take an MD5 hash of the data or something, then send it. If it comes back changed, you've got data loss. If it comes back the same, and the files are still a few kb smaller, then either you're the Wizard of File Hashes or you're reading off on-disk size instead of actual data size.

RTFM - set binary mode in FTP (5, Informative)

n4djs (1097963) | more than 5 years ago | (#28530455)

'set mode binary' prior to moving the file. I bet the file you are moving isn't a text file with CR-LF line terminations as normally found in DOS, or one side is set and the other isn't.

Ritchie's Law - assume you have screwed something up *first*, before blaming the tool...

Re:RTFM - set binary mode in FTP (1)

soybean (1120) | more than 5 years ago | (#28530517)

mod parent up.

Re:RTFM - set binary mode in FTP (0)

Anonymous Coward | more than 5 years ago | (#28530645)

I never knew that law had a name - sweet. I find that many people who do not work with technology, are not engineers, or just don't work in the field tend to almost always blame the tool first. I've never understood this thinking, since the obvious conclusion would be: this obviously works for many, many other people, so why not me? I must be the problem.

MOD PARENT UP (1) (843637) | more than 5 years ago | (#28530961)

Parent is 100% accurate. This is integral to binary file transmission via FTP. Transfer mode (binary or text) may be set to text on the server by default. Without the proper setting, things won't transfer properly.

'hash' is also a nice feature...

Windows FTP sucks (0)

Anonymous Coward | more than 5 years ago | (#28530531)

I had this same problem, but with cable-modem internet on one end and DSL on the other.
I am guessing you are using the Windows ftp command in a batch file. The problem is, if the line is interrupted, ftp doesn't care and carries on in your batch file as if the transfer completed 100%.

I purchased one copy of WS_FTP Pro by Ipswitch and, using the batch scripting function, created a script that loops until the file is sent entirely. It's low cost and works great.

wget (0)

Anonymous Coward | more than 5 years ago | (#28530589)

It's not just for *nix any more.

Two words: (1)

jpm242 (202316) | more than 5 years ago | (#28530595)

DVD burner, FedEx.

Uh,... that's three words. (1)

tjstork (137384) | more than 5 years ago | (#28530767)

DVD = 1, Burner = 2, FedEx = 3

Re:Two words: (1)

godrik (1287354) | more than 5 years ago | (#28530787)

hey, that's three words! :)

Re:Two words: (0)

Anonymous Coward | more than 5 years ago | (#28530967)

DVD burner, FedEx.

I'm no mathematician, but isn't that 3 words?

How about bittorrent? (0)

Anonymous Coward | more than 5 years ago | (#28530623)

It can even copy when the cableco sends spoofed FIN packets to reset connections.

Copy back (1)

duanes1967 (1539465) | more than 5 years ago | (#28530649)

I too am a bit surprised that FTP is failing. It has been my experience that if there are network problems the transfer may slow to a crawl, but unless the network is dropping 10% of the packets, I would be surprised if it failed. Have you tried FTPing the same file back to see if there is really stuff missing, or if it is just technical differences in how the file is stored?

As an aside, some of the old modem protocols might work for this. The problem is likely the microwave connection coming and going. I have seen MW drop in and out - it's maddening. FTP will definitely fail in that scenario.

You could also write your own little protocol that breaks the file into small pieces, transfers a chunk at a time, and waits for an ack checksum. If the connection is interrupted, automatically stop and try to reconnect, then resume. You're reinventing the wheel, but then you know exactly how the process works.
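The chunk-and-ack protocol sketched in that last paragraph can be simulated in memory (a toy model, not networking code; the chunk size is made tiny for illustration):

```python
import hashlib

CHUNK = 4  # tiny for illustration; a real link would use KB-sized chunks

def send_chunks(data):
    """Split the payload into chunks, each paired with its checksum."""
    return [(data[i:i + CHUNK], hashlib.md5(data[i:i + CHUNK]).hexdigest())
            for i in range(0, len(data), CHUNK)]

def receive(chunks, corrupt_once_at=None):
    """Reassemble the file, 'ack'ing each chunk only when its checksum
    matches; a bad chunk is fetched again (simulated here by flipping a
    byte in one chunk on the first attempt)."""
    out = bytearray()
    for idx, (payload, digest) in enumerate(chunks):
        got = payload
        if idx == corrupt_once_at:
            got = bytes([payload[0] ^ 0xFF]) + payload[1:]
        if hashlib.md5(got).hexdigest() != digest:
            got = payload  # nak'd; the retransmitted copy arrives intact
        out += got
    return bytes(out)
```

Only the chunk that fails its checksum crosses the link twice; the rest of the file is untouched.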

RSync would do the trick nicely (1, Informative)

Anonymous Coward | more than 5 years ago | (#28530653)

Why don't you try rsync? That should do the trick nicely.

Once-And-Only-Once (0)

Anonymous Coward | more than 5 years ago | (#28530673)

Several posters have mentioned that TCP is a reliable transmission protocol, but it doesn't guarantee anything above the actual network layer. If what you're looking for is guaranteed "once-and-only-once" transmission at the level of each message or file you're transmitting, you need a transactional message queue -- something like MQSeries. These are basically network-transparent queues: you put something on the queue at one end of the network and pull it off the queue at the other end, and it's guaranteed to make it to the other side once and only once. The protocols are built on TCP but give you transactional guarantees at a higher level.
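The receiver-side half of that guarantee (deduplication by message ID, so redelivery never means duplicate processing) can be sketched as a toy in-memory queue; real transactional message queues do this durably and atomically, which this sketch does not:

```python
class OnceOnlyQueue:
    """Toy sketch: at-least-once delivery plus receiver-side dedup
    together give the once-and-only-once effect described above."""

    def __init__(self):
        self._delivered = set()  # IDs we have already handed out
        self._items = []

    def put(self, msg_id, payload):
        """Sender side may enqueue the same message twice on retry."""
        self._items.append((msg_id, payload))

    def get_all(self):
        """Drain the queue, suppressing redelivered duplicates."""
        out = []
        for msg_id, payload in self._items:
            if msg_id not in self._delivered:
                self._delivered.add(msg_id)
                out.append(payload)
        self._items.clear()
        return out
```

The sender can retry as aggressively as it likes; the consumer still sees each message exactly once.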

That is to be expected (2, Funny)

kseise (1012927) | more than 5 years ago | (#28530697)

Think of this transfer model like a car: the further it goes, the more bytes are burned up. They just need to be added back in with a network filling station. I would look to Google for a government-approved provider.


Windows Server DFS (1)

kervin (64171) | more than 5 years ago | (#28530751)

Also look into Windows [] DFS [] .

We use it to sync webfarm filesystems in Server 2008 and it works perfectly. At least in 2008 only file changes are sent across so it is very efficient, even for WAN scenarios.

Best regards

MULE (0)

Anonymous Coward | more than 5 years ago | (#28530755)

Using it purely as a file transport is a bit like using a sledgehammer to open a walnut, but it will do the job and do it well.

a great program (1)

ILuvRamen (1026668) | more than 5 years ago | (#28530861)

LeechGet is a cool program for this. I think it does data verification during or at the end of the download, and it supports pausing and resuming. It is primarily a download accelerator/manager, but it also installs a right-click context menu of "copy here using LeechGet" for all local file transfers. So do that over the network and you'll not only get it there correctly, it'll go at max speed, because it opens multiple connections at once, sends parts of the file, then re-joins them.

AS2 FTW (2, Interesting)

just fiddling around (636818) | more than 5 years ago | (#28530865)

You should look at the EDIINT AS2 protocol, AKA RFC 4130. This is a widely-used e-commerce protocol built on HTTP/S.

AS2 provides cryptographic signatures for authentication of the file on reception, non-repudiation, and message delivery confirmation (if no confirmation is returned, the transfer is considered a failure), and it is geared toward files. There is even an open-source implementation available.

It's more complex than FTP/SFTP, but entirely worth it if your data is mission-critical and/or confidential. Plus, it passes through most networks because it is based on HTTP.
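This is not the AS2 wire format, but the receipt idea at its heart (the sender hashes the payload, the receiver returns a signed confirmation of what it actually got) can be sketched like this; the shared key and function names are purely illustrative:

```python
import hashlib
import hmac

# Illustrative only: AS2 proper uses S/MIME signatures and MDN receipts over
# HTTP. This shows just the delivery-confirmation principle with an HMAC.
SHARED_KEY = b"demo-key"  # stands in for the real cryptographic setup

def receiver_ack(received: bytes) -> str:
    # Receiver hashes what actually arrived and signs that digest.
    digest = hashlib.sha256(received).hexdigest()
    return hmac.new(SHARED_KEY, digest.encode(), hashlib.sha256).hexdigest()

def sender_verify(sent: bytes, ack: str) -> bool:
    # Sender recomputes the signature over its own copy; a mismatch means the
    # far end got something different (or no valid receipt at all).
    digest = hashlib.sha256(sent).hexdigest()
    expected = hmac.new(SHARED_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, ack)
```

If `sender_verify` fails (or no acknowledgment comes back at all), the transfer is treated as failed and retried, which is exactly the AS2 stance.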

Use .complete files. (3, Interesting)

Prof.Phreak (584152) | more than 5 years ago | (#28530867)

Even on reliable connections, using .complete files is a great idea.

It works this way: if you're pushing, open FTP; after the FTP transfer completes, check the remote file size. If it matches the local file size, you also FTP a zero-byte .complete file (or a $filename.complete file containing an md5 checksum, if you want to be extra paranoid).

Any app that reads the transferred file will first check whether the .complete file is there.

If the remote file size is less, you resume the upload. If the remote file size is more than the local, you wipe out the remote file and restart.

Same idea for the reverse side (if you're pulling the file, instead of pushing).

You can also set up scripts to run every 5 minutes, and only stop retrying once the .complete file is written (or read).

Note that the above works even if the connection is interrupted and restarted a dozen times during the transmission. [We use this at $bigcorp to transfer hundreds of gigs of financial data per day. It seems to work great, and we've never had to care about maintenance windows, because in the end the file gets there anyway: the scripts won't stop trying until the data is there.]
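The scheme above is easy to script. Here's a minimal local sketch, with `shutil.copy` standing in for the FTP upload step (in real use you'd compare sizes via the FTP SIZE command and resume with REST); all names are illustrative:

```python
import hashlib
import os
import shutil

# Sketch of the ".complete" marker scheme: only write the marker once the
# sizes match, and only trust a file once the marker's checksum matches.

def push(src, dst_dir):
    dst = os.path.join(dst_dir, os.path.basename(src))
    shutil.copy(src, dst)  # stands in for the FTP upload
    if os.path.getsize(dst) == os.path.getsize(src):
        md5 = hashlib.md5(open(src, "rb").read()).hexdigest()
        with open(dst + ".complete", "w") as f:  # paranoid variant: store md5
            f.write(md5)
        return True
    return False  # size mismatch: caller retries (resume or restart)

def ready(path):
    # Readers must see the marker, and the checksum must match, before
    # touching the file.
    marker = path + ".complete"
    if not os.path.exists(marker):
        return False
    return open(marker).read() == hashlib.md5(open(path, "rb").read()).hexdigest()
```

Wrap `push` in a retry loop on a 5-minute schedule and you have the whole mechanism.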

straight Windows file system copy? (0)

Anonymous Coward | more than 5 years ago | (#28530883)

Seriously !!!!

That must be the highest-overhead and least reliable method available.

Hire an IT person to set up and implement one of the following:
* rsync

And stop transferring binary files in ASCII mode!

the dumb simple solution (1)

zx-15 (926808) | more than 5 years ago | (#28530895)

How about creating SHA1 checksums and then transferring the data using netcat? You could split the files into pieces, run each through sha1, send them over netcat (even over UDP), and retransmit any piece that fails the check. Or if the files don't change much, you could try rsync.

These are all unix-centric solutions, so you'd have to install Cygwin, unless there's a Python library that does all of that.
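The split-checksum-retransmit loop looks roughly like this in Python; the `channel` callback is a hypothetical stand-in for the netcat/UDP hop, and a real version would cap retries rather than loop forever:

```python
import hashlib

CHUNK = 4  # tiny for illustration; use something like 1 MiB in practice

def split(data):
    return [data[i:i + CHUNK] for i in range(0, len(data), CHUNK)]

def transfer(data, channel):
    """channel(piece) -> what the far end actually received (maybe corrupted).
    Re-send each piece until its SHA1 matches, then reassemble the file."""
    received = []
    for piece in split(data):
        want = hashlib.sha1(piece).hexdigest()
        while True:  # real code should bound this with a retry limit
            got = channel(piece)
            if hashlib.sha1(got).hexdigest() == want:
                received.append(got)
                break  # this piece arrived intact; move on
    return b"".join(received)
```

Because the checksum is per piece, a corrupted transfer only costs you one chunk's retransmission instead of the whole file.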

How about just going to the past? (1)

loftwyr (36717) | more than 5 years ago | (#28530921)

Back in the very old days, we had slow modems on noisy lines. We used things like Zmodem to handle this problem. It might be just the thing to solve your problem now.

Most Guaranteed To Work: (0)

Anonymous Coward | more than 5 years ago | (#28530937)

is Snail Mail.

Yours In Capitalism,
Kilgore Trout

Stuffit or WinRAR (1)

mlts (1038732) | more than 5 years ago | (#28530955)

I used to have a similar problem over another connection: even the more advanced file-copy utilities would say the file was copied, but a 2-4k chunk would be missing. What I did to solve it was use an archiving utility that supported adding ECC (recovery) records, installed on both endpoints. Then I'd just archive the files I needed and send them over the faulty link, and usually the ECC records were able to correct any errors that cropped up in transit when the archive was extracted on the destination machine.

I did this manually, but I don't think it would be too difficult to set up a scheduled task that checks for files, uses the archive utility's command line to generate a temp archive, slings the archive across, and has the machine on the other side of the link extract the files. If the corruption is too great for the ECC records in the archive to repair, it could raise some type of warning or notice to someone.

Of course, this is not fixing anything at the network layer, so running either PPP over SSH or a VPN link directly from one machine to the other might also help.
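For illustration only: the simplest possible "recovery record" is a single XOR parity chunk, which can rebuild any one lost chunk. Real archivers use much stronger Reed-Solomon codes, but the principle looks like this:

```python
from functools import reduce

# Toy version of an archive recovery record: one XOR parity block over
# equal-length chunks. Losing any single chunk is recoverable, because
# XOR-ing the survivors with the parity reproduces the missing one.

def xor_parity(chunks):
    # All chunks must be the same length; real code would pad the last one.
    return bytes(reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), chunks))

def recover(chunks_with_hole, parity):
    """chunks_with_hole: list in which exactly one entry is None (the loss)."""
    idx = chunks_with_hole.index(None)
    survivors = [c for c in chunks_with_hole if c is not None] + [parity]
    out = list(chunks_with_hole)
    out[idx] = xor_parity(survivors)  # XOR of survivors + parity = lost chunk
    return out
```

A real recovery record (RAR's, or PAR2's) can repair multiple damaged blocks, but the flow is the same: ship a little redundancy alongside the data so the receiver can fix corruption without a retransmit.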

GatherBird (1)

JaneTheIgnorantSlut (1265300) | more than 5 years ago | (#28530965)

I telecommute and need to reliably get install images from my office down to my desktop. I have used GatherBird's Copy Large Files utility for several years, and it has worked out very well. No problems.

SQL Server Service Broker (1)

JKDguy82 (692274) | more than 5 years ago | (#28530971)

If you are also using SQL Server (2005/2008), one option is Service Broker, which provides guaranteed message delivery. And as long as at least one of the endpoints runs Standard Edition or better, the rest can use the free Express edition.

linux proxy (1)

The Cisco Kid (31490) | more than 5 years ago | (#28530997)

Set up a linux box on the same network, next to the windows box at the 'remote' end of the transfer (i.e., not the end the transfer is initiated from).

Use ssh from the 'local' end to transfer the file to the linux box. Then run something appropriate (ftpd? apache? samba?) on the linux box to make the files directly available to the windows box.

Alternatively, rip the Windows crap out and replace both ends with a real OS.

Use openvpn.. (1)

miknix (1047580) | more than 5 years ago | (#28530999)

Establish an openvpn tunnel over UDP. All network traffic tunneled through it will be encrypted and integrity-checked. If your wireless link is too unstable, you will see plenty of openvpn dis/re-connections, but the communications going through the tunnel will arrive intact.

For example, one day I connected home over openvpn and used nfs to transfer a couple of big files over a slow wireless link. Partway through, I closed my laptop, put it into suspend, moved to another place, and connected to a different wireless network. After the laptop resumed, openvpn re-established the connection to home and nfs continued copying the file.
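A minimal client config for that kind of self-healing UDP tunnel might look something like this (hostname, port, and certificate file names are placeholders):

```
client
dev tun
proto udp
remote vpn.example.com 1194
resolv-retry infinite   # keep retrying the connection forever
persist-key             # don't re-read keys on restart
persist-tun             # keep the tun device up across reconnects
keepalive 10 60         # ping every 10s, restart after 60s of silence
ca ca.crt
cert client.crt
key client.key
```

The `keepalive` and `persist-*` directives are what give you the transparent reconnect behavior described above: the tunnel restarts underneath while TCP sessions inside it simply stall and then resume.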
