How to speed up file transfer from server to server over the Internet?
Solution 1
Is this normal for your location? If you transfer 2GB to other sites, what sort of speed do you usually see? What is the network speed at your location and at the remote location?
The transfer speed over long distances mostly depends on the network bandwidth available between the two locations and on every hop in between. You are transferring data over the public Internet, which means speeds can vary greatly over time; the Internet does not guarantee a minimum speed.
Your best bets are:
- Before transferring the data, compress it on your local server so there is less to send.
- Use rsync to compress the data on the fly. See the examples at http://en.wikipedia.org/wiki/Rsync#Examples
- Break the data into chunks which you transfer one at a time. This won't speed up the data transfer, but it will make the transfer more fault tolerant (you won't need to restart from the beginning if the transfer fails 99% of the way through). Compression can help.
- A different ISP or network route may offer better transfer speeds; it can be worth testing one.
Solution 2
SF community members may not be happy about being taken back to the Stone Age of Unix, but for low protocol overhead and good compression you might want to try a combination of dd + netcat + bzip2. Note that this is not secure, so you should close the port to everything except the two nodes. No guarantees, no security, no authentication... but it is fast.
1. Compress your file using bzip2 to get, say, file.bz2.
2. Listen using netcat on node2:
nc -l 6668 | dd of=/dir/file.bz2
3. Push it from node1:
dd if=/dir/file.bz2 | nc node2 6668
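Putting the steps together, a minimal end-to-end sketch (hostname, port, and paths are placeholders, and the listener on node2 must be started before the push):

```shell
# On node1: compress first; -k keeps the original file alongside file.bz2
bzip2 -k /dir/file

# On node2: start the listener, writing whatever arrives to disk
nc -l 6668 | dd of=/dir/file.bz2

# On node1: push the compressed file over the raw TCP connection
dd if=/dir/file.bz2 | nc node2 6668

# On node2: test the archive integrity, then decompress
bzip2 -t /dir/file.bz2 && bunzip2 /dir/file.bz2
```

The `bzip2 -t` check is worth the extra step here: plain netcat gives you no error detection beyond TCP's own checksums, so verifying the archive catches a truncated or corrupted transfer.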
Himanshu Matta
Updated on September 18, 2022

Comments
- Himanshu Matta, over 1 year ago: How can I transfer a file from one server to another at a decent speed? Right now I am using FTP, but it is taking so long: a 2 GB file takes around 3 hours. Is there any procedure faster than FTP? Server locations: one in India and one in the US.
- SuperMagic, about 11 years ago: Nope. 2 GB (gigabytes base 2, more than 17 billion bits) transferred at 2 Mb/s (megabits base 10) would take 8,590 seconds, or 2 hours, 23 minutes. But when you include overhead (Ethernet, TCP/IP and, shudders, FTP) it's more than 3 hours. I assumed (yes, I know what assuming does) 20% overhead for FTP and the old stand-by 2 bits per byte for TCP/IP and Ethernet, and get 3 hours 34 minutes.
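As a quick sanity check of the arithmetic above (2 GiB over a 2 Mb/s link, before protocol overhead):

```shell
# 2 GiB expressed in bits, divided by a 2,000,000 bit/s link rate
bits=$((2 * 1024 * 1024 * 1024 * 8))
echo "$((bits / 2000000)) seconds"   # 8589 seconds, i.e. about 2 h 23 min
```

That matches the raw-transfer figure quoted; the 3+ hour real-world time then comes from the assumed protocol overhead.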
- Himanshu Matta, about 11 years ago: I made a script which transfers a file regularly. I don't want to use the root password, so I am not using SCP, and FTP is taking so much time... any other solution?
- Ruben, about 11 years ago: SCP works with any user account, not just root.
- grassroot, about 11 years ago: SCP will not be any faster than FTP. On the contrary, it includes encryption, which will slow down the transfer.
- Himanshu Matta, about 11 years ago: I think something is going wrong. Normally we use FTP to transfer files, but in this case it is taking so long. Can you tell me what conditions can affect FTP transfer speed?
- Himanshu Matta, about 11 years ago: I have no experience with these tools... are you sure that if I use them I'll be able to transfer files at a good speed?
- Joel E Salas, about 11 years ago: Unless you're willing to invest in WAN optimization technology like Aspera or similar, you're stuck using gzip compression with rsync.
- Himanshu Matta, about 11 years ago: A new twist in the story... I tested file transfer using FTP. If I transfer a small file from the command line it takes 1 or 2 seconds, but if I run the command through PHP code it takes 15 to 20 seconds.
- Ross, about 8 years ago: I like it. But you can optionally use ncat to solve these pitfalls. It has SSL as well as IP address filtering, so you won't need to be concerned with anyone listening in, or with errant packets corrupting your files on the listening end.
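The ncat variant Ross describes might look like the following; the port, hostnames, and allowed address are placeholders, and exact flag support can vary with your Ncat version:

```shell
# On node2 (receiver): listen with SSL enabled and only accept
# connections from node1's address, writing the stream to disk.
ncat -l 6668 --ssl --allow node1 > /dir/file.bz2

# On node1 (sender): connect with SSL and push the compressed file.
ncat --ssl node2 6668 < /dir/file.bz2
```

With `--ssl` but no certificate options, Ncat generates a temporary certificate, which encrypts the stream but does not authenticate the peer; for real authentication you would supply your own certificate and key.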