Thread: pureftpd limitation?
-
01-25-2004, 02:43 AM #1 Junior Guru
- Join Date
- Jun 2003
- Location
- Central California, USA
- Posts
- 197
pureftpd limitation?
I'm trying to transfer a 5.5GB file over shell FTP. Every time the transfer reaches 2GB of the 5.5GB, the following message is displayed:
421 Service not available, remote server has closed connection
One support person I talked with said it's a limitation of Red Hat's ext3 file system, and that it can't address a file greater than 2GB.. but the server is running RH 9.0, and from the research I've done, RH 7.3 and up support LFS (Large File Support) on ext3..
Can anyone shed some light on the situation? Any help would be greatly appreciated..
-
01-25-2004, 03:12 AM #2 Junior Guru
- Join Date
- Dec 2002
- Location
- Canada
- Posts
- 197
Hello,
You might want to check the file size limit in the remote system's kernel and in proftpd.
To check the kernel-side file size limit, log in as root over SSH and type:
ulimit -a
Sample output:
ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
file size (blocks, -f) unlimited
max locked memory (kbytes, -l) unlimited
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 7168
virtual memory (kbytes, -v) unlimited
The above output shows that the file size is unlimited.
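If it shows a number instead of "unlimited", you can lift it for the current session (as root) before starting the transfer:
Code:
ulimit -f unlimited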
To check the file size limit in proftpd.conf, look at:
MaxRetrieveFileSize -- restricts the size of downloaded files
MaxStoreFileSize -- restricts the size of uploaded files
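For example, a proftpd.conf excerpt might look like this (the 6 Gb values are just illustrative; if the directives are absent, no limit is imposed):
Code:
MaxRetrieveFileSize 6 Gb
MaxStoreFileSize 6 Gb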
Hope that helps.
-
01-25-2004, 03:28 AM #3 Web Hosting Master
- Join Date
- Feb 2003
- Posts
- 1,162
xerophyte: he's using pureftpd, not proftpd.
IceCell: after verifying that your kernel supports files larger than 2GB, pureftpd ALSO has to be compiled with
--with-largefile
to enable files larger than 2GB to be transferred.
Yes, 2GB is the default limit of pureftpd, simply because it's not recommended to transfer such huge files over FTP.
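If you built pureftpd from source, the rebuild is typically just the following (install paths may differ on your setup), and then restart pureftpd so the new binary is actually running:
Code:
./configure --with-largefile
make
make install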
-
01-25-2004, 04:48 AM #4 Junior Guru
- Join Date
- Jun 2003
- Location
- Central California, USA
- Posts
- 197
Thanks for the quick responses so far..
I have checked both items suggested, and both have LFS (Large File Support)..
I recompiled pureftpd with the --with-largefile option, just to make sure, and still no go..
Is there any way of checking that the "--with-largefile" option took effect? Something similar to the kernel check command?
Thanks..
-
01-25-2004, 05:01 AM #5 Web Hosting Master
- Join Date
- Feb 2003
- Posts
- 1,162
You did restart pureftpd after recompiling, right?
Well, I don't know of a command that will verify whether large file support is enabled in pureftpd. AFAIK --with-largefile does that job.
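One rough heuristic (not an official pureftpd check): on a 32-bit box, a binary built with large file support usually references the 64-bit libc entry points, so you can look for them in its dynamic symbol table (the path below assumes a default source install):
Code:
objdump -T /usr/local/sbin/pure-ftpd | grep -E 'open64|lseek64'
If those symbols show up, the rebuild most likely took effect.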
So how are you transferring the files? From one server to another? Please note that the FTP client, the server running the client, the FTP server, and the server running the FTP server all have to support this for it to work.
I suggest you split the file into 3 parts and transfer them individually.
Last edited by qm8309; 01-25-2004 at 05:06 AM.
-
01-25-2004, 05:17 AM #6 Junior Guru
- Join Date
- Jun 2003
- Location
- Central California, USA
- Posts
- 197
The servers are in the same datacenter on 100mbps ports..
Using FTP via SSH..
Both servers have kernel support (as I created the 5GB .tar.gz on one of them), and I assume pureftpd only has to have large file support on the server I'm connecting to.. (transferring to)
Is there any way to transfer an entire directory via shell FTP? put and get only work with files, not directories..
Or any suggestions for breaking it up..
Thanks.
-
01-25-2004, 05:35 AM #7 Web Hosting Master
- Join Date
- Feb 2003
- Posts
- 1,162
To split the file:
Code:
split -b 1000m filename filename2.
Transfer all the generated filename2.* files over, then reconstruct the original with:
Code:
cat filename2.* > filename
You may then want to recompile pureftpd without large file support, as it adds some overhead and can make the server slower.
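It's also worth verifying the reassembled file against the original, e.g. by comparing checksums on both servers:
Code:
md5sum filename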
-
01-25-2004, 10:02 AM #8 Junior Guru
- Join Date
- Dec 2002
- Location
- Canada
- Posts
- 197
If you're having problems with FTP, why not use rsync? See man rsync. I think that would be the best method.
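Something along these lines (a minimal sketch; the host and paths are placeholders). The --partial flag keeps partially transferred files around, so an interrupted transfer can be resumed instead of starting over:
Code:
rsync -av --partial -e ssh /path/to/bigfile root@otherserver:/destination/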
Hope that helps.