FTP large files (Answering my own question)
[1/7] from: doug::vos::eds::com at: 5-Apr-2002 15:28
About FTP of large files.
Here is a quote from the REBOL documentation...
(sorry, I did not read it first...)
Transferring Large Files
Transferring large files requires special considerations. You may want to
transfer the file in chunks to reduce the memory required by your computer
and to provide user feedback while the transfer is happening.
Here is an example that downloads a very large binary file in chunks.
inp: open/binary/direct ftp://ftp.site.com/big-file.bmp
out: open/binary/new/direct %big-file.bmp
buf-size: 200000
buffer: make binary! buf-size + 2
total: 0
while [not zero? size: read-io inp buffer buf-size] [
    write-io out buffer size
    total: total + size
    print ["transferred:" total]
]
Be sure to use the /direct refinement; otherwise, the entire file will be
buffered internally by REBOL. The read-io and write-io functions allow reuse
of the buffer memory that has already been allocated. Other functions, such
as copy, would allocate additional memory.
If the transfer fails, you can restart FTP from where it left off. To do so,
examine the output file or the size variable to determine where to restart
the transfer. Open the file again with a custom refinement that specifies
restart and the location from which to start the read. Here is an example of
the open function to use when the total variable indicates the length
already read:
inp: open/binary/direct/custom
ftp://ftp.site.com/big-file.bmp
reduce ['restart total]
You should note that restart only works for binary transfers. It cannot be
used with text transfers because the line terminator conversion that takes
place will cause incorrect offsets.
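Putting the restart open together with the chunked loop, a resume might look
like the sketch below. This is only a sketch based on the documentation
quoted above: the URL and filename are placeholders, and it assumes the
partial local file exists so size? can supply the offset. It uses
write/binary/append per chunk (which keeps the existing partial data but,
via copy/part, allocates a little memory per pass, unlike write-io):

```rebol
REBOL []

; Sketch: resume an interrupted FTP download (REBOL/Core 2.x).
; Assumes the partial file %big-file.bmp already exists locally.
total: size? %big-file.bmp              ; bytes already on disk

inp: open/binary/direct/custom
    ftp://ftp.site.com/big-file.bmp
    reduce ['restart total]             ; server resumes at this offset

buf-size: 200000
buffer: make binary! buf-size + 2

while [not zero? size: read-io inp buffer buf-size] [
    ; append each chunk so the existing partial data is kept
    write/binary/append %big-file.bmp copy/part buffer size
    clear buffer                        ; read-io adds to the buffer, so reset it
    total: total + size
    print ["transferred:" total]
]
close inp
```

As the documentation notes, this only makes sense for binary transfers;
with text transfers the line-terminator conversion throws the offsets off.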
[2/7] from: doug:vos:eds at: 5-Apr-2002 16:43
Actually - this is really crazy...
Can you get this code to work?
rfile: ftp://bigserver/bigfile.zip
lfile: %/d/data/bigfiles/bigfile.zip
inp: open/binary/direct rfile
out: open/binary/new/direct lfile
total: 0
buf-size: 200'000'000 ; change this to any size you want
buffer: make binary! buf-size + 2
while [not zero? size: read-io inp buffer buf-size] [
    write-io out buffer size
    total: total + size
    print ["transferred:" total]
]
close inp
close out
[3/7] from: doug::vos::eds::com at: 5-Apr-2002 16:36
Re: FTP large files (Answering my own question) - NOT!
Just tried this code snippet that I pasted
from the official REBOL documentation.
It does not work very well:
1. Zip files transferred with this method won't open.
2. buf-size is not what it says it is.
3. I thought write-io and read-io were not supposed to be used.
Any suggestions on how to improve it?
So, I guess I still have questions.
How are people transferring large files (e.g. 100 to 200 MB)
with REBOL's ftp:// ?
[4/7] from: doug::vos::eds::com at: 5-Apr-2002 17:13
FTP Large Files (the saga continues...)
I still think FTP of large files should be improved
in REBOL/Core 2.6, since I cannot get any code
to work reliably in version 2.5.
That does not mean 2.5 has a bug;
it just means that it is difficult for
a person to make it work reliably.
If it ain't simple and reliable,
it still needs to be improved (in my opinion).
Or does someone have a function
that implements a simple and reliable way
to FTP large files using REBOL/Core 2.5?
-DV
[5/7] from: doug::vos::eds::com at: 5-Apr-2002 17:25
FW: ftp download of large file and read-io ....
Has this code been improved since Sept 12th, 2001?
FTP of large files is apparently a "well guarded secret"
among REBOLers.
I will fiddle some more with Petr's version here
and let you know my results.
-DV
[6/7] from: doug::vos::eds::com at: 5-Apr-2002 18:22
Reliable FTP of large binary files with REBOL/Core 2.5
Just completed three successful tests with this code:
;---------------------------------------------------------------------------
Transferred a 160 MB zip file in 4 minutes 53 seconds
(650 MB unzipped):
164205447
FTP Transfer took: 0:04:53
;---------------------------------------------------------------------------
Modified from code by someone at REBOL Technologies
and Petr K.
REBOL []

ftp-large-binary-file: func [
    {FTP large binary files - like the name says.
    Could actually be used/modified to transfer large files without ftp
    or using some other protocol...}
    ftp-site [url! file!] {The ftp site we are getting the file from.}
    xfile-to [url! file!] {The target file we are writing to.}
    /local buf-size buffer source target total size start
][
    buf-size: 64000
    buffer: make string! buf-size
    source: open/binary/direct ftp-site
    target: open/binary/new/direct xfile-to
    total: 0
    size: 0
    start: now/time
    while [not zero? size: read-io source buffer buf-size] [
        write-io target buffer size
        clear buffer  ; read-io adds to the buffer, so empty it each pass
        total: total + size
        print total
    ]
    close source
    close target
    print ["FTP Transfer took: " now/time - start]
]
;---------------------------------------------------------------------------
ftp-site: ftp://your.big.server.com/big-file.zip
lfile: %/d/data/local/big-file.zip
ftp-large-binary-file ftp-site lfile
[7/7] from: chalz:earthlink at: 5-Apr-2002 23:57
Re: FW: ftp download of large file and read-io ....
> So I just rewrote core.pdf example of ftp download of large file, and it
> doesn't work, unless I clear the buffer each time. What am I doing
> wrong?
Isn't that kind of how buffers are *supposed* to work? At least, in any
programming language I've ever worked in (which, granted, has only been 3
others) or classes I've taken, buffers were written to, filled, dumped,
emptied, and the process repeated. Otherwise, it continues to grow until
something stops it (such as an overflow).
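That matches what the working version in message 6 does. The thread's
evidence suggests that in REBOL 2, read-io adds the newly read data to the
buffer rather than overwriting it, while write-io always writes from the
head of the buffer; so without a clear, the buffer keeps growing and earlier
chunks get written again, corrupting the output. A minimal before/after
sketch of the loop body (inp, out, buffer, and buf-size as in the earlier
examples; this is my reading of the thread, not official documentation):

```rebol
; Broken: buffer keeps growing, and write-io re-writes old bytes
; from the head of the buffer, so the output file is corrupted.
while [not zero? size: read-io inp buffer buf-size] [
    write-io out buffer size
]

; Fixed: empty the buffer after each write so the next read-io
; starts from a clean slate.
while [not zero? size: read-io inp buffer buf-size] [
    write-io out buffer size
    clear buffer
]
```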