From another list:

To start: we use lftp for a lot of large file transfers at XYZCorp, pulling 1 MB to 10 GB files via FTP, HTTP, SFTP, SCP, etc. It generally works well, but it consumes a lot of CPU. Should an HTTP download really peg a CPU core at 100%? That seems excessive to me. Happy to pay market rate plus a few beers to anyone who can help us figure out whether this can be improved. We can provide a reusable sample and an environment that demonstrates the behavior.

== End of message

This is on a Linux server. If you're interested, I'll introduce you to the author.

Chris
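The complaint is about client-side CPU cost per byte transferred. As a rough sketch of how one might quantify that (my own illustration, not from the message; `cpu_fraction_of_download` is a hypothetical helper), the script below serves a file over loopback HTTP and reports CPU-seconds per wall-clock second for the download. Caveat: server and client run in one process here, so the figure includes server-side work; for lftp itself you would measure the separate process instead.

```python
# Sketch (illustrative, not the poster's setup): measure CPU time spent
# per wall-clock second while downloading a file over plain HTTP.
import functools
import http.server
import os
import resource
import tempfile
import threading
import time
import urllib.request


def cpu_fraction_of_download(size_bytes=10 * 1024 * 1024, chunk=64 * 1024):
    """Download size_bytes over loopback HTTP; return CPU-seconds / wall-second.

    Note: RUSAGE_SELF covers this whole process, so the number also
    includes the in-process HTTP server's work - it overstates what a
    standalone client would use.
    """
    with tempfile.TemporaryDirectory() as tmpdir:
        # Create a random payload to serve.
        with open(os.path.join(tmpdir, "payload.bin"), "wb") as f:
            f.write(os.urandom(size_bytes))

        handler = functools.partial(
            http.server.SimpleHTTPRequestHandler, directory=tmpdir)
        server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), handler)
        threading.Thread(target=server.serve_forever, daemon=True).start()
        port = server.server_address[1]

        before = resource.getrusage(resource.RUSAGE_SELF)
        t0 = time.monotonic()
        total = 0
        with urllib.request.urlopen(
                f"http://127.0.0.1:{port}/payload.bin") as resp:
            while True:
                data = resp.read(chunk)
                if not data:
                    break
                total += len(data)
        wall = time.monotonic() - t0
        after = resource.getrusage(resource.RUSAGE_SELF)
        server.shutdown()

        assert total == size_bytes
        cpu = (after.ru_utime - before.ru_utime) + \
              (after.ru_stime - before.ru_stime)
        return cpu / wall if wall > 0 else 0.0


if __name__ == "__main__":
    print(f"client CPU per wall second: {cpu_fraction_of_download():.2f}")
```

For the actual lftp process, standard tools give the same number without any code: for example `/usr/bin/time -v lftp ...` (user/sys vs elapsed time) or `pidstat -p <pid> 1` while a transfer runs, and `perf top -p <pid>` to see which functions are burning the cycles.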