Re: scp xfer rate woes
- yeah... i assumed as much about the bottleneck in ssh/scp being the
computation of the stream cipher...
oh well... :) really the only reason i'm so used to using scp i think
is because it's so convenient when you've already got public-key auth
set up for ssh-ing around...
--- In firstname.lastname@example.org, "Laurent Gilson" <pumpkin@...> wrote:
> If you want fast transfers use ftp. It's not encrypted and it
> is super-simple. The samba-protocol will already have an impact (not
> because of CPU-power, but because the protocol just sucks big time).
true... i've been using ftp so far and it works great. :) and just
when i thought i was rid of that damn command line and lpwd, lcd, lls ;)
as for samba, i've got very little use for it... i have a linux server
with ~650gb of storage on my lan, which i regularly access from
windows, and frankly samba is so dodgy that i've never really bothered
using it... i generally rely on scp/sftp for uploads, and have apache
indexing the shared disks for downloading files to windows.
> > does anyone have a solution that works for you as far as getting
> > ssh/scp running smoothly?
> ssh uses a cipher called 3DES as default. It also supports DES (3x
> faster but insecure) and blowfish (harder to break than 3DES but usually
> cheaper to compute). So try scp -c des ... or scp -c blowfish, depending on
> your needs. It may not work, depending on the setup of your ssh/ssl-libs.
> Other ciphers may be available, check the sources on both ends of the link.
thanks for the tip here... i'm going to try this out and see if i get
any significant performance gains... as you said though, for the most
part i'm resigned to living with plain-old-ftp :)
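the quickest way to see whether the cipher is really the bottleneck is to time the same transfer with each one. a rough sketch, assuming a host named `slug` that you already have key-auth set up for (the hostname and file sizes are just placeholders, and exact cipher names vary by ssh version — protocol 2 builds of OpenSSH spell them `blowfish-cbc`, `arcfour`, etc.):

```shell
# make a throwaway test file so disk caching doesn't skew results
dd if=/dev/zero of=/tmp/testfile bs=1M count=50

# default cipher (3des on older setups)
time scp /tmp/testfile slug:/tmp/

# blowfish is usually much cheaper on a slow CPU
time scp -c blowfish /tmp/testfile slug:/tmp/

# arcfour is cheaper still, if your build allows it
time scp -c arcfour /tmp/testfile slug:/tmp/
```

if the blowfish/arcfour runs aren't noticeably faster, the bottleneck is probably elsewhere (disk, network, or the ssh protocol overhead itself).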
as for more transparent network storage connectivity, i use NFS a
lot... which is one thing that i've been having mixed results with so
far on the slug. my issues so far seem to be with how the NFS-server
in the kernel on the slug buffers writes and then flushes them all at
once when the queue fills, which when copying a large file over NFS to
the slug has the effect of causing a large segment of the file to
transfer at very high speed, followed by a long hang while it is then
all written to disk. the average transfer rate over the entire copy
operation is around where local disk access and ftp, etc. peaks out,
but i haven't found a way to reduce the threshold to a more reasonable
level for interactive activities... any ideas?
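one thing that might help, assuming the stall really is the kernel's dirty-page writeback firing all at once: lowering the dirty thresholds so writes get flushed earlier and in smaller batches. these are the stock 2.6 vm sysctls (on a 2.4-era kernel the equivalent knobs live under bdflush instead, so this may not apply to your slug's kernel), and the values below are illustrative, not recommendations:

```shell
# start synchronous flushing once dirty pages hit 10% of RAM
sysctl -w vm.dirty_ratio=10
# wake background writeback much earlier, at 2%
sysctl -w vm.dirty_background_ratio=2

# on the client side, a smaller NFS write size can also smooth out the
# bursts, e.g. mount the slug's export with wsize=8192:
#   mount -o rw,wsize=8192 slug:/share /mnt/slug
```

smaller wsize costs some peak throughput but keeps the transfer rate steadier, which sounds closer to what you want for interactive use.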