Low throughput
Posted: Mon May 02, 2011 7:17 pm
by thunderman
Hello all,
I established a VPN tunnel between two Linux hosts, ran iperf, and measured a throughput of about 50 Mbps. However, the cross-compiled version of OpenVPN running on the router only gives about 10 Mbps. Is something wrong with the cross-compilation? What throughput numbers are typical?
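For reference, the measurement was done roughly like this (the tunnel address below is a placeholder, not my real one):
Code: Select all
# on the first host, listen inside the tunnel (10.8.0.1 is a placeholder)
iperf -s
# on the second host, measure TCP throughput through the tunnel for 30 s
iperf -c 10.8.0.1 -t 30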
Thanks in advance..
Re: Low throughput
Posted: Mon May 02, 2011 7:37 pm
by krzee
I assume your cross-compiled version is for an embedded device with small specs...
Check whether the embedded device shows high CPU usage on the core that openvpn runs on...
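For example, something along these lines on the device while iperf is running (busybox tool options vary, so adjust as needed):
Code: Select all
# watch per-process CPU usage while traffic flows through the tunnel
top
# or take a one-off snapshot of openvpn's CPU share
ps aux | grep '[o]penvpn'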
Re: Low throughput
Posted: Tue May 03, 2011 6:18 am
by thunderman
krzee wrote:I assume your cross-compiled version is for an embedded device with small specs...
Yes correct.
krzee wrote:Check whether the embedded device shows high CPU usage on the core that openvpn runs on...
I think it's single core. When I checked the CPU usage, openvpn wasn't taking much of the CPU. Please let me know if I'm not being clear or if you need more info.
Thanks
Re: Low throughput
Posted: Tue May 03, 2011 6:30 am
by janjust
Please provide some specs on this embedded device - is it a dd-wrt based box? Single core or dual core does not matter; clock speed does.
Also, try running the tunnel without encryption on both ends.
If this improves performance a lot, then you know you are bound by encryption speed. If it does not, then something else is going on; without hardware specs and configuration files it is impossible to tell, however.
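For example, on both ends ('your.conf' is a placeholder for your existing config; testing only, since this disables all crypto):
Code: Select all
# disable encryption and HMAC authentication on both sides (test only!)
openvpn --config your.conf --cipher none --auth none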
Re: Low throughput
Posted: Tue May 03, 2011 8:23 am
by thunderman
janjust wrote:Also, try running the tunnel without encryption on both ends
This did not improve the throughput much: it used to be 10 Mbps, now it's 13-14 Mbps.
Also, the CPU usage of openvpn while running iperf goes up to 95%. I don't understand why the throughput is so low.
Here are my specs.
cat /proc/cpuinfo
Code: Select all
system type : xxxx
processor : 0
cpu model : xXXx
BogoMIPS : 600.36
wait instruction : yes
microsecond timers : yes
tlb_entries : 64
extra interrupt vector : yes
hardware watchpoint : yes
ASEs implemented :
VCED exceptions : not available
VCEI exceptions : not available
cat /proc/meminfo
Code: Select all
MemTotal: 117040 kB
MemFree: 23468 kB
Buffers: 8032 kB
Cached: 30880 kB
SwapCached: 0 kB
Active: 27764 kB
Inactive: 24460 kB
SwapTotal: 0 kB
SwapFree: 0 kB
Dirty: 0 kB
Writeback: 0 kB
AnonPages: 13316 kB
Mapped: 6228 kB
Slab: 36108 kB
SReclaimable: 4112 kB
SUnreclaim: 31996 kB
PageTables: 1064 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
CommitLimit: 58520 kB
Committed_AS: 42668 kB
VmallocTotal: 1073741824 kB
VmallocUsed: 3628 kB
VmallocChunk: 1073737376 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
Hugepagesize: 2048 kB
So is it basically down to the hardware? OpenVPN uses a tun/tap device; since that's not a real interface, does it affect the transfer rate?
Thanks
Re: Low throughput
Posted: Tue May 03, 2011 8:46 am
by janjust
600 BogoMIPS suggests a CPU running somewhere between 200 and 300 MHz.
95% CPU usage during 'iperf' suggests that the CPU is the bottleneck; try compiling and running simpletun (http://www.cis.syr.edu/~wedu/seed/Labs/ ... impletun.c; see http://backreference.org/2010/03/26/tun ... -tutorial/ for more details).
This is a very basic tun application; I'm curious what the performance will be when you run iperf over that - my guess is that even with the most basic application you will see a huge performance drop.
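A rough outline of such a test (the toolchain prefix and all addresses are placeholders, and simpletun's flags are as described in the tutorial above - double-check them against the source):
Code: Select all
# cross-compile simpletun for the target (toolchain prefix is a placeholder)
mips-linux-gnu-gcc -O2 -o simpletun simpletun.c
# on the router: create tun0 and listen for the peer
./simpletun -i tun0 -s &
ifconfig tun0 10.0.1.1 netmask 255.255.255.0 up
# on the other host: connect to the router's real address (placeholder)
./simpletun -i tun0 -c 192.168.1.10 &
ifconfig tun0 10.0.1.2 netmask 255.255.255.0 up
# then run iperf between 10.0.1.1 and 10.0.1.2 as before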
Re: Low throughput
Posted: Wed May 04, 2011 10:29 am
by thunderman
Thank you for the reply..
I measured the throughput (running iperf) over the "simpletun" tunnel, but unfortunately the results are not encouraging: the maximum throughput I got was 1 Mbps, and simpletun is not using more than 5-6% of the CPU.
I think the following details might be useful.
Here is the shared-library info for both the normal version (my Fedora OpenVPN, which gives around 60 Mbps with iperf) and the cross-compiled version:
Fedora:
Code: Select all
[root@localhost keys]# ldd /usr/local/sbin/openvpn
linux-gate.so.1 => (0x00dd3000)
libselinux.so.1 => /lib/libselinux.so.1 (0x003d3000)
libpkcs11-helper.so.1 => /usr/lib/libpkcs11-helper.so.1 (0x003f3000)
libssl.so.10 => /usr/lib/libssl.so.10 (0x00c75000)
libcrypto.so.10 => /usr/lib/libcrypto.so.10 (0x009b7000)
liblzo2.so.2 => /usr/lib/liblzo2.so.2 (0x00387000)
libdl.so.2 => /lib/libdl.so.2 (0x00363000)
libpthread.so.0 => /lib/libpthread.so.0 (0x0036a000)
libc.so.6 => /lib/libc.so.6 (0x001d5000)
/lib/ld-linux.so.2 (0x001af000)
libz.so.1 => /lib/libz.so.1 (0x003b3000)
libgssapi_krb5.so.2 => /lib/libgssapi_krb5.so.2 (0x00c45000)
libkrb5.so.3 => /lib/libkrb5.so.3 (0x00b7f000)
libcom_err.so.2 => /lib/libcom_err.so.2 (0x00b48000)
libk5crypto.so.3 => /lib/libk5crypto.so.3 (0x00b4d000)
libresolv.so.2 => /lib/libresolv.so.2 (0x004f5000)
libkrb5support.so.0 => /lib/libkrb5support.so.0 (0x00c3a000)
libkeyutils.so.1 => /lib/libkeyutils.so.1 (0x00b7a000)
Cross compiled (via eu-readelf):
Code: Select all
NEEDED Shared library: [libcrypto.so]
NEEDED Shared library: [libssl.so]
NEEDED Shared library: [libdl.so.2]
NEEDED Shared library: [librt.so.1]
NEEDED Shared library: [libc.so.6]
I can see the Kerberos part is missing in the cross-compiled version, but it is dynamically linked against the OpenSSL .so's.
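For completeness, the NEEDED entries above were obtained with something like:
Code: Select all
# print the dynamic section (NEEDED entries) of the cross-compiled binary
eu-readelf -d openvpn | grep NEEDED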
Thanks
Re: Low throughput
Posted: Wed May 04, 2011 1:14 pm
by janjust
Nah, the 'ldd' vs 'readelf' output does not show much; on Fedora, OpenSSL is linked against libkrb5, which causes the extra dependencies to show up.
I'm surprised about the low throughput when turning off all encryption (i.e. '--auth none --cipher none'); that needs to be fixed first before you can choose a proper OpenSSL cipher.
Can you try adding
to both client and server config to see if that does anything to throughput?
Re: Low throughput
Posted: Fri May 06, 2011 8:41 am
by thunderman
Sorry for the late reply.
I've tried those, but the throughput dropped. I also tried various values and combinations of --fragment, --tun-mtu, --mssfix, --fast-io, etc., but nothing increased the throughput. The values reported earlier are the best I could get.
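For reference, such a tuning attempt combines roughly like this (the values are illustrative examples, not the exact ones I used):
Code: Select all
# illustrative MTU/fragment tuning attempt - example values, not a recommendation
openvpn --config your.conf --tun-mtu 1500 --fragment 1300 --mssfix 1300 --fast-io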
Thanks,
Mohan