Beleggrodion
2012-06-18 09:08:49 UTC
Hi there, for the last three weeks we have had very poor performance on one of
our KVM servers with ZFS data storage.
The server is a compute module in an Intel Modular Server. The disks are in
two storage pools, each RAID 5 with a hot spare: one pool with SAS disks and
one pool with SATA disks.
Here are some current values from different tools:
=== snip top ===
top - 10:50:07 up 9:26, 3 users, load average: 10.18, 12.04, 8.71
Tasks: 595 total, 1 running, 594 sleeping, 0 stopped, 0 zombie
Cpu0  :  7.5%us,  5.2%sy,  0.0%ni, 60.0%id, 27.2%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu1  : 23.4%us,  4.2%sy,  0.0%ni, 72.1%id,  0.0%wa,  0.0%hi,  0.3%si,  0.0%st
Cpu2  : 20.4%us,  5.3%sy,  0.0%ni, 29.8%id, 44.5%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu3  :  8.9%us,  4.1%sy,  0.0%ni, 87.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu4  : 14.3%us,  5.1%sy,  0.0%ni, 80.6%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu5  :  4.9%us,  3.0%sy,  0.0%ni, 75.7%id, 16.1%wa,  0.0%hi,  0.3%si,  0.0%st
Cpu6  :  5.1%us,  4.1%sy,  0.0%ni, 62.4%id, 28.3%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu7  :  6.7%us,  2.0%sy,  0.0%ni, 38.7%id, 52.5%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu8  :  1.9%us,  3.6%sy,  0.0%ni, 94.5%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu9  :  4.5%us,  1.6%sy,  0.0%ni, 93.8%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu10 :  1.6%us,  4.5%sy,  0.0%ni, 93.9%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu11 :  0.3%us,  1.0%sy,  0.0%ni, 98.6%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu12 :  4.2%us,  2.6%sy,  0.0%ni, 81.4%id, 11.4%wa,  0.0%hi,  0.3%si,  0.0%st
Cpu13 :  9.2%us,  3.6%sy,  0.0%ni, 87.1%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu14 :  3.7%us,  1.7%sy,  0.0%ni, 91.9%id,  2.7%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu15 :  2.6%us,  2.0%sy,  0.0%ni, 95.1%id,  0.3%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu16 :  3.0%us,  2.6%sy,  0.0%ni, 94.4%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu17 :  8.8%us,  2.6%sy,  0.0%ni, 88.6%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu18 :  5.5%us,  2.6%sy,  0.0%ni, 92.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu19 :  1.4%us,  1.4%sy,  0.0%ni, 96.2%id,  1.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu20 :  1.7%us,  2.0%sy,  0.0%ni, 95.0%id,  1.3%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu21 :  1.3%us,  1.6%sy,  0.0%ni, 97.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu22 : 71.2%us,  0.6%sy,  0.0%ni, 28.2%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu23 :  0.3%us,  0.6%sy,  0.0%ni, 99.1%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Mem: 74182208k total, 50949596k used, 23232612k free, 34500k buffers
Swap: 12582904k total, 0k used, 12582904k free, 15671956k cached
=== snip iostat ===
Linux 2.6.32-220.17.1.el6.x86_64 (lin-kvm3.4s-zg.intra)
18.06.2012 _x86_64_ (24 CPU)
avg-cpu: %user %nice %system %iowait %steal %idle
3.68 0.00 3.19 4.44 0.00 88.70
Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn
sda 1.56 27.52 23.19 935830 788312
sdc 111.41 15127.15 4017.33 514331292 136591597
sdb 0.03 0.54 2.71 18410 92024
sdd 12.63 1102.78 616.74 37495293 20969352
=== vmstat ===
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu-----
 r  b   swpd     free   buff    cache  si  so  bi  bo  in  cs us sy id wa st
 3  0      0 23175844  34508 15727028   0   0 333  96  50  36  4  3 89  4  0
=== iotop ===
TID PRIO USER DISK READ DISK WRITE SWAPIN IO COMMAND
3134 be/4 root 0.00 B/s 0.00 B/s 0.00 % 63.42 % zfs-fuse -p
/var/run/zfs-fuse.pid
28278 be/4 qemu 0.00 B/s 0.00 B/s 0.00 % 17.90 % qemu-kvm -S -M
rhel5.4.0 -enable-kvm -m 8192 -smp 4,sockets=4,cores=1,threads=1 -name
cus-ts1 -uuid 35fa8fe8-e2a7-11df-b4c9-001cc0367d48 -nodefconfig -nodefaults
-chardev
socket,id=charmonitor,path=/var/lib/libvirt/qemu/cus-ts1.monitor,server,nowait
-mon chardev=charmonitor,id=monitor,mode=control -rtc base=localtime
-no-kvm-pit-reinjection -no-shutdown -drive
file=/sasDATA1_srv2_vm/cunds/cus-ts1_disk1.raw,if=none,id=drive-ide0-0-0,format=raw,cache=writethrough
-device
ide-drive,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0,bootindex=1
-drive
file=/sataDATA3_srv2_vm/cunds/cus-ts1_bkphdd1.raw,if=none,id=drive-ide0-0-1,format=raw,cache=writethrough
-device ide-drive,bus=ide.0,unit=1,drive=drive-ide0-0-1,id=ide0-0-1 -drive
file=/mnt/iso/TwixTel46_DVD.iso,if=none,media=cdrom,id=drive-ide0-1-0,readonly=on,format=raw
-device ide-drive,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -netdev
tap,fd=27,id=hostnet0 -device
rtl8139,netdev=hostnet0,id=net0,mac=54:52:00:6b:b8:a8,bus=pci.0,addr=0x3
-chardev pty,id=charserial0 -device
isa-serial,chardev=charserial0,id=serial0 -usb -device usb-tablet,id=input0
-vnc 127.0.0.1:6 -k de-ch -vga cirrus -device
virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x4
23298 be/4 qemu 47.16 K/s 0.00 B/s 0.00 % 8.29 % qemu-kvm -S -M
rhel5.4.0 -enable-kvm -m 8192 -smp 4,sockets=4,cores=1,threads=1 -name
cus-ts1 -uuid 35fa8fe8-e2a7-11df-b4c9-001cc0367d48 -nodefconfig -nodefaults
-chardev
socket,id=charmonitor,path=/var/lib/libvirt/qemu/cus-ts1.monitor,server,nowait
-mon chardev=charmonitor,id=monitor,mode=control -rtc base=localtime
-no-kvm-pit-reinjection -no-shutdown -drive
file=/sasDATA1_srv2_vm/cunds/cus-ts1_disk1.raw,if=none,id=drive-ide0-0-0,format=raw,cache=writethrough
-device
ide-drive,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0,bootindex=1
-drive
file=/sataDATA3_srv2_vm/cunds/cus-ts1_bkphdd1.raw,if=none,id=drive-ide0-0-1,format=raw,cache=writethrough
-device ide-drive,bus=ide.0,unit=1,drive=drive-ide0-0-1,id=ide0-0-1 -drive
file=/mnt/iso/TwixTel46_DVD.iso,if=none,media=cdrom,id=drive-ide0-1-0,readonly=on,format=raw
-device ide-drive,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -netdev
tap,fd=27,id=hostnet0 -device
rtl8139,netdev=hostnet0,id=net0,mac=54:52:00:6b:b8:a8,bus=pci.0,addr=0x3
-chardev pty,id=charserial0 -device
isa-serial,chardev=charserial0,id=serial0 -usb -device usb-tablet,id=input0
-vnc 127.0.0.1:6 -k de-ch -vga cirrus -device
virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x4
3000 be/4 root 1509.20 K/s 0.00 B/s 0.00 % 0.00 % zfs-fuse -p
/var/run/zfs-fuse.pid
3002 be/4 root 2.21 M/s 0.00 B/s 0.00 % 0.00 % zfs-fuse -p
/var/run/zfs-fuse.pid
3005 be/4 root 1509.20 K/s 0.00 B/s 0.00 % 0.00 % zfs-fuse -p
/var/run/zfs-fuse.pid
3011 be/4 root 1509.20 K/s 0.00 B/s 0.00 % 0.00 % zfs-fuse -p
/var/run/zfs-fuse.pid
3018 be/4 root 2.21 M/s 0.00 B/s 0.00 % 0.00 % zfs-fuse -p
/var/run/zfs-fuse.pid
3029 be/4 root 1579.94 K/s 0.00 B/s 0.00 % 0.00 % zfs-fuse -p
/var/run/zfs-fuse.pid
3031 be/4 root 1509.20 K/s 0.00 B/s 0.00 % 0.00 % zfs-fuse -p
/var/run/zfs-fuse.pid
3032 be/4 root 1579.94 K/s 0.00 B/s 0.00 % 0.00 % zfs-fuse -p
/var/run/zfs-fuse.pid
3038 be/4 root 1509.20 K/s 0.00 B/s 0.00 % 0.00 % zfs-fuse -p
/var/run/zfs-fuse.pid
3150 be/4 root 0.00 B/s 1509.20 K/s 0.00 % 0.00 % zfs-fuse -p
/var/run/zfs-fuse.pid
3155 be/4 root 0.00 B/s 141.49 K/s 0.00 % 0.00 % zfs-fuse -p
/var/run/zfs-fuse.pid
3160 be/4 root 0.00 B/s 2027.99 K/s 0.00 % 0.00 % zfs-fuse -p
/var/run/zfs-fuse.pid
3162 be/4 root 0.00 B/s 188.65 K/s 0.00 % 0.00 % zfs-fuse -p
/var/run/zfs-fuse.pid
3164 be/4 root 0.00 B/s 1037.58 K/s 0.00 % 0.00 % zfs-fuse -p
/var/run/zfs-fuse.pid
3165 be/4 root 0.00 B/s 70.74 K/s 0.00 % 0.00 % zfs-fuse -p
/var/run/zfs-fuse.pid
3166 be/4 root 0.00 B/s 94.33 K/s 0.00 % 0.00 % zfs-fuse -p
/var/run/zfs-fuse.pid
3169 be/4 root 0.00 B/s 1167.27 K/s 0.00 % 0.00 % zfs-fuse -p
/var/run/zfs-fuse.pid
3173 be/4 root 0.00 B/s 2.19 M/s 0.00 % 0.00 % zfs-fuse -p
/var/run/zfs-fuse.pid
3179 be/4 root 0.00 B/s 825.34 K/s 0.00 % 0.00 % zfs-fuse -p
/var/run/zfs-fuse.pid
3185 be/4 root 0.00 B/s 2.96 M/s 0.00 % 0.00 % zfs-fuse -p
/var/run/zfs-fuse.pid
3191 be/4 root 0.00 B/s 3.20 M/s 0.00 % 0.00 % zfs-fuse -p
/var/run/zfs-fuse.pid
3197 be/4 root 0.00 B/s 3.12 M/s 0.00 % 0.00 % zfs-fuse -p
/var/run/zfs-fuse.pid
=== zfs list ===
NAME USED AVAIL REFER MOUNTPOINT
sasDATA1_srv2_vm 1.21T 1.19T 980G /sasDATA1_srv2_vm
sataDATA3_srv2_vm 268G 124G 187G /sataDATA3_srv2_vm
Some dd stats:
SAS:
524288000 bytes (524 MB) copied, 26.9865 s, 19.4 MB/s
524288000 bytes (524 MB) copied, 14.0033 s, 37.4 MB/s
524288000 bytes (524 MB) copied, 10.3777 s, 50.5 MB/s
524288000 bytes (524 MB) copied, 18.1738 s, 28.8 MB/s
SATA:
524288000 bytes (524 MB) copied, 48.8307 s, 10.7 MB/s
524288000 bytes (524 MB) copied, 9.0975 s, 57.6 MB/s
524288000 bytes (524 MB) copied, 11.8184 s, 44.4 MB/s
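For reference, the figures above came from a plain dd write test; the exact
command was not posted, so the block size, flags, and test path below are
assumptions (a file on the pool mountpoint, e.g. under /sasDATA1_srv2_vm,
would be used to measure pool speed rather than /tmp):

```shell
# Hypothetical reconstruction of the dd benchmark (bs/count chosen to match
# the 524288000-byte runs above). conv=fdatasync forces a flush to disk
# before dd reports throughput, so the page cache doesn't inflate the number.
TARGET=/tmp/ddtest.bin   # replace with a file on the pool under test
dd if=/dev/zero of="$TARGET" bs=1M count=500 conv=fdatasync
rm -f "$TARGET"
```

Running the same command several times in a row, as above, shows how much the
throughput fluctuates under the current load.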
Currently we don't know exactly what caused the performance to become so bad.
We have also stopped some of the virtual machines, but performance hasn't
improved. One of the virtual systems is a terminal server for four people, and
working on it is very slow, at times completely impossible.
Does anyone have an idea how we can optimize the performance? We have two
other systems with roughly 80% the same configuration. On another compute
module in the same modular server the performance is a little better, but
still not good. On another modular server with a single compute module, which
is the backup for all the other systems (via ZFS snapshots), the performance
is over 1000 MB/s.
--
To post to this group, send email to zfs-fuse-/***@public.gmane.org
To visit our Web site, click on http://zfs-fuse.net/