Discussion: slow performance
Beleggrodion
2012-06-18 09:08:49 UTC
Hi there. For three weeks now we have had very poor performance on one of our
KVM servers with ZFS data storage.

The server is a compute module in an Intel Modular Server. The disks are in
two storage pools, each RAID 5 with a hot spare: one pool with SAS disks and
one pool with SATA disks.

Here are some current values from different tools:

=== snip top ===
top - 10:50:07 up 9:26, 3 users, load average: 10.18, 12.04, 8.71
Tasks: 595 total, 1 running, 594 sleeping, 0 stopped, 0 zombie
Cpu0  :  7.5%us,  5.2%sy,  0.0%ni, 60.0%id, 27.2%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu1  : 23.4%us,  4.2%sy,  0.0%ni, 72.1%id,  0.0%wa,  0.0%hi,  0.3%si,  0.0%st
Cpu2  : 20.4%us,  5.3%sy,  0.0%ni, 29.8%id, 44.5%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu3  :  8.9%us,  4.1%sy,  0.0%ni, 87.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu4  : 14.3%us,  5.1%sy,  0.0%ni, 80.6%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu5  :  4.9%us,  3.0%sy,  0.0%ni, 75.7%id, 16.1%wa,  0.0%hi,  0.3%si,  0.0%st
Cpu6  :  5.1%us,  4.1%sy,  0.0%ni, 62.4%id, 28.3%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu7  :  6.7%us,  2.0%sy,  0.0%ni, 38.7%id, 52.5%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu8  :  1.9%us,  3.6%sy,  0.0%ni, 94.5%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu9  :  4.5%us,  1.6%sy,  0.0%ni, 93.8%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu10 :  1.6%us,  4.5%sy,  0.0%ni, 93.9%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu11 :  0.3%us,  1.0%sy,  0.0%ni, 98.6%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu12 :  4.2%us,  2.6%sy,  0.0%ni, 81.4%id, 11.4%wa,  0.0%hi,  0.3%si,  0.0%st
Cpu13 :  9.2%us,  3.6%sy,  0.0%ni, 87.1%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu14 :  3.7%us,  1.7%sy,  0.0%ni, 91.9%id,  2.7%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu15 :  2.6%us,  2.0%sy,  0.0%ni, 95.1%id,  0.3%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu16 :  3.0%us,  2.6%sy,  0.0%ni, 94.4%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu17 :  8.8%us,  2.6%sy,  0.0%ni, 88.6%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu18 :  5.5%us,  2.6%sy,  0.0%ni, 92.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu19 :  1.4%us,  1.4%sy,  0.0%ni, 96.2%id,  1.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu20 :  1.7%us,  2.0%sy,  0.0%ni, 95.0%id,  1.3%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu21 :  1.3%us,  1.6%sy,  0.0%ni, 97.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu22 : 71.2%us,  0.6%sy,  0.0%ni, 28.2%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu23 :  0.3%us,  0.6%sy,  0.0%ni, 99.1%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Mem: 74182208k total, 50949596k used, 23232612k free, 34500k buffers
Swap: 12582904k total, 0k used, 12582904k free, 15671956k cached

=== snip iostat ===
Linux 2.6.32-220.17.1.el6.x86_64 (lin-kvm3.4s-zg.intra)
18.06.2012 _x86_64_ (24 CPU)

avg-cpu: %user %nice %system %iowait %steal %idle
3.68 0.00 3.19 4.44 0.00 88.70

Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn
sda 1.56 27.52 23.19 935830 788312
sdc 111.41 15127.15 4017.33 514331292 136591597
sdb 0.03 0.54 2.71 18410 92024
sdd 12.63 1102.78 616.74 37495293 20969352

=== vmstat ====
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu-----
 r  b   swpd     free   buff    cache   si   so    bi    bo   in   cs us sy id wa st
 3  0      0 23175844  34508 15727028    0    0   333    96   50   36  4  3 89  4  0

=== iotop ===
TID PRIO USER DISK READ DISK WRITE SWAPIN IO COMMAND
3134 be/4 root 0.00 B/s 0.00 B/s 0.00 % 63.42 % zfs-fuse -p
/var/run/zfs-fuse.pid
28278 be/4 qemu 0.00 B/s 0.00 B/s 0.00 % 17.90 % qemu-kvm -S -M
rhel5.4.0 -enable-kvm -m 8192 -smp 4,sockets=4,cores=1,threads=1 -name
cus-ts1 -uuid 35fa8fe8-e2a7-11df-b4c9-001cc0367d48 -nodefconfig -nodefaults
-chardev
socket,id=charmonitor,path=/var/lib/libvirt/qemu/cus-ts1.monitor,server,nowait
-mon chardev=charmonitor,id=monitor,mode=control -rtc base=localtime
-no-kvm-pit-reinjection -no-shutdown -drive
file=/sasDATA1_srv2_vm/cunds/cus-ts1_disk1.raw,if=none,id=drive-ide0-0-0,format=raw,cache=writethrough
-device
ide-drive,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0,bootindex=1
-drive
file=/sataDATA3_srv2_vm/cunds/cus-ts1_bkphdd1.raw,if=none,id=drive-ide0-0-1,format=raw,cache=writethrough
-device ide-drive,bus=ide.0,unit=1,drive=drive-ide0-0-1,id=ide0-0-1 -drive
file=/mnt/iso/TwixTel46_DVD.iso,if=none,media=cdrom,id=drive-ide0-1-0,readonly=on,format=raw
-device ide-drive,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -netdev
tap,fd=27,id=hostnet0 -device
rtl8139,netdev=hostnet0,id=net0,mac=54:52:00:6b:b8:a8,bus=pci.0,addr=0x3
-chardev pty,id=charserial0 -device
isa-serial,chardev=charserial0,id=serial0 -usb -device usb-tablet,id=input0
-vnc 127.0.0.1:6 -k de-ch -vga cirrus -device
virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x4
23298 be/4 qemu 47.16 K/s 0.00 B/s 0.00 % 8.29 % qemu-kvm -S -M
rhel5.4.0 -enable-kvm -m 8192 -smp 4,sockets=4,cores=1,threads=1 -name
cus-ts1 -uuid 35fa8fe8-e2a7-11df-b4c9-001cc0367d48 -nodefconfig -nodefaults
-chardev
socket,id=charmonitor,path=/var/lib/libvirt/qemu/cus-ts1.monitor,server,nowait
-mon chardev=charmonitor,id=monitor,mode=control -rtc base=localtime
-no-kvm-pit-reinjection -no-shutdown -drive
file=/sasDATA1_srv2_vm/cunds/cus-ts1_disk1.raw,if=none,id=drive-ide0-0-0,format=raw,cache=writethrough
-device
ide-drive,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0,bootindex=1
-drive
file=/sataDATA3_srv2_vm/cunds/cus-ts1_bkphdd1.raw,if=none,id=drive-ide0-0-1,format=raw,cache=writethrough
-device ide-drive,bus=ide.0,unit=1,drive=drive-ide0-0-1,id=ide0-0-1 -drive
file=/mnt/iso/TwixTel46_DVD.iso,if=none,media=cdrom,id=drive-ide0-1-0,readonly=on,format=raw
-device ide-drive,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -netdev
tap,fd=27,id=hostnet0 -device
rtl8139,netdev=hostnet0,id=net0,mac=54:52:00:6b:b8:a8,bus=pci.0,addr=0x3
-chardev pty,id=charserial0 -device
isa-serial,chardev=charserial0,id=serial0 -usb -device usb-tablet,id=input0
-vnc 127.0.0.1:6 -k de-ch -vga cirrus -device
virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x4
3000 be/4 root 1509.20 K/s 0.00 B/s 0.00 % 0.00 % zfs-fuse -p
/var/run/zfs-fuse.pid
3002 be/4 root 2.21 M/s 0.00 B/s 0.00 % 0.00 % zfs-fuse -p
/var/run/zfs-fuse.pid
3005 be/4 root 1509.20 K/s 0.00 B/s 0.00 % 0.00 % zfs-fuse -p
/var/run/zfs-fuse.pid
3011 be/4 root 1509.20 K/s 0.00 B/s 0.00 % 0.00 % zfs-fuse -p
/var/run/zfs-fuse.pid
3018 be/4 root 2.21 M/s 0.00 B/s 0.00 % 0.00 % zfs-fuse -p
/var/run/zfs-fuse.pid
3029 be/4 root 1579.94 K/s 0.00 B/s 0.00 % 0.00 % zfs-fuse -p
/var/run/zfs-fuse.pid
3031 be/4 root 1509.20 K/s 0.00 B/s 0.00 % 0.00 % zfs-fuse -p
/var/run/zfs-fuse.pid
3032 be/4 root 1579.94 K/s 0.00 B/s 0.00 % 0.00 % zfs-fuse -p
/var/run/zfs-fuse.pid
3038 be/4 root 1509.20 K/s 0.00 B/s 0.00 % 0.00 % zfs-fuse -p
/var/run/zfs-fuse.pid
3150 be/4 root 0.00 B/s 1509.20 K/s 0.00 % 0.00 % zfs-fuse -p
/var/run/zfs-fuse.pid
3155 be/4 root 0.00 B/s 141.49 K/s 0.00 % 0.00 % zfs-fuse -p
/var/run/zfs-fuse.pid
3160 be/4 root 0.00 B/s 2027.99 K/s 0.00 % 0.00 % zfs-fuse -p
/var/run/zfs-fuse.pid
3162 be/4 root 0.00 B/s 188.65 K/s 0.00 % 0.00 % zfs-fuse -p
/var/run/zfs-fuse.pid
3164 be/4 root 0.00 B/s 1037.58 K/s 0.00 % 0.00 % zfs-fuse -p
/var/run/zfs-fuse.pid
3165 be/4 root 0.00 B/s 70.74 K/s 0.00 % 0.00 % zfs-fuse -p
/var/run/zfs-fuse.pid
3166 be/4 root 0.00 B/s 94.33 K/s 0.00 % 0.00 % zfs-fuse -p
/var/run/zfs-fuse.pid
3169 be/4 root 0.00 B/s 1167.27 K/s 0.00 % 0.00 % zfs-fuse -p
/var/run/zfs-fuse.pid
3173 be/4 root 0.00 B/s 2.19 M/s 0.00 % 0.00 % zfs-fuse -p
/var/run/zfs-fuse.pid
3179 be/4 root 0.00 B/s 825.34 K/s 0.00 % 0.00 % zfs-fuse -p
/var/run/zfs-fuse.pid
3185 be/4 root 0.00 B/s 2.96 M/s 0.00 % 0.00 % zfs-fuse -p
/var/run/zfs-fuse.pid
3191 be/4 root 0.00 B/s 3.20 M/s 0.00 % 0.00 % zfs-fuse -p
/var/run/zfs-fuse.pid
3197 be/4 root 0.00 B/s 3.12 M/s 0.00 % 0.00 % zfs-fuse -p
/var/run/zfs-fuse.pid

=== zfs list ===
NAME USED AVAIL REFER MOUNTPOINT
sasDATA1_srv2_vm 1.21T 1.19T 980G /sasDATA1_srv2_vm
sataDATA3_srv2_vm 268G 124G 187G /sataDATA3_srv2_vm

Some DD stats:
SAS:
524288000 bytes (524 MB) copied, 26.9865 s, 19.4 MB/s
524288000 bytes (524 MB) copied, 14.0033 s, 37.4 MB/s
524288000 bytes (524 MB) copied, 10.3777 s, 50.5 MB/s
524288000 bytes (524 MB) copied, 18.1738 s, 28.8 MB/s

SATA:
524288000 bytes (524 MB) copied, 48.8307 s, 10.7 MB/s
524288000 bytes (524 MB) copied, 9.0975 s, 57.6 MB/s
524288000 bytes (524 MB) copied, 11.8184 s, 44.4 MB/s
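
For context, those figures are the summary line of a plain dd streaming write. A minimal sketch of such a test follows; the target path and size are assumptions, not the exact command we ran:

```shell
# Hypothetical streaming-write test of the kind summarized above:
# write zeros through the filesystem and let dd report the throughput.
# TARGET is an assumption -- point it at a file inside the pool to measure.
TARGET=${TARGET:-/tmp/ddtest.bin}

# conv=fsync forces the data to stable storage before dd reports,
# so cached writes don't inflate the MB/s figure.
dd if=/dev/zero of="$TARGET" bs=1M count=100 conv=fsync 2>&1 | tail -n 1

rm -f "$TARGET"
```

On the pools above, TARGET would live under /sasDATA1_srv2_vm or /sataDATA3_srv2_vm.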

Currently we don't know exactly what caused the performance to become so
bad. We also stopped some of the virtual machines, but the performance is no
better. One of the virtual machines is a terminal server for four people, and
working on it is very slow, at times impossible.

Does anyone have an idea how we can improve the performance? We have two
other systems with roughly 80% the same configuration. On another compute
module in the same modular server the performance is a little better, but
still not good. On another modular server with a single compute module,
which is the backup target for all the other systems (via zfs snapshots),
the performance is > 1000 MB/s.
--
To post to this group, send email to zfs-fuse-/***@public.gmane.org
To visit our Web site, click on http://zfs-fuse.net/
sgheeren
2012-06-18 21:05:19 UTC
First off:

1-a. What is the memory tuning (how much is there, and how much is being
used)?

1-b. Is a long-running operation in progress (think of zfs send/recv or a
scrub)? You can see this in

zpool status -v
and/or
zpool history -i

2. Are you using dedup?

3. What is the general fs configuration (number of mounts, presence of
clones; did you enable compression, case-insensitivity, altered
checksumming, quotas, or copies=n policies)? The easiest way to answer
all these questions at once is:

sudo zfs get -s local,received all
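
Taken together, the checklist above can be gathered in one pass. A sketch, under the assumption that the zfs-fuse userland tools are on PATH (it degrades to a notice elsewhere):

```shell
# Collect the diagnostics requested above in one pass (1-a, 1-b, 2, 3).
# Safe to run anywhere: it prints a notice when the ZFS tools are absent.
collect_zfs_diag() {
    if ! command -v zpool >/dev/null 2>&1; then
        echo "zpool/zfs tools not found; nothing to collect"
        return 0
    fi
    free -m                        # 1-a: total vs used memory
    zpool status -v                # 1-b: running scrub/resilver, errors
    zpool history -i | tail -n 20  # 1-b: recent long-running operations
    zfs get -s local,received all  # 2 and 3: dedup, compression, quota, copies
}

collect_zfs_diag
```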
Beleggrodion
2012-06-19 06:50:48 UTC
Hi there

Thanks for your answer.

1a) I think there should be enough memory:

free
             total       used       free     shared    buffers     cached
Mem:      74182208   66783048    7399160          0      30852   29991288
-/+ buffers/cache:   36760908   37421300
Swap:    12582904          0   12582904

vmstat

procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu-----
 r  b   swpd    free   buff    cache   si   so    bi    bo   in   cs us sy id wa st
 1  0      0 7396464  30852 29994664    0    0   224   103    8    6  4  3 90  3  0

1b) Not at the moment; see the information below:

zpool status -v
  pool: sasDATA1_srv2_vm
 state: ONLINE
 scrub: none requested
config:

        NAME               STATE     READ WRITE CKSUM
        sasDATA1_srv2_vm   ONLINE       0     0     0
          sdc              ONLINE       0     0     0

errors: No known data errors

  pool: sataDATA3_srv2_vm
 state: ONLINE
 scrub: none requested
config:

        NAME               STATE     READ WRITE CKSUM
        sataDATA3_srv2_vm  ONLINE       0     0     0
          sdd              ONLINE       0     0     0

errors: No known data errors

zpool history -i

2012-06-18.12:00:01 [internal snapshot txg:21093957] dataset = 164
2012-06-18.12:00:01 zfs snapshot ***@2012-06-18-10-00-01
2012-06-18.19:00:03 [internal snapshot txg:21252286] dataset = 167
2012-06-18.19:00:03 zfs snapshot ***@2012-06-18-17-00-02
2012-06-19.01:00:03 [internal snapshot txg:22007599] dataset = 171
2012-06-19.01:00:03 zfs snapshot ***@2012-06-18-23-00-02
2012-06-19.07:00:02 [internal snapshot txg:22020372] dataset = 173
2012-06-19.07:00:02 zfs snapshot ***@2012-06-19-05-00-02

2) No, we don't use dedup at the moment.

3) I'll try to reproduce the exact configuration and hope I don't forget
anything.

The mounts are as follows:
Filesystem          Size  Used Avail Use% Mounted on
/dev/sda3            38G   19G   17G  53% /
tmpfs                36G     0   36G   0% /dev/shm
/dev/sda1           194M  107M   78M  58% /boot
/dev/sdb1           100G   49G   52G  49% /mnt/iso
sasDATA1_srv2_vm    2.2T  981G  1.2T  45% /sasDATA1_srv2_vm
sataDATA3_srv2_vm   306G  188G  118G  62% /sataDATA3_srv2_vm

I think there are no clones on the system at the moment, and compression is
completely disabled. We don't use case-insensitivity, and we only use the
default ZFS checksumming. Quotas are disabled, and as far as I can see,
copies is not enabled either.

As for your command: on our CentOS installation it produces no output, but I
think the following is the output you wanted:

zfs get all sasDATA1_srv2_vm
NAME PROPERTY VALUE SOURCE
sasDATA1_srv2_vm type filesystem -
sasDATA1_srv2_vm creation Sat Mar 24 9:48 2012 -
sasDATA1_srv2_vm used 1.23T -
sasDATA1_srv2_vm available 1.17T -
sasDATA1_srv2_vm referenced 980G -
sasDATA1_srv2_vm compressratio 1.00x -
sasDATA1_srv2_vm mounted yes -
sasDATA1_srv2_vm quota none default
sasDATA1_srv2_vm reservation none default
sasDATA1_srv2_vm recordsize 128K default
sasDATA1_srv2_vm mountpoint /sasDATA1_srv2_vm default
sasDATA1_srv2_vm sharenfs off default
sasDATA1_srv2_vm checksum on default
sasDATA1_srv2_vm compression off default
sasDATA1_srv2_vm atime on default
sasDATA1_srv2_vm devices on default
sasDATA1_srv2_vm exec on default
sasDATA1_srv2_vm setuid on default
sasDATA1_srv2_vm readonly off default
sasDATA1_srv2_vm zoned off default
sasDATA1_srv2_vm snapdir hidden default
sasDATA1_srv2_vm aclmode groupmask default
sasDATA1_srv2_vm aclinherit restricted default
sasDATA1_srv2_vm canmount on default
sasDATA1_srv2_vm xattr on default
sasDATA1_srv2_vm copies 1 default
sasDATA1_srv2_vm version 4 -
sasDATA1_srv2_vm utf8only off -
sasDATA1_srv2_vm normalization none -
sasDATA1_srv2_vm casesensitivity sensitive -
sasDATA1_srv2_vm vscan off default
sasDATA1_srv2_vm nbmand off default
sasDATA1_srv2_vm sharesmb off default
sasDATA1_srv2_vm refquota none default
sasDATA1_srv2_vm refreservation none default
sasDATA1_srv2_vm primarycache all default
sasDATA1_srv2_vm secondarycache all default
sasDATA1_srv2_vm usedbysnapshots 276G -
sasDATA1_srv2_vm usedbydataset 980G -
sasDATA1_srv2_vm usedbychildren 211M -
sasDATA1_srv2_vm usedbyrefreservation 0 -
sasDATA1_srv2_vm logbias latency default
sasDATA1_srv2_vm dedup off default
sasDATA1_srv2_vm mlslabel off -

Greetings
Beleggrodion
Post by sgheeren
1-a. What is the memory tuning (how much is there and how much is being
used?)
1-b. is a long-running operation running (think of zfs send/recv or a
scrub. See this in
zpool status -v
and/or
zpool history -i
2. Are you using dedup?
3. What is the general fs configuration (amount of mounts, presence of
clones, did you enable compression, case-insensitivity, altered
checksumming, quota, or copies=n policies?). The easiest way to answer
all these questions at once is by doing
sudo zfs get all -slocal,received
Alexey Kurnosov
2012-06-18 23:35:02 UTC
Guys, sorry for interfering with a somewhat off-topic message, in my opinion (and my poor English), but I can't stay quiet.
zfs-fuse is not intended for use in high-load production systems, like any other FUSE filesystem.
First of all, it doesn't have access to the low-level device layer and consequently relies on assumptions and tricks to work
correctly. Second, FUSE will never reach the same order of performance, due to the constant context switching.
Frankly speaking, to me, using zfs-fuse as VPS storage is reckless.
If I vitally needed any ZFS features, I would look at zfsonlinux (I've heard rumors that some bold people run recent versions
in production) or at real Solaris ZFS as central storage (using iSCSI over 10G with multipath).
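
The context-switch cost is at least easy to observe: the kernel counts voluntary and involuntary switches per process in /proc/<pid>/status, so the zfs-fuse daemon can be compared against a similar workload on an in-kernel filesystem. A sketch, defaulting to the current shell's PID; on the affected host one would substitute `pgrep -o zfs-fuse` (an assumption about the process name):

```shell
# Print the context-switch counters for one process.
# PID defaults to this shell; on the KVM host you would use
# PID=$(pgrep -o zfs-fuse) instead.
PID=${PID:-$$}

grep -E '^(voluntary|nonvoluntary)_ctxt_switches' "/proc/$PID/status"
```

Sampling these counters a few seconds apart gives the switch rate, which is the cost being discussed here.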
sgheeren
2012-06-18 23:51:26 UTC
Permalink
Post by Alexey Kurnosov
Guys, sorry for interfering with a somewhat off-topic (IMO) message (and my crappy English), but I can't stay calm.
Oh yes you can, but this is the internet, so you figured you might as
well. And, you're welcome to it :)
Post by Alexey Kurnosov
zfs-fuse is not intended for use in high-load production systems, like any other FUSE FS.
I'd agree about not recommending zfs-fuse for production loads, but the
"high load" criterion seems to come from your imagination.
Post by Alexey Kurnosov
First of all, it doesn't have access to the low-level device layer, and consequently it uses some assumptions and tricks to work
correctly.
Please be specific. This resembles FUD.
Post by Alexey Kurnosov
Second, FUSE will never achieve performance of the same order, due to the constant context switching.
Context switching is a real cost. In practice I'm not sure whether
common load patterns would show it.
Post by Alexey Kurnosov
Frankly speaking, to me, using zfs-fuse as VPS storage is recklessness.
If I vitally needed any zfs features, I would look at zfsonlinux (I've heard rumors that some bold men use recent versions
in production) or at real Solaris zfs as central storage (using iSCSI over 10G with multipath).
I've used Solaris for my own fileserver. I've now switched to zfsonlinux
and it works fine. The only glitches I had were hangs with ftp shares,
which might be gone in a newer version.

Zfs-fuse is still working nicely in some small home offices I admin
(those are production systems, yes), mainly because of the built-in
redundancy, the error checking on commodity hardware, and the ability to
do very efficient incremental offsite backups (using zfs send/receive).
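For anyone unfamiliar with the pattern, those incremental offsite backups are just snapshot plus send/receive; a minimal sketch (the pool, dataset, snapshot, and host names here are made up for illustration):

```shell
# Sketch of an incremental offsite backup with zfs send/receive.
# "tank/vm", "backup/vm", and "backuphost" are hypothetical names.

# First, a full send of an initial snapshot:
zfs snapshot tank/vm@2012-06-18
zfs send tank/vm@2012-06-18 | ssh backuphost zfs receive backup/vm

# Later runs send only the delta between two snapshots (-i), so
# very little data crosses the wire:
zfs snapshot tank/vm@2012-06-19
zfs send -i tank/vm@2012-06-18 tank/vm@2012-06-19 | ssh backuphost zfs receive backup/vm
```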

Cheers,
Seth
--
To post to this group, send email to zfs-fuse-/***@public.gmane.org
Alexey Kurnosov
2012-06-19 11:56:49 UTC
Permalink
Post by sgheeren
Post by Alexey Kurnosov
Guys, sorry for interfering with a somewhat off-topic (IMO) message (and my crappy English), but I can't stay calm.
Oh yes you can, but this is the internet, so you figured you might
as well. And, you're welcome to it :)
Thanks for your approval :)
Post by sgheeren
Post by Alexey Kurnosov
zfs-fuse is not intended for use in high-load production systems, like any other FUSE FS.
I'd agree about not recommending zfs-fuse for production loads, but
the "high load" criterion seems to come from your imagination.
The hardware config is surely not SOHO, and my imagination tells me that a VPS with low disk load is a very rare case.
Should I call a psychiatrist? :)
Post by sgheeren
Post by Alexey Kurnosov
First of all, it doesn't have access to the low-level device layer, and consequently it uses some assumptions and tricks to work
correctly.
Please be specific. This resembles FUD.
Actually, I am an amateur here.
Read-ahead, NCQ queue depth, locks caused by IO waits, IO scheduling in general, correct buffer flushing, full involvement in memory
management (not just a humble brk syscall), and so on. Can all of this be handled by FUSE? If so, very impressive.
Post by sgheeren
Post by Alexey Kurnosov
Second, FUSE will never achieve performance of the same order, due to the constant context switching.
Context switching is a real cost. In practice I'm not sure whether
common load patterns would show it.
It will show under these conditions (or already does). I am sure.
Post by sgheeren
Post by Alexey Kurnosov
Frankly speaking, to me, using zfs-fuse as VPS storage is recklessness.
If I vitally needed any zfs features, I would look at zfsonlinux (I've heard rumors that some bold men use recent versions
in production) or at real Solaris zfs as central storage (using iSCSI over 10G with multipath).
I've used Solaris for my own fileserver. I've now switched to zfsonlinux
and it works fine. The only glitches I had were hangs with ftp shares,
which might be gone in a newer version.
Zfs-fuse is still working nicely in some small home offices I admin
(those are production systems, yes), mainly because of the built-in
redundancy, the error checking on commodity hardware, and the ability to
do very efficient incremental offsite backups (using zfs
send/receive).
Well, we end up saying the same things. :)
Post by sgheeren
Cheers,
Seth