Discussion: vfs.zfs.l2arc_noprefetch tuning
Igor Hjelmstrom Vinhas Ribeiro
2012-03-14 14:00:02 UTC
Hi!

I would like the l2arc to be used as much as possible: I have a pool
with terabytes of data sitting on top of a loopback-mounted,
LUKS-encrypted s3backer virtual file - which means reads and writes
are both slow - plus a fast local cache device (l2arc) in the same
pool.

One thing that seems like it would help is tuning the
l2arc_noprefetch and l2arc_write_max parameters mentioned here:
http://forums.freebsd.org/showthread.php?t=29907

Both seem to be hard-coded in the zfs-fuse source code right now (I
started by looking at
http://gitweb.zfs-fuse.net/?p=official;a=blob;f=src/lib/libzpool/arc.c;h=af9067d361864c4d85c4e628f5729a50cc961a52;hb=HEAD
).
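
For reference, the relevant defaults in that arc.c look roughly like
the sketch below (based on the OpenSolaris-derived code zfs-fuse is
built on; exact names and values may differ slightly in a given
checkout):

  /*
   * L2ARC tunables near the top of src/lib/libzpool/arc.c
   * (reference sketch only -- check your own checkout).
   */
  #define L2ARC_WRITE_SIZE    (8 * 1024 * 1024)   /* initial write max: 8 MB */
  #define L2ARC_HEADROOM      2                   /* number of write sizes to scan ahead */

  uint64_t  l2arc_write_max   = L2ARC_WRITE_SIZE; /* max bytes written to l2arc per feed cycle */
  uint64_t  l2arc_write_boost = L2ARC_WRITE_SIZE; /* extra write bytes while the device warms up */
  uint64_t  l2arc_headroom    = L2ARC_HEADROOM;   /* how far past the ARC tail the feed thread scans */
  boolean_t l2arc_noprefetch  = B_TRUE;           /* don't cache prefetched (streaming) buffers */

On FreeBSD these are exposed as the vfs.zfs.l2arc_* sysctls mentioned
in the forum thread above; in zfs-fuse they are plain C globals, so
changing them means editing the source and rebuilding.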

Has anyone here done tests setting vfs.zfs.l2arc_noprefetch to false,
so that streaming workloads are also cached in the l2arc, and bumping
up l2arc_write_max significantly? Is there any reason why it could be
a bad idea to do this (change both parameters)?

Regards,
igorhvr
Manuel Amador
2012-03-14 14:05:06 UTC
You do know that with that kind of layering, you have absolutely no data
safety guarantees from ZFS in case of a crash, right?
--
Manuel Amador (Rudd-O)
http://rudd-o.com/
Igor Hjelmstrom Vinhas Ribeiro
2012-03-14 14:21:45 UTC
Manuel,

Thanks for your comment. I understand that this is risky, and
because of this I always keep fresh separate backups (updated weekly
and scrubbed once a month), just in case.

At any rate, I am happy to report that I have been using this setup
(with constant writing and reading) for almost two years now, and
after several crashes, out-of-memory errors on the machine, unexpected
reboots, and even accidentally killing the zfs-fuse process (coupled
with erasing the zpool cache), zpool scrub still reports everything as
OK.

Regards,
Igor.
Manuel Amador
2012-03-14 14:45:02 UTC
That is excellent news.
--
Manuel Amador (Rudd-O)
http://rudd-o.com/
Igor Hjelmstrom Vinhas Ribeiro
2012-03-17 03:18:47 UTC
All,

In case anyone now or in the future is curious about this, let me
report my results.

I went ahead and made the changes - setting l2arc_noprefetch to
false and increasing l2arc_size and l2arc_headroom - and things seem
to be working really smoothly: no crashes or any issues I could
detect so far, and the l2arc is being filled much faster now.

The changes are
https://github.com/igorhvr/zfs-fuse/commit/dd5c4c74eb1b1ac4f791431c68dd3d74bc72ea60
and https://github.com/igorhvr/zfs-fuse/commit/f3146dd0f3ac661626c7b799f4d6c5114eab9e73
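
For anyone wanting to try the same thing, the kind of edit involved
is just changing those globals in src/lib/libzpool/arc.c and
rebuilding - roughly along the lines of the sketch below (the values
are illustrative only; the commits linked above are the authoritative
record of the actual change):

  /* Sketch of the kind of change described above (illustrative values). */
  boolean_t l2arc_noprefetch  = B_FALSE;          /* was B_TRUE: also feed streaming/prefetch reads to the l2arc */
  uint64_t  l2arc_write_max   = 64 * 1024 * 1024; /* was 8 MB: write more to the cache device per feed cycle */
  uint64_t  l2arc_headroom    = 8;                /* was 2: scan further ahead of the ARC tail when feeding */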

Regards,
igorhvr