Discussion:
Generic atomics_cas_32 patch
Gordan Bobic
2014-02-19 11:02:10 UTC
To get the latest 0.7.0 (20121023) version to work on ARM, I wrote the
attached patch, which provides a generic atomics_cas_32() implementation:
essentially a copy of atomics_cas_64 with the parameters and return type
changed from 64-bit to 32-bit.

This fixes the FTBFS.

Please consider accepting this patch in the authoritative git tree (and
perhaps even tagging the 0.7.0 release).

I know zfs-fuse is semi-abandoned, but it still has a number of
important uses, such as 32-bit Linux platforms, as well as being a really
handy fall-back recovery option that doesn't involve trying to import a
pool on a completely different OS.


Gordan
--
--
To post to this group, send email to zfs-fuse-/***@public.gmane.org
To visit our Web site, click on http://zfs-fuse.net/
---
You received this message because you are subscribed to the Google Groups "zfs-fuse" group.
To unsubscribe from this group and stop receiving emails from it, send an email to zfs-fuse+unsubscribe-/JYPxA39Uh5TLH3MbocFF+G/***@public.gmane.org
For more options, visit https://groups.google.com/groups/opt_out.
Emmanuel Anne
2014-02-19 13:57:21 UTC
Well, I am not sure anyone still uses the repository, but it's pushed anyway:
http://rainemu.swishparty.co.uk/cgi-bin/gitweb.cgi?p=zfs;a=summary

But this one is still quite different from the official 0.7.0!
Anyway, it's free; if someone wants to open a github repository from this,
there is no problem.
Gordan Bobic
2014-02-19 14:01:02 UTC
My understanding was that there was no longer an "official" zfs-fuse, and
that 0.7.0 was never actually released. Or was it? Is there a more
official/authoritative zfs-fuse repository somewhere? I thought yours was
the only one with pool v26 support, for a start.
Emmanuel Anne
2014-02-20 18:18:58 UTC
Yeah, I guess that's the problem, but 0.7.0 was released; it has official
packages everywhere. However, version 26 of the pool never made it into 0.7.0,
and that's the point where everything stopped for zfs-fuse.
The official page was at http://zfs-fuse.net/ but the site went down lately,
since someone had to pay for it and it was no longer worth paying for.
The git repository from there is still up and probably contains the
latest official 0.7.0: http://zfs-fuse.sehe.nl/ (that's where it was
built, after all!).
Gordan Bobic
2014-02-21 18:42:48 UTC
Post by Emmanuel Anne
Yeah, I guess that's the problem, but 0.7.0 was released; it has official
packages everywhere. However, version 26 of the pool never made it into 0.7.0,
and that's the point where everything stopped for zfs-fuse.
The official page was at http://zfs-fuse.net/ but the site went down lately,
since someone had to pay for it and it was no longer worth paying for.
The git repository from there is still up and probably contains the
latest official 0.7.0: http://zfs-fuse.sehe.nl/ (that's where it was
built, after all!).
I was under the impression that your pool v26 branch was based on post-0.7.0
code, even if it wasn't tagged as such. Were there any commits that went into
the 0.7.0 release that didn't make it into your v26 tree?

On a separate note, I'm about to try attacking zfs-fuse with valgrind,
because when doing zfs receive on a 2GHz ARM (single-core ARMv5) I am only
getting about 25MB/s (large files), and similar effective throughput on a
scrub (50MB/s, but that seems to be the total across all disks, and I am
running a 4-disk RAIDZ2, so the effective scrub speed is 25MB/s). The CPU
usage is split roughly 1/3 netcat and 2/3 zfs-fuse, and about 25% is showing
up as system, probably due to the fuse kernel/userspace context switching.
Scrub is purely CPU bound.

Still, it would be nice to try to squeeze a little more performance out of
it. 25MB/s is about 1/8 of what the network subsystem on the machine (dual
gigabit ethernet, apparently on the CPU die rather than connected over PCIe)
and 1/4 of what the disk controller (PCIe x1, so 120MB/s theoretical max)
should be able to handle. Having said that, I am not too hopeful: it's not
as if there is vectorization I could leverage in hardware for a 4x speedup,
and this CPU only has MD5 and SHA1 async offload in hardware via cryptodev,
so nothing useful for ZFS's checksums, which IIRC are fletcher and sha2.
Emmanuel Anne
2014-02-21 21:42:37 UTC
Post by Gordan Bobic
I was under the impression that your pool v26 branch was based on post-0.7.0
code, even if it wasn't tagged as such. Were there any commits that went into
the 0.7.0 release that didn't make it into your v26 tree?
Probably, mainly some packaging fixes, but there is probably some stuff that
I didn't see. There was no mail for each package, so I had to monitor the
other git repository to keep in sync, which I didn't do!
Post by Gordan Bobic
On a separate note I'm about to try attacking zfs-fuse with valgrind
because when doing zfs receive on a 2GHz ARM (single core ARMv5)I am only
getting about 25MB/s (large files), and similar effective throughput on a
scrub (50MB/s but that seems to be total across all disks, and I am running
a 4-disk RAIDZ2, so effective scrube speed is 25MB/s). The CPU usage is
split roughly 1/3 netcat and 2/3 zfs-fuse, and about 25% is showing up as
system, probably due to the fuse kernel/userspace context switching. Scrub
is purely CPU bound.
Still it would be nice to try to squeeze a little more performance out of
it - 25MB/s is about 1/8 of what the network subsystem on the machine (dual
gigabit ethernet apparently on the CPU die, not connected over PCIe) and
1/4 of what the disks controller (PCIe x1, so 120MB/s theoretical max)
should be able to handle. Having said that, I am not too hopeful - its not
like there is vectorization I could leverage in hardware for a 4x speedup,
and this CPU only has MD5 and SHA1 async offload in hardware via cryptodev,
so not useful for ZFS's hashes which, IIRC are fletcher and sha2.
And good luck with valgrind; it makes programs run very slowly while it
tests them, and the zfs code is extremely complex. But I guess you already
know that. In any case, you'll need luck!
Emmanuel Anne
2014-02-21 21:43:33 UTC
There was no mail for each package, so I had to monitor the other git
repository to keep in sync, which I didn't do!
There was no mail for each commit!!!
Gordan Bobic
2014-02-22 09:10:49 UTC
Post by Gordan Bobic
I was under the impression that your pool v26 branch was based on
post-0.7.0 code, even if it wasn't tagged as such. Were there any
commits that went into the 0.7.0 release that didn't make it into your
v26 tree?
Probably, mainly some packaging fixes, but there is probably some stuff
that I didn't see. There was no mail for each package, so I had to
monitor the other git repository to keep in sync, which I didn't do!
I see. Is there a list of the commits you added to get pool v26 working (and
any other fixes you committed)? Is it a huge list? If not, I guess the
simplest way to reconcile the repositories might be to take the 0.7.0
release and merge/backport all of your extras into that.
Post by Emmanuel Anne
And good luck with valgrind; it makes programs run very slowly while it
tests them, and the zfs code is extremely complex. But I guess you already
know that. In any case, you'll need luck!
I'm mostly hoping to see which functions eat the most CPU, and to see if
there is some optimization I can apply that might give a decent boost, at
least on CPU-limited architectures like ARM.

Gordan
Emmanuel Anne
2014-02-22 11:36:33 UTC
About the merge of the repositories: in theory yes, but in practice they
diverged in a lot of small details, if I remember correctly. Merging both
wouldn't be easy, but of course it's possible for those who are really
motivated!

About optimization: yes, that was my way of thinking too, but from what I
remember zfs-fuse is not really CPU intensive; it spends most of its time
waiting for threads to wake up on conditions, and it's a damn mess of
threads!
As I said: good luck!
Gordan Bobic
2015-05-13 12:46:01 UTC
Sorry to necropost, but is there anywhere I can get the "official" 0.7
release git code in a per-patch consumable format? I'm pondering looking at
merging the patches that didn't make it into the branch with v26 pool
support.

There is some drive toward using zfs-fuse again, since Debian seems to
intend to include support for ZFS as the rootfs, and due to licensing there
was a suggestion to use zfs-fuse in the installer and switch to ZoL later in
the process.

So merging things into a definitive latest version would probably be
helpful.

On a side note, having migrated all of my recent x86 machines to a ZFS
rootfs, I am pondering starting to do similar with my 32-bit machines
(mostly ARM) that use zfs-fuse, and that means adapting the dracut modules
from ZoL for zfs-fuse.

Gordan