df is really already ruined as soon as you use zfs :). I just filter any
zfs entries out of its output and it is fine :), then use zfs list to
complete the picture (where it is easy to filter out the mounted clones).
zfs is exciting :).
On 30/03/2012 2:39 PM, Emmanuel Anne wrote:
> If you really want to experiment with more than 1000 filesystems
> (which will definitely ruin the output of df and mount!), change the
> MAX_FILESYSTEMS define in zfs-fuse/fuse_listener.c and let me know if
> it works.
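> For example (assuming the define still looks like
> "#define MAX_FILESYSTEMS 1000" -- check the file first):
>
>     grep -n MAX_FILESYSTEMS zfs-fuse/fuse_listener.c
>     sed -i 's/#define MAX_FILESYSTEMS .*/#define MAX_FILESYSTEMS 10000/' \
>         zfs-fuse/fuse_listener.c
>
> then rebuild and restart the daemon. Anything below 32767 should be ok.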
> You'll see my code is now incredibly fast at handling all these
> filesystems; I just hope it won't create any side effects!
>
2012/3/30 Ryan How <rhow-***@public.gmane.org>
>
> The snapshots are created from a backup script. But I wrote a test
> just to make sure my backup script would work and rotate the
> snapshots properly. At first it only took half a second for each
> snapshot / clone and then got longer and longer as it got over
> 200. I didn't notice any output from the zfs-fuse daemon, but I
> didn't look :). I'll take a look at your git version as soon as I
> get a chance and run my test again to see if I get better results.
> I'll keep an eye on thread counts and open file descriptors, etc...
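>
>     In case it helps, the rotation is roughly this shape (filesystem
>     and clone names here are made up, and note that a snapshot's clone
>     has to be destroyed before the snapshot itself):
>
>         #!/bin/sh
>         FS=tank/data
>         NOW=$(date +%Y%m%d-%H%M%S)
>
>         # new snapshot plus a clone so the backup is browsable
>         zfs snapshot "$FS@backup-$NOW"
>         zfs clone "$FS@backup-$NOW" "tank/clones/backup-$NOW"
>
>         # rotate: drop everything older than the newest 200
>         zfs list -t snapshot -o name -s creation -H |
>             grep "^$FS@backup-" | head -n -200 |
>         while read SNAP; do
>             TAG=${SNAP#$FS@}
>             zfs destroy "tank/clones/$TAG"   # the clone depends on it
>             zfs destroy "$SNAP"
>         done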
>
> Thanks!
>
>
> On 30/03/2012 5:50 AM, Emmanuel Anne wrote:
>> I just added 2 commits to my git repository for much faster
>> mounts (this one is safe: it just removes a very old sync() call
>> and should be 100% safe to use) and much faster unmounts (this
>> one seems to work, which is surprising because I remember we
>> added these sync() calls to prevent zpool export from failing
>> when there is a tree of subvolumes. Well, I tried to make it fail
>> and it worked every time; the sync calls are just replaced by an
>> empty loop when needed!). Anyway it makes handling 1000 clones
>> much, much faster and much more reasonable!
>>
>> 2012/3/29 Emmanuel Anne <emmanuel.anne-***@public.gmane.org>
>>
>> I was curious to see this bug in action, so I tried to
>> reproduce it.
>> Well it took forever to create the snapshots, and then to
>> clone them. I did it on a ramdisk to speed things up, but the
>> clone operation creates a new filesystem and it isn't immediate.
>> Oh well, I reached the end finally, wondering how you could
>> create and manage so many snapshots at the same time.
>> Well it works for me.
>> I mean, when I reach 1000, the daemon displays "Warning:
>> filesystem limit (1000) reached, unmounting.." which is
>> probably lost for you since you run it from the init.d script.
>>         Anyway, I had created 1002 clones; only clones 1 through 999
>>         get mounted, the ones past that are cleanly unmounted and the
>>         dir remains empty.
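>>
>>         For the record, the test was just a loop of this kind (pool
>>         on a ramdisk, all names made up):
>>
>>             zpool create tank /dev/ram0
>>             zfs create tank/fs
>>             i=1
>>             while [ $i -le 1002 ]; do
>>                 zfs snapshot tank/fs@s$i
>>                 zfs clone tank/fs@s$i tank/clone$i
>>                 i=$((i+1))
>>             done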
>>
>> Normal file operations / zfs / zpool commands still work.
>>
>>         By the way this limit comes from a define (MAX_FILESYSTEMS),
>>         so it can be changed to 10000 if you like; it should work up
>>         to 32767 because in the end it doesn't create a new thread
>>         per filesystem, it just creates a new socket (for fuse).
>>         Of course this isn't in 0.7.0, it's in my git version, but
>>         normally there shouldn't be any difference with 0.7.0 on
>>         this point.
>>
>>
>>         2012/3/29 Kartweel <rhow-***@public.gmane.org>
>>
>>             Just thinking, maybe put a limit on the number of mounted
>>             file systems (if it is possible), so it doesn't run out
>>             of threads and completely freeze up? Because they are
>>             mounted on startup, it makes it a bit nastier. And it is
>>             very easy to run into if you have a few scripts getting
>>             excited making snapshots and mounting them.
>>
>>
>> On Wednesday, 28 March 2012 08:24:13 UTC+8, Kartweel wrote:
>>
>> Thanks,
>>
>>
>> On 28/03/2012 2:12 AM, sgheeren wrote:
>>         > Yes, there are known problems with a lot of clones mounted
>>         > (but that has nothing to do with your previous quote). It
>>         > has to do with the threading model for the fuselistener
>>         > (every FS gets a thread). The thread allocation runs out.
>>         > IME things just fail after exceeding the max number of
>>         > available threads, but I can imagine other systems
>>         > experiencing other difficulties. Seth
>>
>>         That seems to explain it. The thread count of the zfs-fuse
>>         process is 1116, prolly a bit too high eh :)
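>>
>>         (I checked with something along the lines of
>>
>>             ps -o nlwp= -p $(pgrep -x zfs-fuse)
>>
>>         which prints the number of threads of the zfs-fuse process.)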
>>
>>         I've been using zfs-fuse for a while now and haven't had any
>>         issues (apart from this), and zfs on linux looked quite new
>>         (although it looks like it has made a lot of progress
>>         recently!), so I might give it a go when it seems to have
>>         stabilized a bit.
>>
>>         For now I'll just drop the dream of keeping lots of
>>         snapshots :). Previously I was hard-linking and copying
>>         changed files, but with de-duplication it didn't use extra
>>         space. I thought snapshots would be a far more efficient
>>         method, but it seems they have some limits....
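>>
>>         (The hard-link-and-copy scheme can be done with e.g. the
>>         usual rsync trick; paths here are only illustrative:
>>
>>             rsync -a --link-dest=../prev /data/ backups/new/
>>
>>         unchanged files become hard links into the previous backup,
>>         and with dedup the copied files don't eat extra space
>>         either.)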
>>
> --
> my zfs-fuse git repository :
> http://rainemu.swishparty.co.uk/cgi-bin/gitweb.cgi?p=zfs;a=summary
--
To post to this group, send email to zfs-fuse-/***@public.gmane.org
To visit our Web site, click on http://zfs-fuse.net/