Discussion:
zfs-fuse gone crazy! won't export, won't destroy, won't let go of disks
VL
2013-01-06 02:27:19 UTC
tl;dr zfs-fuse won't export or destroy my pool, claiming 'busy', even when
its disks are in use by an mdadm raid!

So I had 4x3TB drives on which I created a raidz pool for some temporary
testing. When I was done, I was unable to export or destroy the pool:

# zpool export -f rstoreb1
cannot export 'rstoreb1': pool is busy
# zpool destroy -f rstoreb1
cannot destroy 'rstoreb1': pool is busy

I can find nothing using it with lsof -n, ps -ef, or fuser.
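
(Roughly the sort of checks I mean, exact flags may vary, using the mount point from above:)

# lsof -n +D /extra/rstoreb1      (open files under the mountpoint)
# fuser -vm /extra/rstoreb1       (processes using that filesystem)
# ps -ef | grep zfs               (anything zfs-related still running)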

At this point I'm annoyed, and stop it with /etc/init.d/zfs-fuse stop

I partition the 4x3TB drives with parted, create my raid with mdadm,
make a filesystem with mkfs.ext4, and mount it.

At this point mdadm is still building the raid, the new raid is mounted, and I
can copy files to it just fine.

I do a final reboot to make sure everything boots and mounts well ...
and lo and behold ... zfs-fuse starts at boot and mounts rstoreb1 again!

/dev/md127 7.9T 1.7G 7.9T 1% /extra/mdstore2
rstoreb1 8.1T 8.0T 89G 99% /extra/rstoreb1

These two raids are using the same hard drives!!

zpool says it's healthy (I copied a 1GB file to the md127 raid device that
shares the same disks with the zfs raidz).

# zpool status
  pool: rstoreb1
 state: ONLINE
 scrub: resilver completed after 0h0m with 0 errors on Sat Jan  5 17:55:15 2013
config:

        NAME                                            STATE   READ WRITE CKSUM
        rstoreb1                                        ONLINE     0     0     0
          raidz1-0                                      ONLINE     0     0     0
            disk/by-id/wwn-0x5000c5004e5454fe           ONLINE     0     0     0
            disk/by-id/ata-ST3000DM001-1CH166_Z1F16Q3S  ONLINE     0     0     0  10K resilvered
            disk/by-id/ata-ST3000DM001-1CH166_Z1F16QB0  ONLINE     0     0     0
            disk/by-id/ata-ST3000DM001-1CH166_Z1F16QDX  ONLINE     0     0     0

errors: No known data errors

At this point I am confused. I like zfs-fuse and I want to (need to)
continue using it with my external e-sata drives, but for this internal set of
disks I want mdadm, and zfs just won't let go.

How can I coax zfs to nicely let go of this volume?

# dpkg -l | grep zfs
ii  zfs-fuse        0.6.9-1build1       ZFS on FUSE

I'm running Ubuntu 12.04.1 LTS.

Note: I had uninstalled zfs-kernel-server at one point; after reading that it
might be necessary, I just reinstalled it (didn't start it), but I still
can't destroy or export pool rstoreb1.

Any ideas? Thanks so much !!

vl
--
To post to this group, send email to zfs-fuse-/***@public.gmane.org
To visit our Web site, click on http://zfs-fuse.net/
Gavin Chappell
2013-01-06 10:14:44 UTC
The way I'd do it is export as many pools as I could (including your
external disks), stop zfs-fuse, and remove the zpool.cache file (I think it
lives in /etc/zfs, but it's been a while since I used zfs-fuse so I'm not 100%
sure), then restart zfs-fuse and import just the pools you need. I think the
problem stems from the fact that partitioning the disks with parted doesn't
overwrite all the zfs metadata on them, so if the zpool.cache file still
lists those disks, there's enough metadata left that they can technically
still be mounted.
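
Something along these lines, anyway (the cache path is from memory, so check where your package actually puts it, and the pool name here is just an example):

# zpool export extpool1            (repeat for every pool you can export cleanly)
# /etc/init.d/zfs-fuse stop
# mv /etc/zfs/zpool.cache /root/zpool.cache.bak    (move it aside rather than deleting, just in case)
# /etc/init.d/zfs-fuse start
# zpool import extpool1            (re-import only the pools you still want mounted)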

Alternatively, you may find that you need to quickly overwrite the start
and end of each disk with dd to clear the zfs metadata off, but I don't
know where it's located or how big it is off the top of my head. It's
almost certainly come up before on either the zfs-fuse or zfs-discuss
Google Group though, so have a look through the archives and you should
find the info you need.
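
If memory serves, ZFS keeps four 256KB vdev labels per device, two at the very start and two at the very end, so zeroing a couple of MB at each end should be enough, but verify that before relying on it. Something like this, where sdX is a placeholder (and the usual warning applies: dd against the wrong device will destroy it):

# dd if=/dev/zero of=/dev/sdX bs=1M count=2
# dd if=/dev/zero of=/dev/sdX bs=512 count=4096 seek=$(( $(blockdev --getsz /dev/sdX) - 4096 ))

The second command zeroes the last 2MB; blockdev --getsz reports the size in 512-byte sectors.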
Emmanuel Anne
2013-01-06 10:44:08 UTC
Yes, good advice; removing this cache is equivalent to a forced export.
The returning volume is perfectly normal: the filesystem info is not in the
same place for ext4 and zfs, so just don't run a scrub on it now.
The always-busy state is quite weird. It's either a process sitting somewhere
in the path of that drive, or a bug in an old version that had trouble
exporting a root filesystem if it had too many sub-filesystems, in which case
you had to start by unmounting the sub-filesystems (but that's very unlikely;
this bug was fixed a long time ago and I never heard of it returning).
--
my zfs-fuse git repository :
http://rainemu.swishparty.co.uk/cgi-bin/gitweb.cgi?p=zfs;a=summary
VL
2013-01-06 11:37:07 UTC
Thanks for that advice, I will try removing the cache. I had thought
myself that the next step would be using dd to wipe the first part of
the drives, just as you suggested.

As for the busy part: it does indeed seem weird that it won't unmount
when fuser / lsof / ps show no processes using it. I was thinking of
skipping the Ubuntu package and downloading the most recent version
directly from the zfs-fuse website and compiling it.

Thanks again !
VL
2013-01-06 11:43:48 UTC
FYI, on Ubuntu the cache is /var/lib/zfs/zpool.cache.

Indeed, stopping zfs-fuse, rm'ing that file, and restarting had the
effect of exporting it. Thanks!

However, "zpool import" (while I will do a lot) still shows that as able to
import, so I think I'm going to use the dd method to overwrite the metadata.
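
For what it's worth, this is roughly how I'm checking (sdX stands for one of the member disks, and I'm assuming the zfs-fuse package ships zdb):

# zpool import                 (still lists rstoreb1 as available to import)
# zdb -l /dev/sdX              (dumps whatever vdev labels are left on the disk)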

Thanks again !
VL
2013-01-07 09:28:03 UTC
Just to follow up and bring this to closure ...

Indeed, repartitioning and doing a newfs did not overlap where ZFS writes
its metadata. The only real problem here is ZFS's inability to 'export'
something, claiming it's busy, even when it is most certainly not busy
(many reboots, restarts, fuser/lsof checks, etc.). After reading around, I
think it's an issue/bug with the version of zfs I'm using.

[aside] Unfortunately I still use zfs with my external esata drives. Archiving
media off to DVD-R became too slow and too small, so I started getting
cheap 500GB drives and creating external (esata) raidz (raid5) sets using zfs,
1.5TB usable each. When a set was filled, I'd catalog it, label the drives
(raidset01, raidset02, etc.), put them on the shelf, and get 4 new
drives. ZFS works amazingly well for this. [/aside]

What I finally did was:
* stop the zfs daemon
* use dd to overwrite the first 10MB and last 10MB of each disk (if=/dev/zero,
etc.; see the sketch below)
* start the zfs daemon; no metadata signature, no problem
* re-partition, re-init the mdadm raid, re-newfs, all good
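
For anyone finding this later, the dd step looked roughly like this. The device names are placeholders, so check yours very carefully before running anything similar, since it wipes data; blockdev --getsz reports the disk size in 512-byte sectors, and 20480 of those is 10MB.

# /etc/init.d/zfs-fuse stop
# for d in /dev/sdb /dev/sdc /dev/sdd /dev/sde ; do
>   dd if=/dev/zero of=$d bs=1M count=10
>   dd if=/dev/zero of=$d bs=512 count=20480 seek=$(( $(blockdev --getsz $d) - 20480 ))
> done
# /etc/init.d/zfs-fuse start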

So I learned some stuff in the process and I'm back in action.

Thanks again everyone for the help!!

vl