Discussion:
Hi, Is this the best place to ask questions about ZFS-fuse and ZFS optimization?
astroboy589
2011-08-03 02:15:04 UTC
Hi All,

I've been playing around with ZFS for a while and I'd like to get your
opinion on my setup.

If this is not the ideal place to ask this kind of question, can you
direct me to a place where I can?

Thanks!
--
To post to this group, send email to zfs-fuse-/***@public.gmane.org
To visit our Web site, click on http://zfs-fuse.net/
Jack Sparrow
2011-08-05 23:47:57 UTC
Post by astroboy589
I've been playing around with ZFS for a while and i want to get your
guys opinion on my setup of ZFS.
Yep, ask away.
astroboy589
2011-08-06 10:46:36 UTC
Hey,

I've got Ubuntu 10.10 running ZFS-Fuse.

ZFS manages my 6x 2TB hard drives in raidz2. The pool is named storage
and is mounted at /storage.

These hard drives are consumer quality (7,200 RPM).

It all adds up to about 7TB, which gets shared out via Samba.

I have a cron job running zpool scrub storage (via sudo), which will
hopefully alert me when my hard drives are starting to die.
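For what it's worth, that cron job can be as small as the following sketch (pool name storage as above; the file path is hypothetical, and installing it in root's crontab avoids the need for sudo):

```shell
#!/bin/sh
# /etc/cron.weekly/zfs-scrub (hypothetical path): start a weekly scrub.
# zpool scrub returns immediately; the scrub itself runs in the background.
zpool scrub storage
# A later job can run `zpool status -x`, which prints "all pools are
# healthy" when everything is fine and the error details otherwise.
```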

Is this ideal? I'm worried about what happens when a hard drive dies:
how do I rebuild the array while it's down to 5 hard drives, and how
do I add another hard drive back into the array? Also, could I make
the new 6th hard drive a 3TB drive?

Any advice would be great; I have a gut feeling this setup is not
ideal.
Post by Jack Sparrow
Post by astroboy589
I've been playing around with ZFS for a while and i want to get your
guys opinion on my setup of ZFS.
Yep, ask away.
Emmanuel Anne
2011-08-06 11:53:38 UTC
Extending the size of a raid setup is quite hard: if you just swap one
disk for a bigger one, the extra space will simply be ignored, because
all the disks need to be of the same size.

Anyway, to replace a disk when it begins failing, just use zpool
replace once it's unavailable; you can optionally use zpool offline
first, before you unplug it.
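In command form, the sequence looks roughly like this (pool name storage from the thread; the device paths below are placeholders, not real disk names):

```shell
# Optionally take the failing disk offline before unplugging it:
zpool offline storage /dev/disk/by-id/old-disk
# After cabling in the replacement drive, resilver onto it:
zpool replace storage /dev/disk/by-id/old-disk /dev/disk/by-id/new-disk
# Watch progress until the resilver completes:
zpool status storage
```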

To monitor the disks, smartctl can be good too, but some disks fail
suddenly for external reasons, and in that case it's hard to be
prepared (the big problem here is what happens if all the disks fail
at the same time, or if the controller fails).
Post by astroboy589
Hey,
I've got Ubuntu 10.10 running ZFS-Fuse.
ZFS manages my 6x 2TB Hard Drives in Raidz2. It's Mounted on the /
named storage.
These hard drives are consumer quality (7.2k speed).
It all adds up to about 7TBs that gets shared out by a Samba share.
I have a Cron Job running a sudo based zpool scrub storage, to
hopefully alert me to when my hard drives are going out (die).
Is this ideal? I'm worried about when a hard drive dies how to rebuild
the array again when I have to go down to 5 hard drives? Then how to
add another hard drive back into the array? Plus could make the new
6th hard drive a 3TB hard drive.
any advice would be great i have a gut feeling this setup is not
ideal.
Post by Jack Sparrow
Post by astroboy589
I've been playing around with ZFS for a while and i want to get your
guys opinion on my setup of ZFS.
Yep, ask away.
--
my zfs-fuse git repository :
http://rainemu.swishparty.co.uk/cgi-bin/gitweb.cgi?p=zfs;a=summary
sgheeren
2011-08-06 19:12:10 UTC
Post by Emmanuel Anne
Anyway, to replace when it begins failing, just use zpool replace once
it's unavailable, you can eventually use zpool offline first when you
unplug it.
Under contrib you'll find the sample script zfs_pool_alert. This
script (the sample is in perl) can be adapted to your needs and put
into /etc/zfs/zfs_pool_alert; it will then get called whenever a vdev
changes its state ((un)available and on/offline).
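For readers who just want the shape of such a hook, here is a minimal shell stand-in. The shipped sample in contrib is perl, and the positional-argument layout assumed below (pool, vdev, new state) is my illustration only; check the real sample for the actual interface before relying on it.

```shell
#!/bin/sh
# Hypothetical stand-in for /etc/zfs/zfs_pool_alert. The arguments
# (pool, vdev, state) are an assumption for illustration.
LOG=${LOG:-/tmp/zfs_pool_alert.log}
pool=${1:-unknown}
vdev=${2:-unknown}
state=${3:-unknown}

# Always keep a local trail of state changes.
echo "$(date '+%F %T') pool=$pool vdev=$vdev state=$state" >> "$LOG"

# Forward the event by mail if a mailer happens to be installed.
if command -v mail >/dev/null 2>&1; then
    printf 'vdev %s in pool %s changed state to %s\n' \
        "$vdev" "$pool" "$state" | mail -s "ZFS alert: $pool" root || true
fi
```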
astroboy589
2011-08-06 23:11:34 UTC
Hey,

I'm pretty sure I sent this before, but it's not appearing here, so
I'm re-typing it.

How do I get this script? Is there a link?

Do I have to use something like "man contrib"?

Basically, how do I install it, if I need it?

I'm assuming that your reference to vdevs means, in my case, my 6
HDDs.

Thanks!

XD
Post by Emmanuel Anne
Anyway, to replace when it begins failing, just use zpool replace once
it's unavailable, you can eventually use zpool offline first when you
unplug it.
Under contrib find the sample script zfs_pool_alert. This script (the
sample is in perl) can be adapted to your needs and put into
/etc/zfs/zfs_pool_alert; It will then get called whenever a vdev changes
it state ((un)available and on/offline).
sgheeren
2011-08-07 13:52:42 UTC
Post by astroboy589
Hey,
I'm pretty sure i sent this before but its not appearing here so I'm
re-typing it.
Got do I get this Script? Is there a Link?
If you just want to do manual maintenance of pools, there is no use in
getting the script. The script is there to allow automatic admin actions
on vdev failures (such as syslogging, email, paging and of course
automatic replacement with a spare).

That said, here is the link (it is also in any tarball downloadable from
the site):

<http://gitweb.zfs-fuse.net/?p=official;a=tree;hb=maint>
http://gitweb.zfs-fuse.net/?p=official;a=tree;f=contrib;hb=maint
Daniel Smedegaard Buus
2011-08-11 08:49:15 UTC
Post by astroboy589
Hey,
I'm pretty sure i sent this before but its not appearing here so I'm
re-typing it.
Got do I get this Script? Is there a Link?
Seth, correct me if I'm wrong here, but isn't this script already
included with an out-of-the-box sudo apt-get install zfs-fuse? I don't
remember manually installing it, and I've used it on my rig just the
same?
Post by astroboy589
Do I have to use something in a MAN CONTRIB?
Basically how do i install it? if i need it.
Since you have a Gmail account, you might want to set up postfix to
automatically deliver your email to your Gmail account; this also
works if you want to monitor your drives' SMART values using
smartmontools. I made an addendum and a link to a guide on my blog:
http://danielsmedegaardbuus.dk/2010-10-10/setting-up-postfix-to-use-gmails-smtp-server-for-sending-mail/
Ryan Dlugosz
2011-08-11 12:29:58 UTC
Post by sgheeren
Under contrib find the sample script zfs_pool_alert. This script (the
sample is in perl) can be adapted to your needs and put into
/etc/zfs/zfs_pool_alert; It will then get called whenever a vdev changes
it state ((un)available and on/offline).
What is the proper way to test this?

After editing the script to point at the name of my pool, I set one of
the drives offline. I didn't see any email, and the logs seem to
indicate that no attempt was made to send one... zpool status
confirmed that the pool was degraded.
sgheeren
2011-08-11 16:44:30 UTC
Post by sgheeren
Under contrib find the sample script zfs_pool_alert. This script (the
sample is in perl) can be adapted to your needs and put into
/etc/zfs/zfs_pool_alert; It will then get called whenever a vdev changes
it state ((un)available and on/offline).
What is the proper way to test this?
After editing the script to point at the name of my pool I set one of
the drives to offline. I didn't see any email & looking at the logs
seems to indicate that there was no attempt made to send one... zpool
status confirmed that the pool was degraded.
I'm sure you can use a USB device, and simply unplug it
astroboy589
2011-08-06 23:06:53 UTC
Hi,

OK, then how would I go about eventually moving all 6 HDDs (preferably
as each HDD dies) to a 6x 3TB setup that would actually use the
additional space?

Also, my RAID controller in this case is actually my motherboard (the
drives are plugged directly into the onboard SATA ports). Another
question that I have is...

When my motherboard dies, how do I keep the raid safe? I.e., do I have
to go out and buy another motherboard that is the same as my current
one? Or can I get a new type of motherboard that has 6 SATA ports,
re-install my OS onto the 7th HDD (it's only used for my Ubuntu OS),
install ZFS-Fuse again, and the array will magically appear?

If it doesn't magically appear, how do I get the new OS to recognize
the old zpool I created?

Thanks a lot!

XD
Post by Emmanuel Anne
Extending the size of a raid setup is quite hard, if you just change 1 disk
making it bigger, the new space will just be ignored because all the disks
need to be of the same size.
Anyway, to replace when it begins failing, just use zpool replace once it's
unavailable, you can eventually use zpool offline first when you unplug it.
To monitor the disks smartctl can be good too, but some disks can fail
suddenly sometimes for an outside reason and in this case it's hard to be
prepared (the big problem here is what happens if all the disks fail at the
same time, or if the controller fails).
Post by astroboy589
Hey,
I've got Ubuntu 10.10 running ZFS-Fuse.
ZFS manages my 6x 2TB Hard Drives in Raidz2. It's Mounted on the /
named storage.
These hard drives are consumer quality (7.2k speed).
It all adds up to about 7TBs that gets shared out by a Samba share.
I have a Cron Job running  a sudo based zpool scrub storage, to
hopefully alert me to when my hard drives are going out (die).
Is this ideal? I'm worried about when a hard drive dies how to rebuild
the array again when I have to go down to 5 hard drives? Then how to
add another hard drive back into the array? Plus could make the new
6th hard drive a 3TB hard drive.
any advice would be great i have a gut feeling this setup is not
ideal.
Post by Jack Sparrow
Post by astroboy589
I've been playing around with ZFS for a while and i want to get your
guys opinion on my setup of ZFS.
Yep, ask away.
Daniel Smedegaard Buus
2011-08-11 08:56:00 UTC
Post by astroboy589
Hi,
Ok, then how would I go about moving all the 6 HDDs eventually
(preferably as each HDD dies) to a 6x 3TB setup that would eventually
use the additional space I've created.
If you have your six 2TB drives and your six 3TB drives, you can just
use dd to mirror the smaller drives onto the larger ones, and then use
a partitioning tool to resize the 2TB partitions on the 3TB drives to
fill the entire space. ZFS will then automatically expand when you
import the pool afterwards.
Post by astroboy589
Also my Raid controller in this case, is actually my Motherboard
(directly plugged into the on board sata ports). Another question that
I have is...
When my Motherboard dies how do i keep the raid safe... IE. do I have
to go out and buy another motherboard that is the same as my current
one?
No, you can use any combination of hardware to attach your drives and
import your pool. USB, Firewire, SATA, on-board, expansion cards,
SCSI, IDE, it doesn't matter - if the drives are visible in Linux, ZFS
will import your pool. It doesn't matter if the paths or names of your
devices change, so long as what they're pointing to are your drives :)
Post by astroboy589
Or can i get a new type of motherboard that has 6 sata ports and then
re-install my OS back onto the 7 HDD (its only used for my Ubuntu OS).
Then do i install ZFS-Fuse again and the array will magically appear?
No, you will have to run sudo zpool import storage (perhaps zpool
import -f storage, in case you forgot to export the pool on the old
system, because ZFS will complain: "hm, looks like this pool belongs
to another system, are you sure you want to import it here?"). But
that's a one-time thing; after that it'll be there every time you
power on.
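Summarized as commands (pool name storage as in this thread; the export step only applies if the old system still boots):

```shell
# On the old system, if it still runs: cleanly hand off the pool.
zpool export storage
# On the new install, after installing zfs-fuse:
zpool import storage
# If the old board died and the pool was never exported, force it:
zpool import -f storage
```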

Good luck :) ZFS is awesome, as is ZFS-FUSE, as is this community!
Welcome aboard :)
sgheeren
2011-08-11 16:42:55 UTC
Post by Daniel Smedegaard Buus
If you have your six 2TB drives and your 6 3TB drives, you can just
use dd to mirror the smaller drives onto the larger ones, and then use
a partitioning tool to resize the 2TB partitions on the 3TB drives to
fill the entire space. ZFS will then automatically expand when you
import the pool afterwards.
Why would anyone use dd, and lose the protection of checksumming?

You could easily:

    zpool attach poolname /dev/old /dev/new

then wait for the resilver to complete, and:

    zpool detach poolname /dev/old

This has vast benefits:

* reliability: no bits can be lost, since there is end-to-end
checksumming
* performance: only blocks actually in use will be copied, possibly
defragmenting them on the fly
* if the process fails, or you reboot halfway, at any time:
  - it will restart after the reboot, without any risk
  - there is no risk of corruption (which you would potentially have
with dd: dd creates two disks with *IDENTICAL* vdev labels, where they
should actually have unique IDs; once you, or zfs-fuse, get confused
about which disk is the real vdev, there could be data loss)

I think there could be more relevant points, but I don't have time to
check my thoughts for completeness now - gotta run

Cheers
Seth
Daniel Smedegaard Buus
2011-08-12 16:42:45 UTC
Post by Daniel Smedegaard Buus
If you have your six 2TB drives and your 6 3TB drives, you can just
use dd to mirror the smaller drives onto the larger ones, and then use
a partitioning tool to resize the 2TB partitions on the 3TB drives to
fill the entire space. ZFS will then automatically expand when you
import the pool afterwards.
Post by sgheeren
Why would anyone use dd -- and loose the protection by checksumming
You could easily
     zpool attach poolname /dev/old /dev/new
     zpool detach poolname /dev/old
You can use attach to mirror a RAIDZ pool? Are you sure about this?
AFAIK it applies only to attaching extra devices to mirror pools...
How do you do that in practice? And can you do that with six devices
in one go? Otherwise you'd be running six resilvers, which could take
forever.

While dd doesn't protect any data, what you're mirroring is a vdev
member, not the pool data. So for a subsequent scrub on a RAIDZ2 pool,
well, I'm not a statistician, but I'd say the chances of dd causing
errors in three members at the exact same platter location are pretty
darn slim, probably slimmer than me getting hit by an airplane by
tomorrow :) Plus, you'd still have your old pool in case anything goes
wrong. If something goes wrong while attaching/mirroring/detaching/
resilvering 6 drives in sequence, there'll be no backup pool to go to.
Post by sgheeren
  * reliability (no bit's can be lost since there is end to end
checksumming)
  * performance benefits (only blocks actually in use will be copied,
possibly defragmenting them on the fly
        - it will restart after the reboot, without any risk
Unless it doesn't, in which case your pool is gone :)
Post by sgheeren
        - there won't be a risk of corruption (that you'd have
potentially when using dd: dd would create two disks with *IDENTICAL*
vdev labels, where they should actually have unique ID's.
Except the point would be to create a duplicate of your pool, so you
would want the identical vdev labels. The old drives should be
properly wiped, though, so there's no confusion if you were to
reinsert them and have them present in your system while an import was
going on.
Post by sgheeren
Once [you get
confused]/[you get zfs-fuse confused] about what disk is the real vdev,
there could be data loss.
Good point, so either way, remember to wipe those old drives (100 megs
at the start and end of the raw block device is plenty; I've done this
myself numerous times). But if there's a smarter attach/detach way to
duplicate your pool, that should most definitely be the way to go.
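That wipe can be sketched as below. The demo runs against a scratch file so it is safe to try; to wipe a real retired disk, point DEV at the raw block device (/dev/sdX here is a placeholder, your name will differ) and triple-check it first. ZFS keeps two vdev labels at the front of a device and two at the end, which is why both ends get zeroed.

```shell
# Demonstrated on a scratch file; set DEV to a real device at your own risk.
DEV=${DEV:-/tmp/fake-disk.img}

# Demo only: build a 300 MiB scratch "disk" full of random data.
dd if=/dev/urandom of="$DEV" bs=1M count=300 2>/dev/null

# Size in MiB (for a real block device, use blockdev --getsize64 instead).
SIZE_MB=$(( $(stat -c %s "$DEV") / 1048576 ))

# Zero the first 100 MiB (labels L0/L1 live at the front)...
dd if=/dev/zero of="$DEV" bs=1M count=100 conv=notrunc 2>/dev/null
# ...and the last 100 MiB (labels L2/L3 live at the end).
dd if=/dev/zero of="$DEV" bs=1M count=100 conv=notrunc \
   seek=$(( SIZE_MB - 100 )) 2>/dev/null
```

conv=notrunc matters: without it, the seek'ed write would truncate a regular file (it is harmless but included for the file demo; block devices can't be truncated anyway).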
Post by sgheeren
I think there could be more relevant points, but I don't have time to
check my thoughts for completeness now - gotta run
Cheers
Seth
sgheeren
2011-08-12 18:32:13 UTC
Post by Daniel Smedegaard Buus
You can use attach to mirror a RAIDZ pool? Are you sure about this?
AFAIK it applies only to attaching extra devices to mirror pools...
Maybe you are right. I should check that. But at least it applies to
single vdevs as well (when the toplevel is striped only, you can
attach mirrors to any of the vdevs).

(I admit I'm biased, because I (almost) exclusively use mirrored
setups for reduced complexity, lower CPU load and the highest read
performance.)
Post by Daniel Smedegaard Buus
And can you do that with six devices
in one go? Otherwise, you'd be running 6 times resilvering which could
take forever.
A resilver is _not_ a scrub! They are close cousins, but a resilver is
_much_ more optimized. And yes, if your pool layout permits the attach
operation, it will permit them in parallel just the same. However, I'd
do them sequentially, to reduce complexity and performance effects
(controller or CPU bottlenecks, or seek time degradation).

In all cases, resilvering a fresh mirror is going to outperform a dd of
the same vdev **by definition** (possibly by a large margin).
Post by Daniel Smedegaard Buus
While dd doesn't protect any data, what you're mirroring is a vdev
member, not the pool data. So in a subsequent scrub on a RAIDZ2 pool,
well,
Well, as always, common sense applies.

What you say is valid, but I didn't assume any pool layout
(specifically, no redundancy other than copies=n). Perhaps it was
wrong of me to respond to the message in isolation, but as things go,
people read advice from isolated posts they find through a search
engine, so a warning seemed in order.
Post by Daniel Smedegaard Buus
Unless it doesn't, in which case your pool is gone :)
How? You don't modify the source leg of the mirror. So nothing is lost.
Also, it _will_ continue.
Post by Daniel Smedegaard Buus
The old drives should be properly wiped
There you go: backup destroyed (and you'd need to do that before even
trying to import the pool using the cloned vdev). Or you'd be removing
the disks physically; it'll be hard to convince me that is more
reliable than using replace or attach/detach. I think the risk of
damaging a disk (or the system) when installing hardware is greater
than that of any bitrot or dd corruption, to name a random comparison.
sgheeren
2012-01-06 14:59:15 UTC
Whoa! You made it to TheDailyWTF!

http://thedailywtf.com/Articles/Calories-Math,-Exacting-Password-Requirements,-and-More.aspx

LOL!
Kevin van der Vlist
2012-01-06 15:03:20 UTC
Ah, that's where I recognized the name from. It sounded familiar, but
I couldn't remember where I knew it from :p.

Nice picture though, time to celebrate it with an avocado...

regards,

Kevin van der Vlist
Post by sgheeren
Woah! you made it to theDailyWTF!
http://thedailywtf.com/Articles/Calories-Math,-Exacting-Password-Requirements,-and-More.aspx
LOL!
Daniel Smedegaard Buus
2012-06-01 16:56:52 UTC
Post by sgheeren
Woah! you made it to theDailyWTF!
http://thedailywtf.com/Articles/Calories-Math,-Exacting-Password-Requirements,-and-More.aspx
LOL!
How did you make that match?!?! Nicely spotted :D
sgheeren
2012-06-02 11:31:44 UTC
Post by sgheeren
Woah! you made it to theDailyWTF!
http://thedailywtf.com/Articles/Calories-Math,-Exacting-Password-Requirements,-and-More.aspx
LOL!
Post by Daniel Smedegaard Buus
How did you make that match?!?! Nicely spotted :D
Erm, not many people share the name *Daniel "Smedegaard" Buus* :)