Discussion:
No space left on device but zpool shows only 69% usage
rolo
2011-10-27 13:45:40 UTC
Hi,

after running for a year, the ZFS filesystem has unexpectedly filled up. zpool reports 421G of the 1.33T pool as free (69% capacity used). As far as I know, only 1/64th of the pool is reserved for allocation efficiency, yet the ZFS filesystems report themselves as completely full.

I have no idea where this limitation comes from.
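(For scale: the 1/64 allocation reserve on a 1.33T pool would only be about 21G, nowhere near the 421G the pool still reports as free:)

# echo "scale=2; 1.33 * 1024 / 64" | bc
21.28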

# df -h
Filesystem            Size  Used Avail Use% Mounted on
zfs                    20G   20G     0 100% /zfs
zfs/drbd_backup        26T   26T     0 100% /zfs/drbd_backup
zfs/vboximg_backup    136G  136G     0 100% /zfs/vboximg_backup


OS: RHEL5 (Linux host 2.6.18-274.el5 #1 SMP Fri Jul 8 17:36:59 EDT 2011 x86_64 x86_64 x86_64 GNU/Linux)

ZFS Version: zfs-fuse-0.6.9_p1-6.20100709git.el5.1 (from epel archive)

ZFS Parameters:

# zpool list
NAME   SIZE  ALLOC   FREE    CAP   DEDUP  HEALTH  ALTROOT
zfs   1.33T   939G   421G    69%  29.48x  ONLINE  -

# zpool status
  pool: zfs
 state: ONLINE
 scrub: none requested
config:

        NAME                  STATE     READ WRITE CKSUM
        zfs                   ONLINE       0     0     0
          mpath/zfs-backup-1  ONLINE       0     0     0

errors: No known data errors

# zpool get all zfs
NAME  PROPERTY       VALUE                SOURCE
zfs   size           1.33T                -
zfs   capacity       69%                  -
zfs   altroot        -                    default
zfs   health         ONLINE               -
zfs   guid           5648716826252037221  default
zfs   version        23                   default
zfs   bootfs         -                    default
zfs   delegation     on                   default
zfs   autoreplace    off                  default
zfs   cachefile      -                    default
zfs   failmode       wait                 default
zfs   listsnapshots  on                   local
zfs   autoexpand     off                  default
zfs   dedupditto     0                    default
zfs   dedupratio     29.48x               -
zfs   free           421G                 -
zfs   allocated      939G                 -

# zfs list -r -t all
NAME                 USED  AVAIL  REFER  MOUNTPOINT
zfs                 25.9T      0  19.1G  /zfs
zfs/drbd_backup     25.8T      0  25.8T  /zfs/drbd_backup
zfs/vboximg_backup   135G      0   135G  /zfs/vboximg_backup

# zfs get all zfs
NAME  PROPERTY              VALUE                  SOURCE
zfs   type                  filesystem             -
zfs   creation              Tue Sep 28 16:04 2010  -
zfs   used                  25.9T                  -
zfs   available             0                      -
zfs   referenced            19.1G                  -
zfs   compressratio         1.57x                  -
zfs   mounted               yes                    -
zfs   quota                 none                   default
zfs   reservation           none                   default
zfs   recordsize            128K                   default
zfs   mountpoint            /zfs                   default
zfs   sharenfs              off                    default
zfs   checksum              on                     default
zfs   compression           on                     local
zfs   atime                 on                     default
zfs   devices               on                     default
zfs   exec                  on                     default
zfs   setuid                on                     default
zfs   readonly              off                    default
zfs   zoned                 off                    default
zfs   snapdir               visible                local
zfs   aclmode               groupmask              default
zfs   aclinherit            restricted             default
zfs   canmount              on                     default
zfs   xattr                 on                     default
zfs   copies                1                      default
zfs   version               4                      -
zfs   utf8only              off                    -
zfs   normalization         none                   -
zfs   casesensitivity       sensitive              -
zfs   vscan                 off                    default
zfs   nbmand                off                    default
zfs   sharesmb              off                    default
zfs   refquota              none                   default
zfs   refreservation        none                   default
zfs   primarycache          metadata               local
zfs   secondarycache        metadata               local
zfs   usedbysnapshots       0                      -
zfs   usedbydataset         19.1G                  -
zfs   usedbychildren        25.9T                  -
zfs   usedbyrefreservation  0                      -
zfs   logbias               latency                default
zfs   dedup                 on                     local
zfs   mlslabel              on                     -

# zdb -DD zfs
DDT-sha256-zap-duplicate: 6459147 entries, size 314 on disk, 164 in core
DDT-sha256-zap-unique: 12140196 entries, size 315 on disk, 169 in core

DDT histogram (aggregated over all DDTs):

bucket              allocated                       referenced
______   ______________________________   ______________________________
refcnt   blocks   LSIZE   PSIZE   DSIZE   blocks   LSIZE   PSIZE   DSIZE
------   ------   -----   -----   -----   ------   -----   -----   -----
     1    11.6M   1.45T    459G    459G    11.6M   1.45T    459G    459G
     2    1.48M    190G   83.3G   83.3G    3.41M    437G    195G    195G
     4     873K    109G   49.7G   49.7G    4.53M    580G    255G    255G
     8     339K   42.4G   20.0G   20.0G    3.69M    473G    221G    221G
    16     581K   72.6G   34.6G   34.6G    14.4M   1.80T    894G    894G
    32    1.24M    158G   98.4G   98.4G    62.2M   7.78T   4.92T   4.92T
    64    1.47M    188G    136G    136G     158M   19.8T   14.4T   14.4T
   128     154K   19.2G   13.5G   13.5G    29.9M   3.73T   2.65T   2.65T
   256    62.5K   7.81G   4.60G   4.60G    22.2M   2.78T   1.62T   1.62T
   512    5.13K    657M    301M    301M    3.14M    402G    183G    183G
    1K      166   20.8M   7.49M   7.49M     234K   29.2G   10.5G   10.5G
    2K       48      6M   1.84M   1.84M     123K   15.3G   4.43G   4.43G
    4K       25   3.12M    366K    366K     124K   15.5G   1.76G   1.76G
    8K        4    512K     27K     27K    49.0K   6.12G    341M    341M
   16K        3    384K   18.5K   18.5K    62.1K   7.76G    381M    381M
    2M        2    256K     18K     18K    6.03M    772G   54.3G   54.3G
    4M        1    128K   4.50K   4.50K    6.30M    806G   28.3G   28.3G
 Total    17.7M   2.22T    899G    899G     326M   40.8T   25.9T   25.9T

dedup = 29.49, compress = 1.57, copies = 1.00, dedup * compress / copies = 46.42
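(As a rough cross-check, both summary ratios can be reproduced from the Total row above, assuming the usual definitions: dedup is referenced DSIZE over allocated DSIZE, compress is referenced LSIZE over referenced PSIZE:)

# echo "scale=2; (25.9 * 1024) / 899" | bc   # 25.9T referenced / 899G allocated
29.50
# echo "scale=2; 40.8 / 25.9" | bc           # 40.8T LSIZE / 25.9T PSIZE
1.57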

Kind Regards, Roland
sgheeren
2011-10-27 21:04:48 UTC
Roland, I honestly have no idea. I have given all the options a good look. I would check the following things, in order (a rough sketch of the first two tests follows below):

- how do you fare with `dd if=/dev/zero bs=128k of=somenewfile.img` (tests maximum compression / holey files)
- how do you fare when duplicating known existing files (do deduped files still get created?)
- what gets reported when the pool is imported on OpenSolaris (you'd want to rig up an OpenSolaris-derived virtual machine)

I know that's a lot of work, but I don't have any smarter leads from what you're showing (no volumes, no quotas, no reservations, no snapshots, no clones, no raidz-n, no copies=x, etc.).
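Something along these lines, assuming the pool is mounted at /zfs (the file names are just placeholders):

# test 1: a stream of zeros should compress away to almost nothing,
# so with compression=on this write may succeed even on a "full" fs
dd if=/dev/zero of=/zfs/zerotest.img bs=128k count=1000

# test 2: a copy of an existing file should dedup against the original
# and consume almost no additional pool space
cp /zfs/vboximg_backup/some-image.img /zfs/vboximg_backup/some-image.copy
zpool list zfs   # compare ALLOC before and after the copy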

Cheers,
Seth
Fajar
2011-11-06 13:34:12 UTC
Has the OP got this resolved?
I'm curious too.
Post by rolo
after running for a year, the ZFS filesystem has unexpectedly filled up. zpool reports 421G of the 1.33T pool as free (69% capacity used), but the ZFS filesystems report themselves as completely full.
[...]
rolo
2011-11-07 10:42:51 UTC
No, the issue is still present and I'm investigating it.
I intend to run the tests Seth suggested, but the expire procedure has freed up a lot of space and I have to wait until the pool fills up again.


I am also copying the 96T of data to a newly created zfs-fuse pool on a different server for more tests. This is currently ongoing and will complete in roughly 10 days.
So far about 30T have been copied over, and I already see far more space "lost" than the 1/64 pool reserve:

# zpool list; zfs list
NAME         SIZE  ALLOC   FREE    CAP   DEDUP  HEALTH  ALTROOT
zfsbakpool   984G   440G   544G    44%  19.57x  ONLINE  -
NAME         USED  AVAIL  REFER  MOUNTPOINT
zfsbakpool  8.18T   405G  8.17T  /zfsbakpool

My observation is that as soon as the deduplication factor increases, the gap in free space starts to grow as well.
BTW: the filesystem content mostly consists of daily copies of the block devices of virtual machines: about 1000 files, averaging roughly 40G each.
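(To watch the gap grow, I log the pool-level and dataset-level numbers side by side, roughly like this:)

# append an hourly timestamped snapshot of pool-level vs dataset-level
# space; the "lost" space is pool FREE minus dataset AVAIL
while true; do
    date
    zpool list -H -o name,allocated,free,dedupratio zfsbakpool
    zfs list -H -o name,used,available zfsbakpool
    sleep 3600
done >> space-gap.log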

Kind Regards, Roland
Post by Fajar
Has the OP got this resolved?
I'm curious too.