ZFS administration
Check pool status
 [root@pool5 ~]# zpool status
   pool: tank-2TB
  state: ONLINE
   scan: scrub in progress since Mon May 16 11:30:03 2016
     1.65T scanned out of 59.4T at 125M/s, 134h57m to go
     0 repaired, 2.79% done
 config:
	NAME                                        STATE     READ WRITE CKSUM
	tank-2TB                                    ONLINE       0     0     0
	  raidz3-0                                  ONLINE       0     0     0
	    scsi-36a4badb044e936001e55b25520b22e10  ONLINE       0     0     0
	    scsi-36a4badb044e936001e55b2111ca79173  ONLINE       0     0     0
	    scsi-36a4badb044e936001e55b26e2232640f  ONLINE       0     0     0
	    scsi-36a4badb044e936001e55b2622171ff0b  ONLINE       0     0     0
	    scsi-36a4badb044e936001e55b293245ae11b  ONLINE       0     0     0
	    scsi-36a4badb044e936001e55b2d128122551  ONLINE       0     0     0
	    scsi-36a4badb044e936001e55b2461fc3cb38  ONLINE       0     0     0
	    scsi-36a4badb044e936001e55b3142c0ac749  ONLINE       0     0     0
	    scsi-36a4badb044e936001e55b27d230ce0f6  ONLINE       0     0     0
	    scsi-36a4badb044e936001e55b2b426603fbe  ONLINE       0     0     0
	    scsi-36a4badb044e936001e55b2f42a2e3006  ONLINE       0     0     0
	    scsi-36a4badb044e936001e55b2c4274ec795  ONLINE       0     0     0
	    scsi-36a4badb044e936001e55b3082b59c1da  ONLINE       0     0     0
	    scsi-36a4badb044e936001e55b31f2cbeb648  ONLINE       0     0     0
	    scsi-36a4badb044e936001e55b33b2e6172a4  ONLINE       0     0     0
	    scsi-36a4badb044e936001e55b34c2f70184c  ONLINE       0     0     0
	    scsi-36a4badb044e936001e55b3793218970b  ONLINE       0     0     0
	    scsi-36a4badb044e936001e55b36530e4cb2a  ONLINE       0     0     0
	    scsi-36a4badb044e936001e55b39033741ccf  ONLINE       0     0     0
	    scsi-36a4badb044e936001e830e00b16015f4  ONLINE       0     0     0
	    scsi-36a4badb044e936001e55b38532cf8c68  ONLINE       0     0     0
	    scsi-36a4badb044e936001e55b3f839a79c83  ONLINE       0     0     0
	    scsi-36a4badb044e936001e55b4023a46ae2a  ONLINE       0     0     0
	    scsi-36a4badb044e936001e55b3eb38df509e  ONLINE       0     0     0
	    scsi-36a4badb044e936001e55b42b3cb239cd  ONLINE       0     0     0
	    scsi-36a4badb044e936001e55b41d3bdd5a86  ONLINE       0     0     0
	    scsi-36a4badb044e936001e55b40f3b0100cb  ONLINE       0     0     0
	    scsi-36a4badb044e936001e55b4363d55d784  ONLINE       0     0     0
	    scsi-36a4badb044e936001e55b3de3822569a  ONLINE       0     0     0
	    scsi-36a4badb044e936001e55b4413e09a9f1  ONLINE       0     0     0
	    scsi-36a4badb044e936001e55b44e3ecc77ea  ONLINE       0     0     0
	    scsi-36a4badb044e936001e55b358301ee6a2  ONLINE       0     0     0
	    scsi-36a4badb044e936001e830de6afdb1d8f  ONLINE       0     0     0
	    scsi-36a4badb044e936001e830df1b0805a46  ONLINE       0     0     0
	    scsi-36a4badb044e936001e830dfab1049dc8  ONLINE       0     0     0
 errors: No known data errors
- Read: number of read errors
- Write: number of write errors
- Cksum: number of blocks whose data read from disk does not match their checksum
- the counters usually only increase during a scrub, not when an error occurs during normal operation
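As a quick check, "zpool status -x" prints detail only for pools that have problems, and "zpool clear" resets the Read/Write/Cksum counters once a fault has been dealt with. A minimal sketch, using the tank-2TB pool from the output above:
 # print detail only for pools with errors ("all pools are healthy" otherwise)
 zpool status -x
 # reset the READ/WRITE/CKSUM counters after a fault has been handled
 zpool clear tank-2TB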
 
 
Checking read/write/IOPS
[root@pool5 ~]# zpool iostat 1 5
               capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
tank-2TB    59.4T  4.13T    407    656  47.9M  37.2M
tank-2TB    59.4T  4.13T  1.42K      0   167M      0
tank-2TB    59.4T  4.13T  1.54K      0   181M      0
tank-2TB    59.4T  4.13T  1.55K      0   182M      0
tank-2TB    59.4T  4.13T  1.45K      0   173M      0
- the first line is the average since the pool was switched on or imported
- it can be suppressed with the option "-y"
- units are "per second"
- can be combined with per-disk statistics: zpool iostat -v tank-2TB 1 5 (see the sketch below)
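The two options can be combined; a small sketch, again with the tank-2TB pool from the output above:
 # per-disk statistics, skipping the since-boot summary line; sample every second, 5 samples
 zpool iostat -vy tank-2TB 1 5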
 
Sharing via NFS
Prerequisites on CentOS 7. If you're not rebooting, start the services by hand as sketched below.
yum install -y nfs-utils
systemctl enable zfs.target
systemctl enable zfs-import-cache.service
systemctl enable zfs-mount.service
systemctl enable zfs-share.service
systemctl enable nfs-server
firewall-cmd --permanent --zone=work --add-service=nfs
firewall-cmd --reload
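One possible way to bring the same units up immediately, assuming the pool is already imported and mounted on the running system:
 # start the NFS server and the ZFS share service without a reboot
 systemctl start nfs-server
 systemctl start zfs-share.service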
To share via NFS, use the 'zfs set' command with a list of hosts/networks. This injects the details into the NFS exports table.
zfs create tank/data
chmod 1777 /tank/data
zfs set sharenfs="rw=@physics*.example.ac.uk,rw=@foobar.example.ac.uk,rw=@172.16.0.0/24" tank/data
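To confirm what was injected, something along these lines should work on the server:
 # show the share options ZFS has recorded for the dataset
 zfs get sharenfs tank/data
 # show what the kernel NFS server is actually exporting
 exportfs -v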
The filesystem should now be mountable on the client.
mount zfsserver:/tank/data /mnt
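For a mount that survives client reboots, one option is an /etc/fstab entry along these lines (the mount point /mnt is just the example used above):
 # /etc/fstab on the client: wait for the network before mounting
 zfsserver:/tank/data  /mnt  nfs  defaults,_netdev  0 0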
Other helpful commands
- list all ZFS filesystems: zfs list
- list all zpools: zpool list
- history of all commands: zpool history
- list all events that have happened: zpool events
- scan the whole zpool for bad data: zpool scrub tank-2TB
- every block of data, metadata, and ditto block is compared against its checksum
- a very long and I/O-intensive process
- but it runs with low priority during normal production access
 
- best to set up a cron job so the scrub runs automatically once every month or two (see the sketch at the end of this section)
 
- get the compression ratio: zfs get compressratio
- import a zpool: zpool import NAME
- import all available zpools: zpool import -a
- see current cache statistics: arcstat.py 1
 
- zfs and zpool have very good man pages
- man zpool
- man zfs
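A possible cron job for the monthly scrub mentioned above (the schedule is arbitrary; tank-2TB is the pool used throughout this page):
 # /etc/cron.d/zfs-scrub: scrub the pool at 02:00 on the first day of every month
 0 2 1 * * root /usr/sbin/zpool scrub tank-2TB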