ZFS - Zettabyte Filesystem
Introduction

ZFS (originally the Zettabyte File System) is a combined filesystem and volume manager. It is a strong choice for redundant data storage and one of the best ways to build RAID setups.

Installation

  1. Install CentOS 7. Make sure it has internet access.
  2. Create a Foreman entry at https://foreman.ucsf.bkslab.org/hosts
  3. Become root and run these commands. They install all the necessary packages (including ZFS) and enable all the required firewall rules. A quick verification check follows the script.
    • #!/bin/bash
      
      # base packages, configuration management, and directory/auth services
      yum install epel-release -y
      yum update -y
      yum install puppet -y
      yum install sssd -y
      yum install nss-pam-ldapd -y
      yum install oddjob-mkhomedir -y
      systemctl start oddjobd
      systemctl enable oddjobd
      puppet agent -t
      
      # ZFS repository and packages (the release RPM must match the installed CentOS minor version)
      yum install https://zfsonlinux.org/epel/zfs-release.el7_9.noarch.rpm -y
      yum install zfs -y
      
      # NFS and ZFS services, plus the firewall rules needed for NFS exports
      systemctl start nfs
      systemctl enable nfs
      systemctl start zfs.target
      systemctl enable zfs.target
      systemctl start zfs-import-cache.service
      systemctl enable zfs-import-cache.service
      firewall-cmd --permanent --add-service=nfs
      firewall-cmd --permanent --add-service=mountd
      firewall-cmd --permanent --add-service=rpc-bind
      firewall-cmd --reload
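
After the script finishes, it is worth confirming that the ZFS kernel module loaded and the userland tools work (a minimal check; version strings will differ per install):

 # load the module if it is not already loaded, then confirm module and tools
 modprobe zfs
 lsmod | grep zfs
 modinfo zfs | grep -iw version
 zpool list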

How to Create the Zpool

  1. First, determine which drives are the SSDs, because those will serve as the log and cache devices for the ZFS pool. Use the SSDs that were not used for the OS.
    • lsblk -o NAME,SIZE,SERIAL,LABEL,FSTYPE
    • In this example, 2 x 240GB SSDs are configured as mirrors of each other for the log and 1 x 480GB SSD is used for the cache.
  2. Let's now create the pool:
    • zpool create <options> <name-of-pool> raidz2 <hdd-1> <hdd-2> <hdd-3> <hdd-4> <hdd-5> <hdd-6>\
                                            raidz2 <hdd-7> <hdd-8> <hdd-9> <hdd-10> <hdd-11> <hdd-12>\
                                            raidz2 <hdd-13> <hdd-14> <hdd-15> <hdd-16> <hdd-17> <hdd-18>\
                                            raidz2 <hdd-19> <hdd-20> <hdd-21> <hdd-22> <hdd-23> <hdd-24>\
                                            raidz2 <hdd-25> <hdd-26> <hdd-27> <hdd-28> <hdd-29> <hdd-30>\
                                            raidz2 <hdd-31> <hdd-32> <hdd-33> <hdd-34> <hdd-35> <hdd-36>\
                                            log mirror <ssd-1> <ssd-2>
                                            cache <ssd-3>
  3. Here is an example of how to create a zpool:
    • zpool create -f exj raidz2 sdf sdg sdh sdi sdj sdk\
                          raidz2 sdl sdm sdn sdo sdp sdq\
                          raidz2 sdr sds sdt sdu sdv sdw\
                          raidz2 sdx sdy sdz sdaa sdab sdac\
                          raidz2 sdad sdae sdaf sdag sdah sdai\
                          raidz2 sdaj sdak sdal sdam sdan sdao\
                          log mirror sdc sdd\
                          cache sde
  4. Once the zpool has been created, double-check it with one of these commands
    • zfs list
      zpool status
  5. Here comes a weird part. Locating broken disks is hard with the 'sd*' naming convention, since those device names can change with every reboot. Therefore, we switch the pool over to disk IDs:
    • zpool export <zpool-name>
      zpool import -d /dev/disk/by-id -aN
  6. Now, mount the zpool into a directory in the machine. We usually create a new directory called '/export/<zpool-name>'.
    • zfs set mountpoint=/export/<zpool-name> <zpool-name>
  7. For example
    • zfs set mountpoint=/export/exa exa
  8. Double-check that the zpool mounted at the directory by checking the disk space there
    • df -h /export/<zpool-name>
  9. Lastly, reboot the machine to check that the zpool mounts automatically. If it doesn't, run the following once the machine is back up (see the verification sketch after this list):
    • modprobe zfs
      zpool import -a
  10. Then reboot again and it should remount itself.
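
After the final reboot, a quick way to confirm that the pool imported by disk ID and mounted where expected (a minimal sketch; replace <zpool-name> with your pool):

 # devices should now be listed by ID rather than by sdX name
 zpool status <zpool-name>
 # the pool should be imported, healthy, and mounted under /export
 zpool list <zpool-name>
 df -h /export/<zpool-name>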

Exporting the Zpool to the Cluster

  1. Add the rules on where to export the zpool
    • vim /etc/exports
  2. Then, add this inside. Replace <> with the respective information.
    • /export/<zpool-name>     10.20.0.0/16(rw,async,fsid=<unused-id>,no_subtree_check) \
                               169.230.26.0/24(rw,async,fsid=<unused-id>,no_subtree_check) \
                               169.230.90.0/24(rw,async,fsid=<unused-id>,no_subtree_check) \
                               169.230.91.0/24(rw,async,fsid=<unused-id>,no_subtree_check) \
                               169.230.92.0/24(rw,async,fsid=<unused-id>,no_subtree_check)
  3. For example
    • /export/exl     10.20.0.0/16(rw,async,fsid=547,no_subtree_check) \
                      169.230.26.0/24(rw,async,fsid=547,no_subtree_check) \
                      169.230.90.0/24(rw,async,fsid=547,no_subtree_check) \
                      169.230.91.0/24(rw,async,fsid=547,no_subtree_check) \
                      169.230.92.0/24(rw,async,fsid=547,no_subtree_check)
  4. Export the rules
    • exportfs -a
  5. From another machine, check that the export rules are applied
    • showmount -e <machine-name>
  6. Then follow the PuppetTricks guide ("Adding new mount point to Puppet") to add this new machine to the puppet module. A quick client-side mount test is sketched below.
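
Before wiring the export into puppet, it can be test-mounted by hand from a client (a sketch only; the host name, pool name, and /mnt/test mount point are placeholders):

 # on a client machine
 showmount -e <machine-name>
 sudo mkdir -p /mnt/test
 sudo mount -t nfs <machine-name>:/export/<zpool-name> /mnt/test
 df -h /mnt/test
 sudo umount /mnt/test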

Tuning ZFS options

 # store extended attributes as system attributes to improve performance
 $ zfs set xattr=sa <zfs dataset name>
 
 # Turn on ZFS lz4 compression.  Use this for compressible datasets, such as many text files
 $ zfs set compression=lz4 <zfs dataset name>
 
 # Turn off access time updates for improved disk performance (so that the OS doesn't write a new time every time a file is accessed)
 $ zfs set atime=off <zfs dataset name>
 NOTE: ZFS performance degrades tremendously when the zpool is over 80% used.  To avoid this, I have set a quota to 80% of the 248TB in qof/nfs-ex9.
 # To set a quota of 200TB on a ZFS dataset:
 $ zfs set quota=200T <zfs dataset>
 # To remove a quota from a ZFS dataset:
 $ zfs set quota=none <zfs dataset>

By default, ZFS pools/mounts do not have ACLs active.

 # to activate access control lists on a zpool
 $ sudo zfs set acltype=posixacl <pool name>
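
To confirm the options took effect, the current values can be read back in one go (dataset name is a placeholder):

 $ zfs get xattr,compression,atime,quota,acltype <zfs dataset name>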

Checking Disk Health and Integrity

Print a brief summary of all pools:

zpool list

Print a detailed status of each disk and status of pool:

zpool status

Clear read errors on a disk, if they are nothing serious:

zpool clear <pool_name>

Check data integrity; this traverses all the data in the pool once and verifies that all blocks can be read:

zpool scrub <pool_name>

To stop scrub:

zpool scrub -s <pool_name>
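
Scrub progress and results show up under the "scan" line of zpool status. Scheduling a regular scrub from cron is common practice (a sketch; the schedule and pool name are placeholders):

 # check scrub progress / results
 zpool status <pool_name> | grep -A 2 scan
 # example root crontab entry: scrub at 02:00 on the first of each month
 0 2 1 * * /sbin/zpool scrub <pool_name>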

recovery from accidental pool destruction

umount /mnt /mnt2
mdadm -S /dev/md125 /dev/md126 /dev/md127
sfdisk -d /dev/sda < sda.sfdisk
sfdisk -d /dev/sdb < sdb.sfdisk
sfdisk /dev/sda < sdb.sfdisk
mdadm --detail /dev/md127
mdadm -A -R /dev/md127 /dev/sdb2 /dev/sda2
mdadm /dev/md127 -a /dev/sda2
mdadm --detail /dev/md127
echo check > /sys/block/md127/md/sync_action
cat /proc/mdstat
mdadm --detail /dev/md126
mdadm -A -R /dev/md126 /dev/sdb3 /dev/sda3
mdadm /dev/md126 -a /dev/sda3
mdadm --detail /dev/md126
echo check > /sys/block/md126/md/sync_action
cat /proc/mdstat

Also switched the bios to boot from hd2 instead of hd1 (or something)

  • Recreate zpool with correct drives
  • Point an instance of photorec at each of the wiped drives, set to recover files of the following types: .gz, .solv (custom definition)


NOTE: If you destroyed your zpool with the command 'zpool destroy', you can use 'zpool import -D' to list destroyed pools and recover one by running 'zpool import -D <zpool name>'.
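
For example (pool name is a placeholder):

 # list pools that were destroyed but whose disks are still intact
 $ zpool import -D
 # re-import a destroyed pool by name
 $ zpool import -D <zpool name>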

Troubleshooting

Panic!! The disk is full and I can't remove files!

If the ZFS pool somehow gets completely filled up, to the point that "rm" no longer works, don't panic. The disk may seem full, but ZFS actually keeps a small amount of space reserved for internal operations (why rm is not one of these, I don't know). The amount of space reserved for this purpose is controlled by the "spa_slop_shift" module parameter. You can read its value at /sys/module/zfs/parameters/spa_slop_shift

If you can't read the value of this parameter, you can estimate it from the ALLOC and FREE columns given by "zpool list", like so:

spa_slop_shift = floor(log2(ALLOC/FREE))

To free up a little bit more space on disk, you can increase the value of this parameter by one. Here's a real-world example of this, from when /nfs/exh filled up.

[root@nfs-exh ~]# df -h /nfs/exh
Filesystem           Size  Used Avail Use% Mounted on
nfs-exh:/export/exh  349T  349T   0B  100% /mnt/nfs/exh

[root@nfs-exh ~]# zpool list
NAME   SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
exh    524T   524T  95.9G        -         -    91%    99%  1.00x    ONLINE  -

[root@nfs-exh ~]# cat /sys/module/zfs/parameters/spa_slop_shift
12

[root@nfs-exh ~]# echo 13 > /sys/module/zfs/parameters/spa_slop_shift

[root@nfs-exh ~]# df -h /nfs/exh
Filesystem           Size  Used Avail Use% Mounted on
nfs-exh:/export/exh  349T  349T   22G 100% /mnt/nfs/exh

Once you've gotten a foothold on the disk and made some space, you should revert the spa_slop_shift parameter back to its original value.
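
In the example above the original value was 12, so reverting is simply (use whatever value your system showed before the change):

 [root@nfs-exh ~]# echo 12 > /sys/module/zfs/parameters/spa_slop_shift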

zpool destroy : Failed to unmount <device> - device busy

The help text will advise you to check lsof or fuser, but really what you need to do is stop the nfs service

 systemctl stop nfs
 umount /export/ex*
 zpool destroy ...
 zpool create ...
 zpool ...
 ...
 systemctl start nfs

zpool missing after reboot

This is usually because zfs-import-cache.service failed to start at boot time.

# check
$ systemctl status zfs-import-cache.service
# enable at boot time
$ systemctl enable zfs-import-cache.service
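
If the pool is still missing after enabling the service, it can be imported by hand and the cache file refreshed so the import persists across the next boot (a sketch; /etc/zfs/zpool.cache is the ZFS default cache file and the pool name is a placeholder):

 # import any pools that can be found, then record the pool in the cache file
 $ zpool import -a
 $ zpool set cachefile=/etc/zfs/zpool.cache <pool_name>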

Example: Fixing degraded pool, replacing faulted disk

On Feb 22, 2019, one of nfs-ex9's disks became faulty.

-bash-4.2$ zpool status
pool: ex9
state: DEGRADED
status: One or more devices are faulted in response to persistent errors.
	Sufficient replicas exist for the pool to continue functioning in a
	degraded state.
action: Replace the faulted device, or use 'zpool clear' to mark the device
	repaired.
  scan: scrub canceled on Fri Feb 22 11:31:25 2019
config:
         raidz2-5                                      DEGRADED     0     0     0
           sdae                                        ONLINE       0     0     0
           sdaf                                        ONLINE       0     0     0
           sdag                                        ONLINE       0     0     0
           sdah                                        FAULTED     18     0     0  too many errors
           sdai                                        ONLINE       0     0     0
           sdaj                                        ONLINE       0     0     0


I did the following:

-bash-4.2$ sudo zpool offline ex9 sdah

Then I went to the server room to see that disk 1 still had a red light due to the fault. I pulled the disk out and inserted a fresh one of the same brand, a Seagate Exos X12. The server detected the new disk and assigned it the name /dev/sdah, just like the one I had just pulled out. Finally, I ran the following command.

-bash-4.2$ sudo zpool replace ex9 /dev/sdah
-bash-4.2$ zpool status
 pool: ex9
state: DEGRADED
status: One or more devices is currently being resilvered.  The pool will
continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
 scan: resilver in progress since Tue Mar 19 14:06:33 2019
1.37G scanned out of 51.8T at 127M/s, 118h33m to go
37.9M resilvered, 0.00% done
.
.
.
	  raidz2-5                                      DEGRADED     0     0     0
   sdae                                        ONLINE       0     0     0
   sdaf                                        ONLINE       0     0     0
   sdag                                        ONLINE       0     0     0
   replacing-3                                 DEGRADED     0     0     0
     old                                       FAULTED     18     0     0  too many errors
     sdah                                      ONLINE       0     0     0  (resilvering)
   sdai                                        ONLINE       0     0     0
   sdaj                                        ONLINE       0     0     0

Resilvering is the process of a disk being rebuilt from its parity group. Once it is finished, you should be good to go again.
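
Resilver progress can be followed from the "scan" line of zpool status; the time estimate refines as it runs:

 # re-check periodically; the pool returns to ONLINE once resilvering completes
 $ zpool status ex9 | grep -A 2 scan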

Replace disk by disk ids

For zayin/nfs-exa, some of the disks are named by disk ID instead of the vdev name (sdX). It is recommended to use the disk ID, since vdev names like sdX can change after a reboot.

raidz2-4                  DEGRADED     0     0     0
scsi-35000c500a7da67cb  ONLINE       0     0     0
scsi-35000c500a7daa34f  ONLINE       0     0     0
scsi-35000c500a7db39db  FAULTED      0     0     0  too many errors
scsi-35000c500a7da6b97  ONLINE       0     0     0
scsi-35000c500a7da265b  ONLINE       0     0     0
scsi-35000c500a7da740f  ONLINE       0     0     0

In this case, we have to determine the ID of the new disk that was just inserted, using dmesg. Look for log entries mentioning a new disk:

$ dmesg -T | tail
[7819794.080935] scsi 0:0:40:0: Power-on or device reset occurred
[7819794.099111] sd 0:0:40:0: Attached scsi generic sg8 type 0
[7819794.100978] sd 0:0:40:0: [sdi] Spinning up disk...
[7819795.103622] ......................ready
[7819817.123255] sd 0:0:40:0: [sdi] 31251759104 512-byte logical blocks: (16.0 TB/14.5 TiB)
[7819817.123263] sd 0:0:40:0: [sdi] 4096-byte physical blocks
[7819817.128478] sd 0:0:40:0: [sdi] Write Protect is off
[7819817.128486] sd 0:0:40:0: [sdi] Mode Sense: df 00 10 08
[7819817.130308] sd 0:0:40:0: [sdi] Write cache: enabled, read cache: enabled, supports DPO and FUA
[7819817.165231] sd 0:0:40:0: [sdi] Attached SCSI disk

Check that the disk is properly recognized; the new disk should appear at the bottom of the output and should not have any partitions

$ fdisk -l
$ cd /dev/disk/by-id
$ ls -ltr | grep sdi
lrwxrwxrwx. 1 root root  9 Feb  7 13:29 scsi-35000c500d7947833 -> ../../sdi

Once the ID is determined, we start the resilvering process

$ zpool replace exa  scsi-35000c500a7db39db scsi-35000c500d7947833
# scsi-35000c500a7db39db is the id of the failed disk obtained from zpool status
# scsi-35000c500d7947833 is the id of the new replacement disk determined above

Disk LED light

Identify failed disk by LED light

By disk_id

# turn light off
$ ledctl locate_off=/dev/disk/by-id/<disk_id> 
# turn light on
$ ledctl locate=/dev/disk/by-id/<disk_id>
Example
$ ledctl locate_off=/dev/disk/by-id/scsi-35000c500a7d8137f 
$ ledctl locate=/dev/disk/by-id/scsi-35000c500a7d8137f 
$ ledctl locate=/dev/disk/by-partlabel/zfs-c34473d19032c002

By vdev

# turn light off
$ ledctl locate_off=/dev/<vdev>
# turn light on
$ ledctl locate=/dev/<vdev>
Example 
$ ledctl locate_off=/dev/sdaf
$ ledctl locate=/dev/sdaf

Reset light from LED light glitch

For qof/nfs-ex9, we had an issue with the disk LED for /dev/sdah still showing up red despite the resilvering occurring. To return the disk LED to a normal status, issue the following command:

$ sudo ledctl normal=/dev/<disk vdev id>
Example: $ sudo ledctl normal=/dev/sdah

or for zayin/nfs-exa, where disks are identified by ID:

$ sudo ledctl normal=/dev/disk/by-id/<disk id>
Example: $ sudo ledctl normal=/dev/disk/by-id/scsi-35000c500a7db39db

Check if pool is compressed

zfs get all | grep compression
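
A more targeted check, which also shows how well the data is compressing (pool or dataset name is a placeholder):

 zfs get compression,compressratio <pool name>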

Check User Usage in ZFS

zfs userspace <pool name>
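
The same accounting is available per group, and the output can be sorted by the "used" column (the -s option should be available on current ZFS releases; treat it as an assumption):

 # per-group usage
 zfs groupspace <pool name>
 # per-user usage sorted by the used column
 zfs userspace -s used <pool name>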