Zfs
Here is how to work with ZFS.
== Beginning ZFS instances ==
There are only two commands needed to interact with ZFS.

zpool: used to create and manage ZFS pools. A pool is built from vdevs (virtual devices), and vdevs are composed of physical devices.

zfs: used to create and interact with ZFS datasets. ZFS datasets are akin to logical volumes.
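As a minimal sketch of that division of labor (hypothetical pool and device names, not from any of our machines): build the pool with zpool, then carve datasets out of it with zfs.

 # build a pool named "tank" from one mirrored vdev (hypothetical devices)
 zpool create tank mirror /dev/sdb /dev/sdc
 # carve a dataset out of the pool and give it a mountpoint
 zfs create tank/data
 zfs set mountpoint=/export/data tank/data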
 # zpool creation syntax
 zpool create <poolname> <vdev(s)>

 # Create a zpool of six raidz2 vdevs, each with six drives. Includes two SSDs
 # to be used as a mirrored SLOG and one SSD as an L2ARC read cache.
 # (example command was run on qof)
 zpool create ex9 raidz2 sda sdb sdc sdd sde sdf raidz2 sdg sdh sdi sdj sdk sdl raidz2 sdm sdn sdo sdp sdq sdr raidz2 sds sdt sdu sdv sdw sdx raidz2 sdy sdz sdaa sdab sdac sdad raidz2 sdae sdaf sdag sdah sdai sdaj log mirror ata-INTEL_SSDSC2KG480G7_BTYM740603E0480BGN ata-INTEL_SSDSC2KG480G7_BTYM7406019K480BGN cache ata-INTEL_SSDSC2KG480G7_BTYM740602GN480BGN

 [root@qof ~]# zpool status
   pool: ex9
  state: ONLINE
   scan: none requested
 config:

         NAME                                          STATE     READ WRITE CKSUM
         ex9                                           ONLINE       0     0     0
           raidz2-0                                    ONLINE       0     0     0
             sda                                       ONLINE       0     0     0
             sdb                                       ONLINE       0     0     0
             sdc                                       ONLINE       0     0     0
             sdd                                       ONLINE       0     0     0
             sde                                       ONLINE       0     0     0
             sdf                                       ONLINE       0     0     0
           raidz2-1                                    ONLINE       0     0     0
             sdg                                       ONLINE       0     0     0
             sdh                                       ONLINE       0     0     0
             sdi                                       ONLINE       0     0     0
             sdj                                       ONLINE       0     0     0
             sdk                                       ONLINE       0     0     0
             sdl                                       ONLINE       0     0     0
           raidz2-2                                    ONLINE       0     0     0
             sdm                                       ONLINE       0     0     0
             sdn                                       ONLINE       0     0     0
             sdo                                       ONLINE       0     0     0
             sdp                                       ONLINE       0     0     0
             sdq                                       ONLINE       0     0     0
             sdr                                       ONLINE       0     0     0
           raidz2-3                                    ONLINE       0     0     0
             sds                                       ONLINE       0     0     0
             sdt                                       ONLINE       0     0     0
             sdu                                       ONLINE       0     0     0
             sdv                                       ONLINE       0     0     0
             sdw                                       ONLINE       0     0     0
             sdx                                       ONLINE       0     0     0
           raidz2-4                                    ONLINE       0     0     0
             sdy                                       ONLINE       0     0     0
             sdz                                       ONLINE       0     0     0
             sdaa                                      ONLINE       0     0     0
             sdab                                      ONLINE       0     0     0
             sdac                                      ONLINE       0     0     0
             sdad                                      ONLINE       0     0     0
           raidz2-5                                    ONLINE       0     0     0
             sdae                                      ONLINE       0     0     0
             sdaf                                      ONLINE       0     0     0
             sdag                                      ONLINE       0     0     0
             sdah                                      ONLINE       0     0     0
             sdai                                      ONLINE       0     0     0
             sdaj                                      ONLINE       0     0     0
         logs
           mirror-6                                    ONLINE       0     0     0
             ata-INTEL_SSDSC2KG480G7_BTYM740603E0480BGN  ONLINE     0     0     0
             ata-INTEL_SSDSC2KG480G7_BTYM7406019K480BGN  ONLINE     0     0     0
         cache
           ata-INTEL_SSDSC2KG480G7_BTYM740602GN480BGN  ONLINE       0     0     0
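One caveat with the example above: it mixes bare sdX names with persistent ata- IDs. sdX names can be reshuffled by the kernel across reboots, so the names under /dev/disk/by-id are generally safer for pool members. A hedged sketch (the IDs below are made-up placeholders):

 # map persistent IDs to the current sdX names
 ls -l /dev/disk/by-id/
 # then create the pool using the IDs instead (placeholder IDs)
 zpool create tank mirror ata-EXAMPLEDISK_SERIAL0001 ata-EXAMPLEDISK_SERIAL0002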
== situation ==
Commands for checking the current state:

 zpool status   # pool health and vdev layout
 zfs list       # datasets, space used, and mountpoints
 zfs get all    # every property of every dataset
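zfs get all is verbose; individual properties can be queried by name, and zpool list -v gives a quick per-vdev capacity summary. For example, against the ex9 pool from above:

 zfs get compression,mountpoint,used,available ex9
 zpool list -v ex9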
== mount after reboot ==
 # the mountpoint property persists, so the dataset is remounted there automatically
 zfs set mountpoint=/export/db2 db2
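To check that the dataset will really come back after a reboot, the mountpoint, canmount, and mounted properties can be inspected (db2 here is the dataset from the command above):

 zfs get mountpoint,canmount,mounted db2
 df -h /export/db2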
== when you put in a new disk ==
 # list all disks to see what is new
 fdisk -l
 sudo zpool create -f /srv/db3 raidz2 /dev/sdaa /dev/sdab /dev/sdac /dev/sdad /dev/sdae /dev/sdaf /dev/sdag /dev/sdah /dev/sdai /dev/sdaj /dev/sdak /dev/sdal
 sudo zpool add -f /srv/db3 raidz2 /dev/sdam /dev/sdan /dev/sdao /dev/sdap /dev/sdaq /dev/sdar /dev/sdas /dev/sdat /dev/sdau /dev/sdav /dev/sdaw /dev/sdax

(Note: '/' is not a valid character in a pool name, so '/srv/db3' as written will be rejected; the corrected commands are under latest below.)
 zfs unmount db3
 zfs mount db3
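After adding the second raidz2 vdev, the extra capacity should appear immediately; one way to verify (db3 as above):

 zpool list db3     # SIZE should now cover both vdevs
 zpool status db3   # both raidz2-0 and raidz2-1 should be ONLINE
 zfs list db3       # AVAIL grows accordingly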
== latest ==
 zpool create -f db3 raidz2 /dev/sdy /dev/sdz /dev/sdaa /dev/sdab /dev/sdac /dev/sdad /dev/sdae /dev/sdaf /dev/sdag /dev/sdah /dev/sdai /dev/sdaj
 zpool add -f db3 raidz2 /dev/sdak /dev/sdal /dev/sdam /dev/sdan /dev/sdao /dev/sdap /dev/sdaq /dev/sdar /dev/sdas /dev/sdat /dev/sdau /dev/sdav

 zpool create -f db4 raidz2 /dev/sdax /dev/sday /dev/sdaz /dev/sdba /dev/sdbb /dev/sdbc /dev/sdbd /dev/sdbe /dev/sdbf /dev/sdbg /dev/sdbh /dev/sdbi
 zpool add -f db4 raidz2 /dev/sdbj /dev/sdbk /dev/sdbl /dev/sdbm /dev/sdbn /dev/sdbo /dev/sdbp /dev/sdbq /dev/sdbr /dev/sdbs /dev/sdbt /dev/sdbu
=== Fri Jan 19 2018 ===
 zpool create -f db5 raidz2 /dev/sdbw /dev/sdbx /dev/sdby /dev/sdbz /dev/sdca /dev/sdcb /dev/sdcc /dev/sdcd /dev/sdce /dev/sdcf /dev/sdcg /dev/sdch
 zpool add -f db5 raidz2 /dev/sdci /dev/sdcj /dev/sdck /dev/sdcl /dev/sdcm /dev/sdcn /dev/sdco /dev/sdcp /dev/sdcq /dev/sdcr /dev/sdcs /dev/sdct
 zfs mount db5
=== Wed Jan 24 2018 ===
=== On tsadi ===
 zpool create -f ex1 mirror /dev/sdaa /dev/sdab /dev/sdac /dev/sdad /dev/sdae
 zpool add -f ex1 mirror /dev/sdaf /dev/sdag /dev/sdah /dev/sdai /dev/sdaj
 zpool create -f ex2 mirror /dev/sdf /dev/sdg /dev/sdh /dev/sdi /dev/sdj
 zpool add -f ex2 /dev/sdk /dev/sdl /dev/sdm /dev/sdn /dev/sdo
 zpool create -f ex3 mirror /dev/sdp /dev/sdq /dev/sdr /dev/sds /dev/sdt
 zpool add -f ex3 mirror /dev/sdu /dev/sdv /dev/sdw /dev/sdx /dev/sdy
 zpool create -f ex4 mirror /dev/sdz /dev/sdak /dev/sdal
 zpool add -f ex4 mirror /dev/sdam /dev/sdan /dev/sdao
=== On tsadi (redone) ===
 zpool create -f ex1 mirror /dev/sdaa /dev/sdab mirror /dev/sdac /dev/sdad mirror /dev/sdae /dev/sdaf mirror /dev/sdag /dev/sdah mirror /dev/sdai /dev/sdaj
 zpool create -f ex2 mirror /dev/sdf /dev/sdg mirror /dev/sdh /dev/sdi mirror /dev/sdj /dev/sdk mirror /dev/sdl /dev/sdm mirror /dev/sdn /dev/sdo
 zpool create -f ex3 mirror /dev/sdp /dev/sdq mirror /dev/sdr /dev/sds mirror /dev/sdt /dev/sdu mirror /dev/sdv /dev/sdw mirror /dev/sdx /dev/sdy
 zpool create -f ex4 mirror /dev/sdz /dev/sdak /dev/sdal mirror /dev/sdam /dev/sdan /dev/sdao
=== On lamed ===
 zpool create -f ex5 mirror /dev/sdaa /dev/sdab mirror /dev/sdac /dev/sdad mirror /dev/sdae /dev/sdaf mirror /dev/sdag /dev/sdah mirror /dev/sdai /dev/sdaj
 zpool create -f ex6 mirror /dev/sda /dev/sdb mirror /dev/sdc /dev/sdd mirror /dev/sde /dev/sdf mirror /dev/sdg /dev/sdh mirror /dev/sdi /dev/sdj
 zpool create -f ex7 mirror /dev/sdk /dev/sdl mirror /dev/sdm /dev/sdn mirror /dev/sdo /dev/sdp mirror /dev/sdq /dev/sdr mirror /dev/sds /dev/sdt
 zpool create -f ex8 mirror /dev/sdu /dev/sdv mirror /dev/sdw /dev/sdx mirror /dev/sdy /dev/sdz
 zfs mount    # with no arguments, lists the ZFS filesystems that are currently mounted
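With striped mirror pairs like these, a failed disk can be swapped while the pool stays online. A hedged sketch (hypothetical failed and replacement device names):

 zpool status ex5                       # identify the FAULTED/UNAVAIL device
 zpool replace ex5 /dev/sdab /dev/sdba  # resilver onto the replacement disk
 zpool status ex5                       # watch resilver progress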
== recovery from accidental pool destruction ==
 umount /mnt /mnt2
 mdadm -S /dev/md125 /dev/md126 /dev/md127
 # dump both partition tables, then copy sdb's table onto sda
 sfdisk -d /dev/sda > sda.sfdisk
 sfdisk -d /dev/sdb > sdb.sfdisk
 sfdisk /dev/sda < sdb.sfdisk
 mdadm --detail /dev/md127
 mdadm -A -R /dev/md127 /dev/sdb2 /dev/sda2
 mdadm /dev/md127 -a /dev/sda2
 mdadm --detail /dev/md127
 echo check > /sys/block/md127/md/sync_action
 cat /proc/mdstat
 mdadm --detail /dev/md126
 mdadm -A -R /dev/md126 /dev/sdb3 /dev/sda3
 mdadm /dev/md126 -a /dev/sda3
 mdadm --detail /dev/md126
 echo check > /sys/block/md126/md/sync_action
 cat /proc/mdstat
Also switched the BIOS to boot from hd2 instead of hd1 (or something to that effect).
- Recreate the zpool with the correct drives
- Point an instance of photorec at each of the wiped drives, set to recover files of the following types: .gz, .solv (custom definition); see the sketch below
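photorec can be pointed at a device from the command line; the file-type selection happens in its interactive menus, and a custom signature such as .solv would presumably have been supplied via a photorec.sig file. A sketch with hypothetical paths:

 # recovered files land in recup_dir.* under /recovery/sda/
 photorec /log /d /recovery/sda /dev/sda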
NOTE: If you destroyed your pool with 'zpool destroy', you can use 'zpool import -D' to list destroyed pools and recover one with 'zpool import -D <pool name>'.