Solaris ZFS
Summary: How to work with Solaris and its ZFS filesystem.
Date: Around 2012
Refactor: 16 April 2025: Checked links and formatting.
ZFS is becoming Solaris's main file system. Nowadays you can use it not only for your data disks but also to boot from. Thanks to its flexibility it is not only used in Solaris, but is increasingly adopted by major players in the market. According to the people at Nexenta, ZFS is already bigger than NetApp's OnTap and EMC's Isilon combined. This article shows hands-on use of ZFS on Solaris systems.
Note that all the commands listed here are tested on Solaris 10 update 8.
Create a ZFS Pool and Filesystem
Creating a ZFS pool is quite simple, provided you have a spare disk. Find the disk using format (exit format with Ctrl+C):
sjoerd@solarisbox:~$ sudo format
Searching for disks...done

AVAILABLE DISK SELECTIONS:
       0. c3t0d0 <IBM-ServeRAID-MR10M-1.40-1.63TB>
          /pci@2,0/pci1014,308@0/pci1014,379@0/sd@0,0
       1. c4t3d0 <DEFAULT cyl 8749 alt 2 hd 255 sec 63>
          /pci@1,0/pci1014,308@0/pci1014,363@0/sd@3,0
Specify disk (enter its number):
Then create the ZFS pool like this:
zpool create appl c3t0d0
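If you have two spare disks and want redundancy, you can also create a mirrored pool instead. This is a minimal sketch using the two disks from the format output above, assuming both are actually unused:
# assumes both c3t0d0 and c4t3d0 are free disks of similar size
zpool create appl mirror c3t0d0 c4t3d0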
Then create filesystems like this:
zfs create appl/appldata
zfs create appl/appldata/acp
zfs create appl/appldata/tst
zfs create appl/application
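To verify the result you can list all filesystems in the pool (standard zfs list syntax; the output depends on your system):
zfs list -r appl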
Set Properties on a ZFS Filesystem
This is a collection of properties, including quota and reservations that can be set on a filesystem:
zfs set quota=30G appl/application/tst/app1
zfs set reservation=30G appl/application/tst/app1
zfs set recordsize=128K appl/application/tst/app1
zfs set mountpoint=/appl/application/tst/app1 appl/application/tst/app1
zfs set sharenfs=off appl/application/tst/app1
zfs set checksum=on appl/application/tst/app1
zfs set compression=off appl/application/tst/app1
zfs set atime=on appl/application/tst/app1
zfs set devices=on appl/application/tst/app1
zfs set exec=on appl/application/tst/app1
zfs set setuid=on appl/application/tst/app1
zfs set readonly=off appl/application/tst/app1
zfs set snapdir=hidden appl/application/tst/app1
zfs set aclmode=groupmask appl/application/tst/app1
zfs set aclinherit=restricted appl/application/tst/app1
zfs set shareiscsi=off appl/application/tst/app1
zfs set xattr=on appl/application/tst/app1
See All Properties Of a ZFS Filesystem
Use the zfs get all command to see all properties of a ZFS filesystem:
sjoerd@solarisbox:~$ zfs get all appl
NAME  PROPERTY              VALUE                  SOURCE
appl  type                  filesystem             -
appl  creation              Wed Apr  3  9:56 2013  -
appl  used                  697G                   -
appl  available             941G                   -
appl  referenced            14.8G                  -
appl  compressratio         1.00x                  -
appl  mounted               yes                    -
appl  quota                 none                   default
appl  reservation           none                   default
appl  recordsize            128K                   local
appl  mountpoint            /appl                  local
appl  sharenfs              off                    local
appl  checksum              on                     local
appl  compression           off                    local
appl  atime                 on                     local
appl  devices               on                     local
appl  exec                  on                     local
appl  setuid                on                     local
appl  readonly              off                    local
appl  zoned                 off                    default
appl  snapdir               hidden                 local
appl  aclmode               groupmask              local
appl  aclinherit            restricted             local
appl  canmount              on                     default
appl  shareiscsi            off                    local
appl  xattr                 on                     local
appl  copies                1                      default
appl  version               4                      -
appl  utf8only              off                    -
appl  normalization         none                   -
appl  casesensitivity       sensitive              -
appl  vscan                 off                    default
appl  nbmand                off                    default
appl  sharesmb              off                    default
appl  refquota              none                   default
appl  refreservation        none                   default
appl  primarycache          all                    default
appl  secondarycache        all                    default
appl  usedbysnapshots       0                      -
appl  usedbydataset         14.8G                  -
appl  usedbychildren        682G                   -
appl  usedbyrefreservation  0                      -
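If you only need a few specific properties instead of the full list, zfs get also accepts a comma-separated list of property names, for example:
zfs get quota,reservation,compression appl/application/tst/app1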
ZFS Swap
I wanted to create my swap on a different device so I installed Solaris without swap. This is how to create swap on a different disk:
# swap -l
No swap devices configured
# format
Searching for disks...done

AVAILABLE DISK SELECTIONS:
       0. c1t0d0 <DEFAULT cyl 5740 alt 2 hd 255 sec 63>
          /pci@0,0/pci15ad,7a0@15/pci15ad,1976@0/sd@0,0
       1. c1t1d0 <DEFAULT cyl 11485 alt 2 hd 255 sec 63>
          /pci@0,0/pci15ad,7a0@15/pci15ad,1976@0/sd@1,0
       2. c1t2d0 <DEFAULT cyl 26106 alt 2 hd 255 sec 63>
          /pci@0,0/pci15ad,7a0@15/pci15ad,1976@0/sd@2,0
Specify disk (enter its number):
# zpool create swappool c1t1d0
# zpool list swappool
NAME       SIZE  ALLOC   FREE  CAP  HEALTH  ALTROOT
swappool  87.5G    97K  87.5G   0%  ONLINE  -
# zfs create -V 80G swappool/swap
# swap -a /dev/zvol/dsk/swappool/swap
# swap -l
swapfile                     dev    swaplo  blocks     free
/dev/zvol/dsk/swappool/swap  181,2  8       167772152  167772152
Note that you need to create a volume (-V), otherwise you'll get a message like this: “/swappool/swap” is not valid for swapping. It must be a block device or a regular file with the “save user text on execution” bit set.
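Also note that swap added with swap -a is only active until the next reboot. To make the ZFS swap volume permanent you can add it to /etc/vfstab; a sketch of the entry, assuming the swappool/swap volume created above:
# device to swap  device to fsck  mount point  FS type  fsck pass  mount at boot  options
/dev/zvol/dsk/swappool/swap  -  -  swap  -  no  -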
Use Scripts to Save and Restore Filesystem Overview And Settings
Use the (basic) script below to create backups of your ZFS settings. You might find these useful for disaster recovery or, combined with a backup solution, as a change-tracking tool. Three files are created: zfs_all keeps track of all settings and is mostly useful as documentation, zfs_fs provides a list of all filesystems, and zfs_changes contains all settings whose source is “local”, meaning they have been modified from the default.
#!/bin/bash
#
HOSTNAME=$(hostname -s)
BASEDIR=/root/bcp
BACKUPDIR=${BASEDIR}/${HOSTNAME}
PATH=${PATH}:/sbin:/usr/sbin
#
# Save custom zfs-properties
#
zfs get all > ${BACKUPDIR}/zfs_all.save
zfs list > ${BACKUPDIR}/zfs_fs.save
zfs get all | awk '/local/ {print $0}' > ${BACKUPDIR}/zfs_changes.save
exit 0
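To keep the backups current you can schedule the script from root's crontab. A minimal sketch, assuming the script is saved as /root/bin/zfs_settings.sh (an example path) and that the backup directory already exists; otherwise add a mkdir -p ${BACKUPDIR} near the top of the script:
# run daily at 02:00 (script path is an example)
0 2 * * * /root/bin/zfs_settings.sh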
The created files can be used to restore the filesystems in case of a new disk or a new system. First create the ZFS pool where the filesystems will be created in:
zpool create appl c3t0d0
Then create the ZFS filesystems. The command below echoes the zfs create commands so they can be copied and pasted into the command line:
while read line; do echo $line| grep -v '^NAME' | grep -v 'swap' |grep -v 'appl '|awk '{ print "zfs create "$1 }'; done < ${BACKUPDIR}/zfs_fs.save
Note that 'appl ' (with a trailing space) is filtered out: this is the pool itself and it has already been created. Swap is also filtered out; it will be created later on.
Now set the reservations and quota for the filesystems:
while read line; do echo $line | awk '{ print "zfs set "$2"="$3" "$1 }'; done < ${BACKUPDIR}/zfs_changes.save
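As an illustration of what the awk statement generates: a line in zfs_changes.save such as
appl/application/tst/app1  quota  30G  local
is turned into
zfs set quota=30G appl/application/tst/app1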
Now manually create the required SWAP, after you've commented out the current ZFS swap in /etc/vfstab:
zfs create -V 20gb appl/swap
swap -a /dev/zvol/dsk/appl/swap
swap -l
Create and Manage ZFS NFS Shares
Create ZFS NFS Share
ZFS has a very easy way of creating NFS shares. I have seen some trouble with this, so this is the extended description in case you're on a new system with no NFS shares yet. The steps are:
- Create and set correct permissions on /etc/dfs/sharetab
- Set the correct sharing properties on the filesystem you want to share
- Enable the shares
Sharetab
Creating sharetab and setting the correct permissions is quite easy:
touch /etc/dfs/sharetab
chmod 444 /etc/dfs/sharetab
Set Sharing Properties
If you want to open all access to the filesystem you can simply set the sharenfs property to on:
zfs set sharenfs=on appl/home
Or you can allow specific hosts with specific permissions:
zfs set sharenfs=rw=host1:host2:host3,root=host1:host2:host3,ro=host4 appl/home
These are the settings we use:
zfs set sharenfs=rw=solarisbox01.appl.domain:solarisbox02.appl.domain:solarisbox03.appl.domain:solarisbox.appl.domain:solarisboxacc02.appl.domain:solarisboxacc03.appl.domain,root=solarisbox01.appl.domain:solarisbox02.appl.domain:solarisbox03.appl.domain:solarisbox.appl.domain:solarisboxacc02.appl.domain:solarisboxacc03.appl.domain appl/appl_tmp
zfs set sharenfs=rw=solarisbox01.appl.domain:solarisbox02.appl.domain:solarisbox03.appl.domain:solarisbox.appl.domain:solarisboxacc02.appl.domain:solarisboxacc03.appl.domain,root=solarisbox01.appl.domain:solarisbox02.appl.domain:solarisbox03.appl.domain:solarisbox.appl.domain:solarisboxacc02.appl.domain:solarisboxacc03.appl.domain appl/home
Enable The Share
Enabling all shares can be done with:
zfs share -a
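On a freshly installed system the NFS server service itself may not be running yet. You can check it and, if needed, enable it with the standard SMF commands:
svcs network/nfs/server
svcadm enable network/nfs/server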
Check the ZFS NFS Shares
You can check the ZFS NFS shares with various commands:
Show shares:
sjoerd@solarisbox02:/$ share
-  /appl/appl_tmp  sec=sys,rw=solarisbox01.appl.domain:solarisbox02.appl.domain:solarisbox03.appl.domain:solarisbox.appl.domain:solarisboxacc02.appl.domain:solarisboxacc03.appl.domain,root=solarisbox01.appl.domain:solarisbox02.appl.domain:solarisbox03.appl.domain:solarisbox.appl.domain:solarisboxacc02.appl.domain:solarisboxacc03.appl.domain  ""
-  /appl/home  sec=sys,rw=solarisbox01.appl.domain:solarisbox02.appl.domain:solarisbox03.appl.domain:solarisbox.appl.domain:solarisboxacc02.appl.domain:solarisboxacc03.appl.domain,root=solarisbox01.appl.domain:solarisbox02.appl.domain:solarisbox03.appl.domain:solarisbox.appl.domain:solarisboxacc02.appl.domain:solarisboxacc03.appl.domain  ""
Show the export list with showmount:
sjoerd@solarisbox02:/$ showmount -e
export list for solarisbox02:
/appl/appl_tmp  solarisbox01.appl.domain,solarisbox02.appl.domain,solarisbox03.appl.domain,solarisbox.appl.domain,solarisboxacc02.appl.domain,solarisboxacc03.appl.domain
/appl/home      solarisbox01.appl.domain,solarisbox02.appl.domain,solarisbox03.appl.domain,solarisbox.appl.domain,solarisboxacc02.appl.domain,solarisboxacc03.appl.domain
Show the contents of the sharetab file:
sjoerd@solarisbox02:/$ cat /etc/dfs/sharetab
/appl/appl_tmp  -  nfs  sec=sys,rw=solarisbox01.appl.domain:solarisbox02.appl.domain:solarisbox03.appl.domain:solarisbox.appl.domain:solarisboxacc02.appl.domain:solarisboxacc03.appl.domain,root=solarisbox01.appl.domain:solarisbox02.appl.domain:solarisbox03.appl.domain:solarisbox.appl.domain:solarisboxacc02.appl.domain:solarisboxacc03.appl.domain
/appl/home      -  nfs  sec=sys,rw=solarisbox01.appl.domain:solarisbox02.appl.domain:solarisbox03.appl.domain:solarisbox.appl.domain:solarisboxacc02.appl.domain:solarisboxacc03.appl.domain,root=solarisbox01.appl.domain:solarisbox02.appl.domain:solarisbox03.appl.domain:solarisbox.appl.domain:solarisboxacc02.appl.domain:solarisboxacc03.appl.domain
Show connected clients:
sjoerd@solarisbox02:/$ showmount
solarisbox.appl.domain
solarisboxacc02.appl.domain
solarisboxacc03.appl.domain
solarisbox01.appl.domain
solarisbox03.appl.domain
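From a client you can then mount the share with a regular Solaris NFS mount; a sketch, assuming /mnt as the mount point and solarisbox02 as the NFS server from the examples above:
# /mnt is an example mount point on the client
mount -F nfs solarisbox02.appl.domain:/appl/home /mnt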
ZFS Pool Troubleshooting
While working on restoring Solaris with Netbackup BMR I ran into a lot of issues. One of the issues we had was that the old ZFS pools were not correctly removed from the disk, which resulted in errors like:
- Insufficient replicas
- corrupted data
- UNAVAIL state for the pool
Destroying and recreating the pool did not help either: because the pool was not really there anymore, the destroy failed as well.
Eventually, this workaround brought the system back to a working state (and later on Symantec provided a patch):
- After the Netbackup BMR restore, boot the client
- Rename /etc/zfs/zpool.cache
mv /etc/zfs/zpool.cache /etc/zfs/_old_zpool.cache
- Reboot the client again
- Issue the command
zpool import
- This will show you the name as well as the unique id of the pool
- Issue the command again using the -f switch and the id of the pool:
zpool import -f 6059903200758121021
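After the import you can verify that the pool is healthy again with the standard pool commands (the pool name will depend on your system):
zpool status
zpool list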