How do I add a new disk to ZFS and make it available to existing mountpoints when the current pool is the root pool?
Solution 1
Attach the second disk as a mirror of the first, wait for the resilver to complete, detach the first disk, and set the autoexpand property. Don't forget to install boot blocks on the new disk or anything like that. Example:
zpool attach rpool olddisk newdisk
...wait for the resilver; check with zpool status rpool
zpool detach rpool olddisk
zpool set autoexpand=on rpool
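Put together, the steps above can be sketched as a short script that polls the status output until the resilver finishes. Here pool_status is a simulated stand-in (an assumption) so the sketch runs without a real pool; on a live system, replace it with the real zpool status rpool call and drop the echo wrappers around the zpool commands. The disk names olddisk/newdisk are placeholders from the example, not real devices.

```shell
#!/bin/sh
# Sketch of Solution 1: attach, wait for resilver, detach, autoexpand.

pool_status() {
    # Simulated `zpool status rpool` output after the resilver has finished;
    # on a real system use: zpool status rpool
    echo "  scan: resilvered 7.33G in 0h12m with 0 errors"
}

echo "zpool attach rpool olddisk newdisk"    # start mirroring onto the new disk

# Keep checking until the status output reports the resilver as done.
until pool_status | grep -q "resilvered"; do
    sleep 30
done
echo "resilver complete"

echo "zpool detach rpool olddisk"            # remove the old, smaller disk
echo "zpool set autoexpand=on rpool"         # grow rpool to the new disk size
```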
Solution 2
You cannot "expand" the rpool by appending one disk to another (striping, RAID 0), as previously mentioned. However, as Chris S mentioned below, you can attach a larger disk as a mirror and then, once the data is synced (resilver complete), detach the smaller disk. (Oops, now I see Chris's response too.)
Here is a process to mirror the root disk... http://constantin.glez.de/blog/2011/03/how-set-zfs-root-pool-mirror-oracle-solaris-11-express
Follow that except that where they do:
prtvtoc /dev/rdsk/c7t0d0s0 | fmthard -s - /dev/rdsk/c7t1d0s0
... you will want to run format and make slice 0 larger, probably covering the whole disk:
# format /dev/rdsk/c4t1d0s0
(I will not go into great detail on the interactive format command)
# zpool attach rpool c4t0d0s0 c4t1d0s0
# zpool status rpool
WAIT UNTIL IT SAYS "resilver completed" (keep checking zpool status rpool)
MAKE SURE YOU CAN BOOT TO THE SECOND DISK
Then detach the smaller rpool mirror and reboot; make sure you can boot again.
# zpool detach rpool c4t0d0s0
PROFIT!?
REFERENCE: http://docs.oracle.com/cd/E19963-01/html/821-1448/gjtuk.html#gjtui
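About the "make sure you can boot to the second disk" step: the new mirror disk is only bootable on its own after boot blocks are installed on it. A minimal sketch, assuming the c4t1d0s0 disk name from the example above; the function only prints the appropriate command for a given architecture rather than running it, so you can see the plan before touching the disk:

```shell
#!/bin/sh
# Print the boot-block install command for the new mirror disk.
# The disk name c4t1d0s0 is carried over from the example (an assumption).
bootblock_cmd() {
    disk=/dev/rdsk/c4t1d0s0
    if [ "$1" = "sparc" ]; then
        # SPARC: install the ZFS bootblk with installboot
        echo 'installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk'" $disk"
    else
        # x86: install GRUB stage1/stage2 with installgrub
        echo "installgrub /boot/grub/stage1 /boot/grub/stage2 $disk"
    fi
}

bootblock_cmd "$(uname -p)"
```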
Previous Answer:
After creating the pool using the command he specified:
zpool create mypool c4t1d0
Create a filesystem, for example:
zfs create mypool/home
... copy the data to the new disk, (re)move the data from the rpool disk, then set the mountpoint to the proper location, such as:
zfs set mountpoint=/export/home mypool/home
That is, of course, assuming that /export/home is where all the space is being used. You may have to do this in "single user" mode, or create a user with a home directory that is not under /export/home, to complete this.
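One way to do the "copy the data" step is a snapshot plus zfs send/receive, which preserves properties and permissions. A sketch, assuming the home data lives in a dataset named rpool/export/home (that dataset name is a guess about the original layout; check your own zfs list output). The function only prints the plan, so the sketch is safe to run anywhere:

```shell
#!/bin/sh
# Print a migration plan for moving /export/home onto the new pool.
# rpool/export/home is an assumed dataset name; adjust to your `zfs list`.
migrate_plan() {
    cat <<'EOF'
zfs snapshot rpool/export/home@migrate
zfs send rpool/export/home@migrate | zfs receive mypool/home
zfs set mountpoint=none rpool/export/home
zfs set mountpoint=/export/home mypool/home
EOF
}

migrate_plan
```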
On a side note, your zfs list output looks funky, like it is missing something. rpool/ROOT shows 101GB USED, but the filesystems under it REFER to only about 12.5GB in total, and show far less USED. Do you by chance have other boot environments under rpool/ROOT that you "trimmed out" of your zfs list output? Could you destroy those boot environments, or at least their ZFS filesystems, to regain the space used in rpool/ROOT?
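If old boot environments really are holding that space, beadm can list and remove them. A sketch; the BE name oldBE is hypothetical (run beadm list to find the real names), and the commands are printed rather than executed:

```shell
#!/bin/sh
# Print the commands to inspect and destroy an old boot environment.
# "oldBE" is a hypothetical name; check `beadm list` for the real ones.
be_cleanup_plan() {
    echo "beadm list"             # shows each BE and the space it holds
    echo "beadm destroy oldBE"    # frees that BE's space under rpool/ROOT
}

be_cleanup_plan
```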
~tommy
Solution 3
Yes, I believe your only option is to create a new pool using the second disk. The only thing you can do with the rpool is mirror the disk - which won't make more space available. The rpool doesn't support striping, due to the difficulties it would pose with booting.
zpool create mypool c4t1d0
RaamEE
I'm an Electrical & Computers engineer by title and educator of science and technology in my spare time. I volunteer in the SpaceIL education team, mentor FLL teams and judge FLL research projects and dabble in additional software and hardware projects when I have the time.
Updated on September 18, 2022
Comments
-
RaamEE almost 2 years
My S11 server has the following configuration:
Disk #1 is used for rpool, which is the root pool. I want to add disk #2 to increase the space available to the already-mounted folders, but I can't add the disk to the existing rpool because it's the root pool.
Is there a way to make the new disk available for the "/" folder? Is my only option to create a new zpool and mount it under a new folder?
Thanks.
RaamEE
root@raamee:~# zpool status
  pool: rpool
 state: ONLINE
status: The pool is formatted using an older on-disk format. The pool can
        still be used, but some features are unavailable.
action: Upgrade the pool using 'zpool upgrade'. Once this is done, the
        pool will no longer be accessible on older software versions.
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          c4t0d0s0  ONLINE       0     0     0
root@raamee:~# zfs list
NAME                    USED  AVAIL  REFER  MOUNTPOINT
rpool                   130G  4.18G  4.59M  /rpool
rpool/ROOT              101G  4.18G    31K  legacy
rpool/ROOT/S11-GA       152M  4.18G  7.33G  /
rpool/ROOT/S11-GA/var  17.4M  4.18G  5.20G  /var
rpool/VARSHARE          180K  4.18G   180K  /var/share
rpool/dump             8.25G  4.43G  8.00G  -
rpool/guests             31K  4.18G    31K  /guests
rpool/scratch          2.52M  4.18G  2.52M  /scratch
rpool/swap             20.6G  4.81G  20.0G  -
root@raamee:~# format
Searching for disks...done

AVAILABLE DISK SELECTIONS:
       0. c4t0d0 <FUJITSU-MBB2147RCSUN146G-0505 cyl 17845 alt 2 hd 255 sec 63>
          /pci@7b,0/pci1022,7458@11/pci1000,3060@2/sd@0,0
          /dev/chassis/SYS/HD0/disk
       1. c4t1d0 <FUJITSU-MBB2147RCSUN146G-0505-136.73GB>
          /pci@7b,0/pci1022,7458@11/pci1000,3060@2/sd@1,0
          /dev/chassis/SYS/HD1/disk
-
Philip over 11 years: Sorry to be pedantic, but you can create a mirror of the rpool with a larger disk, then remove the smaller disk, and grow the rpool - this effectively increases the size of the rpool, which your first sentence says is not possible.
-
TommyTheKid over 11 years: Chris S: You are correct, I hadn't thought of that. You should post the "correct" answer ;)