Linux Commands for Resizing Volumes

At times you may need to resize a volume in Linux. The basic procedure for extending a non-root volume is covered on the Volumes page for Red Cloud, under Extending an Existing Volume. The present page covers two trickier situations that require the Linux command line: extending a root volume (i.e., the volume containing the root directory (/) and the OS), and creating an exceptionally large volume with LVM.

In the following instructions, if the volume is a Red Cloud volume, it is assumed to be attached to an active instance so that you can run Linux commands on it. However, the volume cannot be the current root volume of the instance you are working from.

If the volume is the current root volume, you must:

  • shelve the instance (to flush its files back to disk);
  • take a snapshot of the volume (so it will not be deleted in the next step);
  • delete the instance (so the properties of the volume can now be changed); and
  • attach the volume to a different (or new) Linux instance.

Then you can mount the volume and resize the partition as indicated below. After that, you can launch a new instance from your resized root volume.
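
If you prefer the command line, this preparation can also be sketched with the OpenStack CLI. The following is a minimal outline, not a definitive recipe; the names my-instance, my-root-volume, my-root-snap, and helper-instance are hypothetical placeholders you would substitute:

openstack server shelve my-instance                          # flush the instance's files back to the volume
openstack volume snapshot create --force \
    --volume my-root-volume my-root-snap                     # --force allows snapshotting an attached volume
openstack server delete my-instance                          # the volume's properties can now be changed
openstack server add volume helper-instance my-root-volume   # attach it to a different Linux instance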

Root Volumes

A common use case for expanding the size of a root volume is to increase the storage for applications. Once you have installed the desired applications on the instance's new, larger root volume, you may wish to take a snapshot of the instance. In OpenStack, the snapshot will automatically be registered as an image, and you can boot new instances from this image very conveniently.
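
If you would rather script this step, a rough OpenStack CLI equivalent is shown below; the names my-app-image and my-new-instance, as well as the flavor, network, and key pair, are placeholders you would substitute:

openstack server image create --name my-app-image my-instance   # the snapshot is registered as an image
openstack server create --image my-app-image --flavor <flavor> \
    --network <network> --key-name <keypair> my-new-instance    # boot a new instance from that image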

To extend the root partition (assuming the volume is attached to /dev/vdc):

sudo fdisk /dev/vdc
# [interactive commands to delete the root partition (!) and re-create it larger:
#  p, d, p, n, p, 1, accept the defaults, p, w; see the transcript below]
sudo e2fsck -f /dev/vdc1    # the filesystem must be checked before it can be resized
sudo growpart /dev/vdc 1    # only needed if the partition was not already enlarged (see note below)
sudo resize2fs /dev/vdc1    # grow the ext4 filesystem to fill the partition

Note that running growpart is not always necessary, but it can help if resize2fs reports that the filesystem is already at its maximum size, which usually means the partition itself was never actually enlarged.
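
Either way, it is worth confirming the result before moving on. For example (assuming the volume is still /dev/vdc):

lsblk /dev/vdc    # /dev/vdc1 should now span the whole device
df -h             # if the filesystem is mounted, the larger size should be reported here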

Here is a transcript of output from an example procedure:

brandon@euca-128-84-11-149:~$ sudo fdisk /dev/vdc

Welcome to fdisk (util-linux 2.25.2).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Command (m for help): p
Disk /dev/vdc: 30 GiB, 32212254720 bytes, 62914560 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x61e75f2d

Device     Boot Start      End  Sectors Size Id Type
/dev/vdc1  *     2048 20971519 20969472  10G 83 Linux

Command (m for help): d
Selected partition 1
Partition 1 has been deleted.

Command (m for help): p
Disk /dev/vdc: 30 GiB, 32212254720 bytes, 62914560 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x61e75f2d

Command (m for help): n
Partition type
   p   primary (0 primary, 0 extended, 4 free)
   e   extended (container for logical partitions)
Select (default p): p
Partition number (1-4, default 1): 11
Value out of range.
Partition number (1-4, default 1): 1
First sector (2048-62914559, default 2048):
Last sector, +sectors or +size{K,M,G,T,P} (2048-62914559, default 62914559):

Created a new partition 1 of type 'Linux' and of size 30 GiB.

Command (m for help): p
Disk /dev/vdc: 30 GiB, 32212254720 bytes, 62914560 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x61e75f2d

Device     Boot Start      End  Sectors Size Id Type
/dev/vdc1        2048 62914559 62912512  30G 83 Linux

Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.

brandon@euca-128-84-11-149:~$ sudo resize2fs /dev/vdc1
resize2fs 1.42.12 (29-Aug-2014)
Please run 'e2fsck -f /dev/vdc1' first.

brandon@euca-128-84-11-149:~$ sudo e2fsck -f /dev/vdc1
e2fsck 1.42.12 (29-Aug-2014)
/dev/vdc1: recovering journal
Clearing orphaned inode 475259 (uid=108, gid=116, mode=0100664, size=2379)
Clearing orphaned inode 475232 (uid=108, gid=116, mode=0100664, size=2379)
Clearing orphaned inode 475249 (uid=108, gid=116, mode=0100664, size=2379)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
Free blocks count wrong (578868, counted=578802).
Fix<y>? yes
Free inodes count wrong (344343, counted=344342).
Fix<y>? yes

/dev/vdc1: ***** FILE SYSTEM WAS MODIFIED *****
/dev/vdc1: 311018/655360 files (6.2% non-contiguous), 2042382/2621184 blocks
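
In this transcript, resize2fs refused to run until e2fsck had been performed. Once the check completes, re-run the resize so the filesystem actually grows to fill the enlarged partition:

sudo resize2fs /dev/vdc1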

Logical Volume Manager (LVM)

Users can create partitions larger than the 15 TB volume limit by employing the Logical Volume Manager (LVM) within their Linux instance. See this LVM tutorial for an example procedure and useful LVM commands. Below is a condensed summary, based on that tutorial, of creating an LVM volume and the requisite LVM entities to serve as a /home partition for users' home directories; the resulting volume can be increased in size later:

fdisk -l /dev/vdb     # verify this volume looks right (e.g., correct size)
apt-get install lvm2  # if not already installed
pvcreate /dev/vdb     # initialize the volume as an LVM physical volume
pvdisplay             # check that it shows up
vgcreate myproject_myvolgrp /dev/vdb              # create a volume group containing it
lvcreate -L 1090000 -n vol01 myproject_myvolgrp   # size is in MiB by default (about 1.04 TiB here)
lvdisplay             # check that it shows up
vgdisplay             # check free space remaining in the volume group, for instance
mkfs.ext4 -m 0 /dev/myproject_myvolgrp/vol01      # create the filesystem; -m 0 reserves no blocks for root
emacs /etc/fstab      # add an entry mapping /dev/myproject_myvolgrp/vol01 to /home

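When more space is needed later, the volume group can be grown by attaching an additional Red Cloud volume and extending the logical volume onto it. The sketch below assumes the new volume appears as /dev/vdc (a hypothetical device name) and that the filesystem is the ext4 one created above:

pvcreate /dev/vdc                                       # initialize the new volume for LVM
vgextend myproject_myvolgrp /dev/vdc                    # add it to the existing volume group
lvextend -l +100%FREE /dev/myproject_myvolgrp/vol01     # grow the logical volume into the new free space
resize2fs /dev/myproject_myvolgrp/vol01                 # grow the ext4 filesystem to match
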
Should you need to detach the volumes associated with your LVM group, for instance to migrate them to a different instance, you can use the procedure illustrated below: attach all the requisite volumes to the new instance, then run 'pvscan' and finally 'lvscan' to confirm that LVM sees them:

root@ubuntu:/# pvscan
  PV /dev/vdb   VG myproject_myvolgrp   lvm2 [1.07 TiB / 35.55 GiB free]
  Total: 1 [1.07 TiB] / in use: 1 [1.07 TiB] / in no VG: 0 [0   ]
root@ubuntu:/# lvscan
  ACTIVE            '/dev/myproject_myvolgrp/vol01' [1.04 TiB] inherit

If all looks well, then you should be able to mount the volume at this point.
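
For example, to activate and mount the logical volume from the scenario above (assuming the same volume group and a /home mount point):

vgchange -ay myproject_myvolgrp             # activate the volume group if it is not already active
mount /dev/myproject_myvolgrp/vol01 /home
df -h /home                                 # confirm the expected size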