Volume / Disk Management
Earlier, it was mentioned that the partition design needed some breathing room in each volume so that the file system inside can grow as needed. When the volumes were created during setup, the file systems were automatically expanded to fill each volume completely. We will now correct this by extending each logical volume so the file systems have room to grow, and later add more "drives" to the system for future expansion.
Most logical volumes will be increased in size while the file systems inside them are left alone; the one file system we do grow right now (/bak) will not be grown to the maximum amount.
This design will allow growth when needed and ensure there is time to add additional hard drives BEFORE they are needed, which keeps the administrators from being stuck between a rock and a hard place! Nobody wants to lose a job because somebody did not estimate growth correctly or the budget did not allow for large capacity when the system first rolled out.
We started off with a 30 GB drive to hold these volumes and the changes below will use 24 GB of it (6 + 1 + 1 + 5 + 3 + 3 + 1 + 4 = 24).
Here are the planned adjustments for each logical volume:
root = 5 GB to 6 GB
home = 0.5 GB to 1 GB
srv = 0.5 GB to 1 GB
usr = 4 GB to 5 GB
var = 2 GB to 3 GB
tmp = 2 GB to 3 GB
opt = 0.5 GB to 1 GB
bak = 2 GB to 4 GB
Here are the planned adjustments for each file system:
root = 5 GB (no change)
home = 0.5 GB (no change)
srv = 0.5 GB (no change)
usr = 4 GB (no change)
var = 2 GB (no change)
tmp = 2 GB (no change)
opt = 0.5 GB (no change)
bak = 2 GB to 3 GB
If we were to type df -h right now, we should see something like this:
Code: Select all
df -h
Filesystem Size Used Avail Use% Mounted on
tmpfs 98M 1.2M 96M 2% /run
/dev/mapper/LVG-root 4.9G 1.3G 3.4G 27% /
/dev/mapper/LVG-usr 3.9G 2.2G 1.5G 59% /usr
tmpfs 486M 0 486M 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
/dev/sda2 974M 125M 782M 14% /boot
/dev/mapper/LVG-var 2.0G 433M 1.4G 24% /var
/dev/mapper/LVG-home 488M 56K 452M 1% /home
/dev/mapper/LVG-srv 488M 24K 452M 1% /srv
/dev/mapper/LVG-tmp 2.0G 72K 1.8G 1% /tmp
/dev/mapper/LVG-opt 488M 24K 452M 1% /opt
/dev/mapper/LVG-bak 2.0G 24K 1.8G 1% /bak
tmpfs 98M 4.0K 98M 1% /run/user/1000
To get a list of volume paths to use in the next commands, use lvscan to show your current volumes and their sizes.
Code: Select all
sudo lvscan
ACTIVE '/dev/LVG/root' [5.00 GiB] inherit
ACTIVE '/dev/LVG/home' [512.00 MiB] inherit
ACTIVE '/dev/LVG/srv' [512.00 MiB] inherit
ACTIVE '/dev/LVG/usr' [4.00 GiB] inherit
ACTIVE '/dev/LVG/var' [2.00 GiB] inherit
ACTIVE '/dev/LVG/tmp' [2.00 GiB] inherit
ACTIVE '/dev/LVG/bak' [2.00 GiB] inherit
ACTIVE '/dev/LVG/opt' [512.00 MiB] inherit
Type the following to set the exact size of each volume by specifying the end-result size you want (-L without a plus sign means an absolute size):
Code: Select all
sudo lvextend -L6G /dev/LVG/root
sudo lvextend -L1G /dev/LVG/home
sudo lvextend -L1G /dev/LVG/srv
sudo lvextend -L5G /dev/LVG/usr
sudo lvextend -L3G /dev/LVG/var
sudo lvextend -L3G /dev/LVG/tmp
sudo lvextend -L1G /dev/LVG/opt
sudo lvextend -L4G /dev/LVG/bak
or you can grow each volume by the specified amount (the number after the plus sign):
Code: Select all
sudo lvextend -L+1G /dev/LVG/root
sudo lvextend -L+0.5G /dev/LVG/home
sudo lvextend -L+0.5G /dev/LVG/srv
sudo lvextend -L+1G /dev/LVG/usr
sudo lvextend -L+1G /dev/LVG/var
sudo lvextend -L+1G /dev/LVG/tmp
sudo lvextend -L+0.5G /dev/LVG/opt
sudo lvextend -L+2G /dev/LVG/bak
To see the new sizes, use lvscan:
Code: Select all
sudo lvscan
ACTIVE '/dev/LVG/root' [6.00 GiB] inherit
ACTIVE '/dev/LVG/home' [1.00 GiB] inherit
ACTIVE '/dev/LVG/srv' [1.00 GiB] inherit
ACTIVE '/dev/LVG/usr' [5.00 GiB] inherit
ACTIVE '/dev/LVG/var' [3.00 GiB] inherit
ACTIVE '/dev/LVG/tmp' [3.00 GiB] inherit
ACTIVE '/dev/LVG/opt' [1.00 GiB] inherit
ACTIVE '/dev/LVG/bak' [4.00 GiB] inherit
The last thing to do now is the actual growth of the file systems. We want to grow the existing file systems but only to a certain amount so we do not take up all the space in the volume. We want room for growth in the future so we have time to order and install new drives when needed.
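Per the plan above, the only file system that grows right now is /bak, from 2 GB to 3 GB. Here is a minimal sketch of that resize, assuming the volumes were formatted as ext4 (resize2fs can grow a mounted ext4 file system online):
Code: Select all
# Grow the file system inside /dev/LVG/bak to 3 GB (online resize; assumes ext4)
sudo resize2fs /dev/LVG/bak 3G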
If we need to increase space in /usr at a later point, we can issue the following command without any downtime (we will automate this in a nifty script later):
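A sketch of that command, again assuming ext4; the 4608M target (4.5 GB) is just an example and can be any value up to the volume size:
Code: Select all
# Grow /usr's file system to 4608 MiB (4.5 GB); assumes ext4
sudo resize2fs /dev/LVG/usr 4608M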
We could continue to increase this particular file system all the way until we reach the limit of the volume which is 5 GB at the moment.
If we were to type df -h right now, we should see something like this:
Code: Select all
df -h
Filesystem Size Used Avail Use% Mounted on
tmpfs 98M 1.2M 96M 2% /run
/dev/mapper/LVG-root 4.9G 1.3G 3.4G 27% /
/dev/mapper/LVG-usr 3.9G 2.2G 1.5G 59% /usr
tmpfs 486M 0 486M 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
/dev/sda2 974M 125M 782M 14% /boot
/dev/mapper/LVG-var 2.0G 434M 1.4G 24% /var
/dev/mapper/LVG-home 488M 56K 452M 1% /home
/dev/mapper/LVG-srv 488M 24K 452M 1% /srv
/dev/mapper/LVG-tmp 2.0G 72K 1.8G 1% /tmp
/dev/mapper/LVG-opt 488M 24K 452M 1% /opt
/dev/mapper/LVG-bak 2.9G 6.0M 2.8G 1% /bak
tmpfs 98M 4.0K 98M 1% /run/user/1000
Remember, df -h will tell you the size of the file systems and lvscan will tell you the size of the volumes the file systems live in.
TIP: If you want to see everything in a specific block size, such as everything in megabytes, you can use df --block-size m
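For example, to check a single mount point in megabytes:
Code: Select all
# Show just /bak, with sizes reported in megabytes
df --block-size m /bak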
Swap File Management
If you do not specify a swap partition during the initial setup, a swap file called "swap.img" will be created automatically in the root file system.
Type swapon --summary to see the status of the swap system:
Code: Select all
swapon --summary
Filename Type Size Used Priority
/swap.img file 1266684 0 -2
Specific needs vary, but the current rule of thumb for Linux servers is a swap file half the size of the system's RAM.
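To see how much RAM is installed (so you can apply that rule of thumb):
Code: Select all
# Human-readable memory totals
free -h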
Let's assume we want a 1 GB swap file and we want it on the /bak partition. Keep in mind that it is normally recommended to keep the swap file on the root file system for performance reasons. Here are the steps to set up this scenario:
- Make sure you have space on /bak by typing df -h /bak
- Run the following commands for the new swap file:
Code: Select all
sudo fallocate --length 1G /bak/swapfile1g
sudo chown root:root /bak/swapfile1g
sudo chmod 600 /bak/swapfile1g
sudo mkswap /bak/swapfile1g --label swap
sudo swapon /bak/swapfile1g
- Look at the current swap settings:
Code: Select all
swapon --summary
Filename Type Size Used Priority
/swap.img file 1266684 0 -2
/bak/swapfile1g file 1048572 0 -3
- Now disable the old swap file using these commands:
Code: Select all
sudo swapoff /swap.img
sudo rm /swap.img
- Remove the old swap file entry from /etc/fstab and add an entry for the new one.
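Assuming the installer wrote the stock swap entry (verify the exact line in your own /etc/fstab before deleting it), the change looks like this:
Remove:
Code: Select all
/swap.img none swap sw 0 0
Add:
Code: Select all
/bak/swapfile1g none swap sw 0 0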
- Since changes were made to fstab, check to make sure there are no typo errors:
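One way to check (and the likely source of the warning mentioned in the note below) is findmnt's verify mode:
Code: Select all
# Validate /etc/fstab syntax and mount sources
sudo findmnt --verify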
NOTE: A warning about a non-bind mount source for the swap file is normal and can be ignored.
- Look at the current swap settings again:
Code: Select all
swapon --summary
Filename Type Size Used Priority
/bak/swapfile1g file 1048572 0 -2
- Reboot the server and run the summary command again to verify that your /etc/fstab changes worked.
Adding More Hard Drives
For this exercise, we will add two additional hard drives. The addition of these drives is NOT necessary and this section can be skipped. The extra drives are only to demonstrate how to add additional hard drives to the system.
Adding more space in VMware or VirtualBox is easy. In this exercise, each drive will be added as a separate disk just as if we were adding a physical drive to a physical server. Keep in mind that with physical disks, it is bad practice to add a disk to an existing volume group without redundancy: when one drive fails, the entire volume group goes offline.
vSphere Steps
- Shutdown and power off the server by typing sudo shutdown -P now
- In the vSphere client, right-click the Virtual Machine and choose Edit Settings.
- On the hardware tab, click the Add button and select Hard Disk. Click Next, choose "Create a new virtual disk", click Next, set the size to 10 GB, click Next, Next, Finish.
- Add another 15 GB disk using the same steps above and click OK to close the settings and allow VMware to process the changes.
VirtualBox Steps
- Shutdown and power off the server by typing sudo shutdown -P now
- In the VirtualBox Manager, select the Virtual Machine and click Settings.
- On the Storage tab, select Controller: SATA and click the Add new storage attachment button and select Add Hard Disk. Click Create new disk, VDI, Next, Fixed size, Next, give it a Name/Location/Size of 10 GB, click Create.
- Add another 15 GB disk using the same steps above and click OK to close the settings and allow VirtualBox to process the changes.
Collect information about the newly added drives.
- Start the server and connect using PuTTY and your administrator credentials.
- Make note of how much "Free PE / Size" you have in your logical volume group when using the vgdisplay command. When done adding the new drives, the free space listed here will increase by the amount added.
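For example (LVG is the volume group name used throughout this guide; "Free PE / Size" is the line to watch):
Code: Select all
sudo vgdisplay LVG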
- Use pvdisplay which should show something similar to this:
Code: Select all
sudo pvdisplay
--- Physical volume ---
PV Name /dev/sda3
VG Name LVG
PV Size 28.00 GiB / not usable 0
Allocatable yes
PE Size 4.00 MiB
Total PE 7167
Free PE 1023
Allocated PE 6144
PV UUID f7NB0i-fOy2-MQjF-bz69-KIpE-zVpr-UTH29e
The important bits of info here are the PV Name and VG Name for our existing configuration.
- Use fdisk -l which should show something similar to this (abbreviated to just the important parts):
Code: Select all
sudo fdisk -l
Disk /dev/sda: 30 GiB, 32212254720 bytes, 62914560 sectors
Device Boot Start End Sectors Size Id Type
/dev/sda1 2048 4095 2048 1M BIOS boot
/dev/sda2 4096 2101247 2097152 1G Linux filesystem
/dev/sda3 2101248 60821503 58720256 28G Linux filesystem
Disk /dev/sdb: 10 GiB, 10737418240 bytes, 20971520 sectors
Disk /dev/sdc: 15 GiB, 16106127360 bytes, 31457280 sectors
The important bits of info here are the device paths for the new drives: /dev/sdb and /dev/sdc.
Prepare the first drive (/dev/sdb) to be used by the LVM
Type the following:
Code: Select all
sudo fdisk /dev/sdb
n (Create New Partition)
p (Primary Partition)
1 (Partition Number)
{ENTER} (use default for first sector)
{ENTER} (use default for last sector)
t (Change partition type)
8e (Set to Linux LVM)
p (Preview how the drive will look)
w (Write changes)
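If you prefer a non-interactive route, the same partition can be created with parted. This is a sketch of an equivalent alternative, not the method used above:
Code: Select all
# Script-mode parted: MBR label, one full-size primary partition, LVM flag set
sudo parted -s /dev/sdb mklabel msdos
sudo parted -s /dev/sdb mkpart primary 0% 100%
sudo parted -s /dev/sdb set 1 lvm on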
Prepare the second drive (/dev/sdc) to be used by the LVM
Do the exact same steps as above but start with sudo fdisk /dev/sdc
Create physical volumes using the new drives
If we type sudo fdisk -l, we now see /dev/sdb1 and /dev/sdc1 which are Linux LVM partitions.
Type the following to create physical volumes:
Code: Select all
sudo pvcreate /dev/sdb1
sudo pvcreate /dev/sdc1
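To confirm both physical volumes were created, pvs gives a compact listing:
Code: Select all
sudo pvs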
Now add the physical volumes to the volume group (LVG) by typing the following:
Code: Select all
sudo vgextend LVG /dev/sdb1
sudo vgextend LVG /dev/sdc1
You can run the sudo vgdisplay command to see that the "Free PE / Size" has increased.
Now that the space of both drives has been added to the volume group called LVG, we can allocate that space to grow the logical volumes and then the file systems as needed.
Shutdown and power off the server by typing sudo shutdown -P now
In the VM menu, select VM --> Snapshot --> Take Snapshot. Give it a name like STEP 3 and a description of Ubuntu Server 22.04 LTS, Storage space adjusted, Static IP: 192.168.107.2. The Snapshot Manager should now have a nice hierarchy of snapshots (STEP 1 --> STEP 2 --> STEP 3 --> You are here)