Volume / Disk Management
Earlier, it was mentioned that the partition design needed some breathing room in each volume so that the file system inside could grow as needed. When the volumes were created during setup, however, the file systems were automatically expanded to fill their entire volumes. We will now correct this by adding more "drives" to the system and then extending each logical volume to gain some breathing space.
Most logical volumes will be increased in size, and the file systems inside them will then be grown as well, but not to the maximum the volume allows.
This design allows growth when needed and ensures there is time to add additional hard drives BEFORE they are required, which keeps administrators from being stuck between a rock and a hard place! Nobody wants to lose a job because somebody did not estimate growth correctly or the budget did not allow for large capacity when the system first rolled out.
Here are the planned adjustments for each logical volume:
root = 2 GB to 3 GB
swap = 2 GB (no change)
home = 0.2 GB to 1 GB
tmp = 0.5 GB to 2 GB
usr = 2.0 GB to 4 GB
var = 2.0 GB to 3 GB
srv = 0.2 GB to 2 GB
opt = 0.2 GB to 2 GB
bak = 0.5 GB to 4 GB
Here are the planned adjustments for each file system:
root = 2.0 GB (no change)
swap = 2.0 GB (no change)
home = 0.2 GB to 0.5 GB
tmp = 0.5 GB to 1.0 GB
usr = 2.0 GB to 3.0 GB
var = 2.0 GB (no change)
srv = 0.2 GB to 1.0 GB
opt = 0.2 GB to 1.0 GB
bak = 0.5 GB to 2.0 GB
We started off with a 10 GB drive to hold these volumes, but the target volume sizes now add up to 23 GB (3 + 2 + 1 + 2 + 4 + 3 + 2 + 2 + 4). For this exercise, we will add two 12 GB drives to cover the additional storage needs. (NOTE: 12 GB is an arbitrary size, chosen simply to demonstrate how to add additional hard drives to the system.)
Here is a graphical representation of what needs to be accomplished:
If we were to type df -h right now, we should see something like this:
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/LVG-root 1.9G 429M 1.4G 24% /
udev 489M 4.0K 489M 1% /dev
tmpfs 200M 244K 199M 1% /run
none 5.0M 0 5.0M 0% /run/lock
none 498M 0 498M 0% /run/shm
/dev/sda1 179M 47M 122M 28% /boot
/dev/mapper/LVG-home 187M 9.5M 168M 6% /home
/dev/mapper/LVG-tmp 473M 23M 427M 5% /tmp
/dev/mapper/LVG-usr 1.9G 460M 1.4G 26% /usr
/dev/mapper/LVG-var 1.9G 367M 1.5G 21% /var
/dev/mapper/LVG-srv 187M 9.5M 168M 6% /srv
/dev/mapper/LVG-opt 187M 9.5M 168M 6% /opt
/dev/mapper/LVG-bak 473M 23M 427M 5% /bak
Adding more space in VMware is easy. In this exercise, each drive will be added as a separate disk just as if we were to add a physical drive to a physical server.
- Shut down and power off the server by typing shutdown -P now
- In the vSphere client, right-click the Virtual Machine and choose Edit Settings.
- On the hardware tab, click the Add button and select Hard Disk. Click Next, choose "Create a new virtual disk", click Next, set the size to 12 GB, click Next, Next, Finish.
- Add another 12 GB disk using the same steps above and click OK to close the settings and allow VMware to process the changes.
Collect information about the newly added drives.
- Start the server and connect using PuTTY.
- At the login prompt, log in with your administrator account (administrator / myadminpass) and then temporarily grant yourself super user privileges by typing sudo su
- Type pvdisplay which should show something similar to this:
--- Physical volume ---
PV Name /dev/sda5
VG Name LVG
PV Size 9.81 GiB / not usable 3.00 MiB
Allocatable yes
PE Size 4.00 MiB
Total PE 2511
Free PE 228
Allocated PE 2283
PV UUID NkfC3i-ROqv-YuLZ-63VO-RTAU-l01p-suqi4O
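If you prefer a more compact, one-line-per-volume summary, the pvs command from the same LVM2 toolset shows the same information:
pvs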
The important bits of info here are the PV Name and VG Name for our existing configuration.
- Type fdisk -l which should show something similar to this (abbreviated here to show just the important parts):
Disk /dev/sda: 10.7 GB, 10737418240 bytes
Device Boot Start End Blocks Id System
/dev/sda1 * 2048 391167 194560 83 Linux
/dev/sda2 393214 20969471 10288129 5 Extended
/dev/sda5 393216 20969471 10288128 8e Linux LVM
Disk /dev/sdb: 12.9 GB, 12884901888 bytes
Disk /dev/sdb doesn't contain a valid partition table
Disk /dev/sdc: 12.9 GB, 12884901888 bytes
Disk /dev/sdc doesn't contain a valid partition table
The important bits of info here are the device paths for the new drives: /dev/sdb and /dev/sdc.
Prepare the first drive (/dev/sdb) to be used by the LVM
Type the following:
fdisk /dev/sdb
n (Create New Partition)
p (Primary Partition)
1 (Partition Number)
{ENTER} (use default for first cylinder)
{ENTER} (use default for last cylinder)
t (Change partition type)
8e (Set to Linux LVM)
p (Preview how the drive will look)
w (Write changes)
Prepare the second drive (/dev/sdc) to be used by the LVM
Do the exact same steps as above but start with fdisk /dev/sdc
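As an aside, if you would rather skip the interactive prompts, GNU parted can produce the same layout non-interactively. This is just an optional sketch, assuming the parted package is installed (it is not part of the steps above):
parted -s /dev/sdb mklabel msdos mkpart primary 0% 100% set 1 lvm on
parted -s /dev/sdc mklabel msdos mkpart primary 0% 100% set 1 lvm on
Each line writes an MBR partition table, creates one primary partition spanning the whole disk, and flags it for LVM use.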
Create physical volumes using the new drives
If we type fdisk -l, we now see /dev/sdb1 and /dev/sdc1, which are Linux LVM partitions.
Type the following to create physical volumes:
pvcreate /dev/sdb1
pvcreate /dev/sdc1
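To double-check before moving on (an optional verification step), pvscan should now list the two new physical volumes; they will not show a volume group until we add them to LVG in the next step:
pvscan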
Now add the physical volumes to the volume group (LVG) by typing the following:
vgextend LVG /dev/sdb1
vgextend LVG /dev/sdc1
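To confirm the volume group actually grew, vgdisplay shows the new totals; with the two 12 GB drives added, VG Size should now read somewhere around 33-34 GiB:
vgdisplay LVG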
Now that the space of both drives has been added to the volume group called LVG, we can allocate that space to grow the logical volumes.
To get a list of volume paths to use in the next commands, type lvscan to show your current volumes and their sizes.
Type the following to set the exact size of the volume by specifying the end-result size you want:
lvextend -L3G /dev/LVG/root
lvextend -L1G /dev/LVG/home
lvextend -L2G /dev/LVG/tmp
lvextend -L4G /dev/LVG/usr
lvextend -L3G /dev/LVG/var
lvextend -L2G /dev/LVG/srv
lvextend -L2G /dev/LVG/opt
lvextend -L4G /dev/LVG/bak
or you can grow each volume by the specified amount (the number after the plus sign):
lvextend -L+1G /dev/LVG/root
lvextend -L+0.8G /dev/LVG/home
lvextend -L+1.5G /dev/LVG/tmp
lvextend -L+2G /dev/LVG/usr
lvextend -L+1G /dev/LVG/var
lvextend -L+1.8G /dev/LVG/srv
lvextend -L+1.8G /dev/LVG/opt
lvextend -L+3.5G /dev/LVG/bak
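Either way, it is worth checking how much unallocated space remains in the volume group afterwards, so you know the headroom left for future extensions (with the sizes above, roughly 11 GiB should still be free):
vgs LVG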
To see the new sizes, type lvscan:
ACTIVE '/dev/LVG/root' [3.00 GiB] inherit
ACTIVE '/dev/LVG/swap' [1.86 GiB] inherit
ACTIVE '/dev/LVG/home' [1.00 GiB] inherit
ACTIVE '/dev/LVG/tmp' [2.00 GiB] inherit
ACTIVE '/dev/LVG/usr' [4.00 GiB] inherit
ACTIVE '/dev/LVG/var' [3.00 GiB] inherit
ACTIVE '/dev/LVG/srv' [2.00 GiB] inherit
ACTIVE '/dev/LVG/opt' [2.00 GiB] inherit
ACTIVE '/dev/LVG/bak' [4.00 GiB] inherit
The last thing to do now is the actual growth of the file systems. We want to grow the existing file systems, but only by a certain amount so we do not take up all the space in the volume. We want room for growth in the future so we have time to order and install new drives when needed. Because we are only growing the file systems (never shrinking them), resize2fs can do this while they are mounted, with no downtime.
resize2fs /dev/LVG/home 500M
resize2fs /dev/LVG/tmp 1G
resize2fs /dev/LVG/srv 1G
resize2fs /dev/LVG/opt 1G
resize2fs /dev/LVG/usr 3G
resize2fs /dev/LVG/bak 2G
If we need to increase space in /var at a later point, we can issue the following command without any downtime (we will automate this in a nifty script later):
resize2fs /dev/LVG/var 2560M
(Note: resize2fs expects single-letter unit suffixes such as M or G, so 2560M rather than 2560MB.)
We could continue to increase this particular file system all the way until we reach the limit of the volume which is 3 GB at the moment.
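Once we do hit that 3 GB limit, the same two-step pattern applies: extend the volume first, then grow the file system inside it. The sizes below are purely illustrative:
lvextend -L+1G /dev/LVG/var
resize2fs /dev/LVG/var 3G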
If we were to type df -h right now, we should see something like this:
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/LVG-root 1.9G 429M 1.4G 24% /
udev 489M 4.0K 489M 1% /dev
tmpfs 200M 260K 199M 1% /run
none 5.0M 0 5.0M 0% /run/lock
none 498M 0 498M 0% /run/shm
/dev/sda1 179M 47M 122M 28% /boot
/dev/mapper/LVG-home 488M 9.7M 454M 3% /home
/dev/mapper/LVG-tmp 1004M 23M 931M 3% /tmp
/dev/mapper/LVG-usr 3.0G 481M 2.3G 17% /usr
/dev/mapper/LVG-var 1.8G 267M 1.5G 16% /var
/dev/mapper/LVG-srv 989M 2.8M 940M 1% /srv
/dev/mapper/LVG-opt 996M 2.8M 940M 1% /opt
/dev/mapper/LVG-bak 2.0G 3.0M 1.9G 1% /bak
Remember, df -h will tell you the size of the file systems and lvscan will tell you the size of the volumes in which the file systems live.
TIP: If you want to see everything in a specific block size, such as everything showing up in megabytes, you can use df --block-size=M
Shut down and power off the server by typing shutdown -P now
In the VM menu, select VM --> Snapshot --> Take Snapshot. Give it a name like STEP 3 and a description of Ubuntu Server 12.04 LTS, Storage space adjusted, Static IP: 192.168.107.2. The Snapshot Manager should now show a nice hierarchy of snapshots (STEP 1 --> STEP 2 --> STEP 3 --> You are here).