How to install and configure Proxmox 6.2


Post #840 by LHammonds »

------------- WORK-IN-PROGRESS -------------

Greetings and salutations,

I am learning about Proxmox and will keep my notes on how I install and configure it in this thread, updating as I go. There are a thousand different ways / scenarios to configure Proxmox. I will try to cover a few of them, but I am limited by the hardware I have available.

These are the general goals I am looking to accomplish:
  • Setup a standalone host
  • Add additional hosts and convert standalone hosts into cluster nodes
  • Configure network cards for LAN / WAN / Isolated
  • Configure local storage to act as shared storage to hold VMs and ISO images
  • Migrate VMs between different hosts (like VMware vMotion)
  • Migrate VM storage between different locations (like VMware Storage vMotion)
  • Create Ubuntu template VM for deployment to new VMs
  • Create/Delete/Restore snapshots for VMs
  • Create backups of VMs using snapshots
  • Create users at different levels (Full-Admin, VM-Admin, Storage-Admin, Helpdesk)
Sub-projects which might spin off this setup are:
  • pfSense firewall
  • pi-hole DNS
  • Plex media server
  • OpenVPN server
Tools utilized in this process
Helpful links

The list below contains sources of information that helped me configure this system, as well as some places that might be helpful later on as this process continues.
Assumptions

This documentation makes use of some very specific information that will most likely be different for each person / location. I will note those values in this section. They are highlighted in red throughout the document as a reminder that you should plug in your own value rather than actually using my "place-holder" value.

Under no circumstance should you use the actual values I list below. They are place-holders for the real thing.

The RED variables below are used throughout this document; substitute them with whatever your company uses. Use the list below as a checklist you need to have answered before you start the install process.
  • LAN Network will be 192.168.107.0/24 (Class C, 255.255.255.0 subnet mask)
  • WAN Network will be 10.10.10.0/24 (Class C, 255.255.255.0 subnet mask)
  • Server #1 name: srv-pve1, IP address: 192.168.107.201
  • Server #2 name: srv-pve2, IP address: 192.168.107.202
  • Server #3 name: srv-pve3, IP address: 192.168.107.203
  • Admin ID: root
  • Admin Password: myadminpass
I also assume the reader knows how to use the vi editor. If not, you will need to beef up your skill set or use a different editor in its place.


Preparation

Post #841 by LHammonds »

For this project, I am taking 3 low-end run-of-the-mill workstations and turning them into a Proxmox cluster. Here are their details:
  1. Dell Optiplex 3020, Intel Core i3-4130 CPU @ 3.40GHz with 2 cores, 8GB RAM, 450 GB HD
  2. Dell Optiplex 3020, Intel Core i5-4590 CPU @ 3.30GHz with 4 cores, 8GB RAM, 450 GB HD
  3. Dell Optiplex 3020, Intel Core i3-4130 CPU @ 3.40GHz with 2 cores, 8GB RAM, 450 GB HD
Since there is only a single, slow disk in each system, RAID and ZFS are not going to be used. We will have to rely on frequent external backups and documentation for when a failure happens and we need to rebuild / restore.


Installation

Post #842 by LHammonds »


The ISO image will be burned onto a DVD-ROM disc and the machines will be installed from the disc one at a time. You can also write the image to a USB stick instead. Just make sure the BIOS boot order checks your install media device before trying to boot from the internal drive.

These steps will be repeated for each host machine but the hostname and IP will be different for each one.
  1. Insert CDROM and power on machine.
  2. Welcome - Press ENTER to accept default of "Install Proxmox VE"
  3. EULA - Click the [ I agree ] button
  4. PVE (Target Harddisk) - Click the [ Next ] button to accept the default of /dev/sda
  5. Location and Time Zone - Configure as needed and click the [ Next ] button
  6. Credentials - Set the password, email and click the [ Next ] button
  7. Network Config - Set a unique hostname such as srv-pve1, set the IP info such as 192.168.107.201 and click the [ Next ] button
  8. Summary - Click the [ Install ] button
  9. Installation Successful - Click the [ Reboot ] button and eject CDROM
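
After the reboot, you can confirm each host is reachable before moving on. A quick check, using the placeholder IP of srv-pve1 from the Assumptions section (the web interface listens on port 8006):

Code: Select all

# From your workstation, open the web interface (accept the self-signed certificate warning for now):
#   https://192.168.107.201:8006
# Or SSH in as root and check the installed version:
ssh root@192.168.107.201
pveversion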


Time Synchronization

Post #843 by LHammonds »

Time Zone

Before going forward, make sure the timezone is set correctly on each node. To show the current timezone, issue this command on each node:

Code: Select all

timedatectl status
               Local time: Thu 2020-05-14 11:16:04 CDT
           Universal time: Thu 2020-05-14 16:16:04 UTC
                 RTC time: Thu 2020-05-14 16:16:05
                Time zone: America/Chicago (CDT, -0500)
System clock synchronized: yes
              NTP service: inactive
          RTC in local TZ: no
If the time zone is not correct, you can get a list of available timezones (in this example, I am filtering the output down to just the America zones):

Code: Select all

timedatectl list-timezones | grep America
To set the timezone to "America/Chicago" you could type the following (NOTE: this change is immediate. No services need to be restarted):

Code: Select all

timedatectl set-timezone America/Chicago
Network Time Protocol (NTP)

It is important for all nodes in a cluster to have their time clocks synchronized.

Best practice is to configure at least one device/server to synchronize with a public time service; that device becomes the local time master. All other devices/servers should then synchronize to the local time master.

In this scenario, we are going to assume nothing else exists on the network, so we will configure the 1st node of the cluster to be the local time master and have all other nodes synchronize their time with it.

The NTP utilities are not installed by default. Use the web interface "shell" or SSH into each node and install NTP.

Code: Select all

apt -y install ntp ntpstat
This will install a status utility and the NTP daemon. The default configuration pulls time from a pool of public Debian NTP servers. You can use whatever external time servers you want (such as NIST.gov servers).
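
For example, if you would rather use NIST than the Debian pool, the pool lines in /etc/ntp.conf (shown in the next section) could be swapped for something like this sketch (time.nist.gov is NIST's round-robin address):

Code: Select all

# Example only - replaces the Debian pool lines in /etc/ntp.conf
server time.nist.gov iburst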

Local Time Master
  1. For srv-pve1, use "shell" web interface or SSH into the console.
  2. Edit the NTP configuration file:

    Code: Select all

    vi /etc/ntp.conf
  3. The default list of NTP servers looks like this:

    Code: Select all

    pool 0.debian.pool.ntp.org iburst
    pool 1.debian.pool.ntp.org iburst
    pool 2.debian.pool.ntp.org iburst
    pool 3.debian.pool.ntp.org iburst
    
  4. If you decide to make any changes to the external pool of NTP servers, save the file and restart the NTP daemon:

    Code: Select all

    systemctl restart ntp
Local Time Slaves
  1. For all other nodes, use "shell" web interface or SSH into the console.
  2. Edit the NTP configuration file:

    Code: Select all

    vi /etc/ntp.conf
  3. Find and comment out or delete the following lines:

    Code: Select all

    pool 0.debian.pool.ntp.org iburst
    pool 1.debian.pool.ntp.org iburst
    pool 2.debian.pool.ntp.org iburst
    pool 3.debian.pool.ntp.org iburst
  4. Add the following line:

    Code: Select all

    server 192.168.107.201 iburst prefer
    
  5. Find and comment out or delete the following lines:

    Code: Select all

    restrict -4 default kod notrap nomodify nopeer noquery limited
    restrict -6 default kod notrap nomodify nopeer noquery limited
    
  6. Add the following line because we do not want this machine to act like a time master:

    Code: Select all

    restrict default ignore
  7. Save the file and restart the NTP daemon:

    Code: Select all

    systemctl restart ntp
Check Status

Run this command on each of the nodes:

Code: Select all

ntpq -pn
The time master should look something like this:

Code: Select all

     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
 0.debian.pool.n .POOL.          16 p    -   64    0    0.000    0.000   0.000
 1.debian.pool.n .POOL.          16 p    -   64    0    0.000    0.000   0.000
 2.debian.pool.n .POOL.          16 p    -   64    0    0.000    0.000   0.000
 3.debian.pool.n .POOL.          16 p    -   64    0    0.000    0.000   0.000
+5.196.192.58    193.190.230.37   2 u  222  128  372  143.401  -53.148  55.819
+45.32.4.67      200.98.196.212   2 u  128  128  177   68.430  -52.665  43.185
+185.90.160.100  237.17.204.95    2 u   27  128  275  165.976  -39.345  25.182
+45.76.244.193   216.239.35.8     2 u  497  128  134   81.653  -48.892  45.855
+72.87.88.202    129.6.15.29      2 u  198  128  372   89.442  -43.429  40.731
+87.253.148.92   193.190.230.65   2 u  289  128  374  143.307  -53.842  52.550
#78.46.60.40     124.216.164.14   2 u    1  128  315  180.961  -33.639  62.943
#217.147.208.1   194.242.34.149   2 u  154  128  176  166.253  -57.242  48.163
+37.59.63.125    193.67.79.202    2 u   53  128  377  144.802  -50.648  48.429
+95.216.136.148  195.210.189.106  2 u  180  128  376  183.623  -44.188   8.568
#178.172.163.14  192.168.125.80   2 u   94  128  375  206.377  -37.268  52.865
*85.199.214.98   .GPS.            1 u  103   64  376  132.503  -52.036  53.075
+194.112.182.172 237.17.204.95    2 u  219  128  136  173.273  -44.952  55.640
You can see a status symbol in the far-left column:
  • * --> Asterisk marks the preferred server that was selected to synchronize time.
  • + --> Plus sign marks time servers that are candidates for becoming the preferred server.
  • - --> Minus sign marks time servers that are ignored because they are not currently an optimal candidate.
All other nodes should look something like this:

Code: Select all

     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
*192.168.107.201 85.199.214.98    2 u   22   64    1    0.193   62.162   0.000
You can also get a summary report with this command:

Code: Select all

ntpstat
Status result on srv-pve1:

Code: Select all

synchronised to NTP server (85.199.214.98) at stratum 2
   time correct to within 136 ms
   polling server every 128 s
Status result on all other nodes:

Code: Select all

synchronised to NTP server (192.168.107.201) at stratum 3
   time correct to within 141 ms
   polling server every 64 s


Management Tasks

Post #845 by LHammonds »

Subscription Popup

Code: Select all

Error message - "You do not have a valid subscription for this server"
Every time you log in, you will be greeted with that message. You can safely click OK and continue on. If you purchase a support subscription, that message goes away, but the system will function just the same with or without one.

If you have no plans to buy a subscription (such as just doing a trial evaluation), you can edit the javascript code to prevent this message from displaying.

As of 6.2, the following command will edit the file and restart the pveproxy service so the message no longer displays (you may need to clear your browser cache or do a hard refresh before it disappears). On each host, click "Shell" which will drop you to a command prompt as root on the server console. Run the following code to disable the popup message:

Code: Select all

sed -i.bak "s/data.status !== 'Active'/false/g" /usr/share/javascript/proxmox-widget-toolkit/proxmoxlib.js && systemctl restart pveproxy.service
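The sed command above keeps a backup copy (proxmoxlib.js.bak), so the change is easy to undo. Note that an update to the proxmox-widget-toolkit package will replace the edited file, so the command may need to be re-run after upgrades. To revert manually:

Code: Select all

cp /usr/share/javascript/proxmox-widget-toolkit/proxmoxlib.js.bak /usr/share/javascript/proxmox-widget-toolkit/proxmoxlib.js && systemctl restart pveproxy.service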
Cluster Management

NOTE: Do NOT attempt to rename the cluster or a node in the cluster. If you need to rename a host, do so before joining the cluster by editing the /etc/hostname and /etc/hosts and reboot.
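
A minimal sketch of that pre-join rename (the new name srv-pve2 is just the placeholder from the Assumptions section):

Code: Select all

# Run on the host BEFORE it joins the cluster:
hostnamectl set-hostname srv-pve2     # writes the new name to /etc/hostname
vi /etc/hosts                         # change the old name to srv-pve2 on the line with this host's IP
reboot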

It is easy to create a cluster and add hosts to become nodes in the cluster. There is not an easy "remove" button to reverse the process (yet).

Create the cluster:
  1. On the 1st host (srv-pve1), select Datacenter, Cluster, [ Create Cluster ] button. Give the cluster any name you want such as Cluster1
  2. Once the cluster is created, click the [ Join Information ] button and then the [ Copy Information ] button
Join the other hosts to the cluster:
  1. Make sure the next host you are about to join is uniquely named how you want it (e.g. srv-pve2)
  2. Select the next host, click Cluster, [ Join Cluster ] button.
  3. Press CTRL+V to paste the info into the box and type the root password for the cluster and click the [ Join ] button.
  4. You might lose connectivity on the host joining the cluster. Look at the cluster to see if there is an entry in the "TASKS" section at the bottom where it says the join is OK. It should be fairly quick.
  5. You can close the browser tab for the joining host since you can now refresh the cluster tab and control that host as a cluster node from there.
  6. Repeat these steps for every host you want to add.
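If you prefer the command line, the pvecm tool can do the same thing. A sketch, using the placeholder names and IPs from the Assumptions section:

Code: Select all

# On srv-pve1 - create the cluster:
pvecm create Cluster1
# On each additional host (run on the host that is joining; it asks for the root password of the existing node):
pvecm add 192.168.107.201
# On any node - verify cluster membership:
pvecm status
pvecm nodes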
Network Management

Each host should already have one network card configured, which we will refer to as the LAN (Local Area Network). We also need one for the WAN (Wide Area Network / Internet) and an isolated / No Access network.

Repeat these steps for each node in the cluster.
  1. Select the host, click System, Network
  2. If you have one physical network card, you will have two entries. One is the name of the physical card (e.g. enp2s0) and the type is "Network Device." The other is a "Linux Bridge" called vmbr0 which is linked to the name of the physical network card (e.g. enp2s0). Edit the details of vmbr0 and set the comment to say "LAN"
  3. Create a new Linux Bridge called vmbr1 with a comment of "NoAccess" and no other setting configured.
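For reference, the result of those steps lives in /etc/network/interfaces on each node and should look roughly like the sketch below (the NIC name enp2s0 and the IP are the examples above; the gateway value is an assumption for illustration):

Code: Select all

auto lo
iface lo inet loopback

iface enp2s0 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.107.201
        netmask 255.255.255.0
        gateway 192.168.107.1
        bridge-ports enp2s0
        bridge-stp off
        bridge-fd 0
#LAN

auto vmbr1
iface vmbr1 inet manual
        bridge-ports none
        bridge-stp off
        bridge-fd 0
#NoAccess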
VM Storage Management

On every host, run these commands:

Code: Select all

apt install nfs-kernel-server
mkdir /srv/vm
Now we need to present the share via NFS.

Code: Select all

vi /etc/exports
Add the following line:

Code: Select all

/srv/vm        *(rw,sync,no_root_squash,no_subtree_check)
Restart the NFS service for the change to take effect:

Code: Select all

systemctl restart nfs-kernel-server
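You can confirm the export is active before adding it to Proxmox:

Code: Select all

exportfs -v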
In Proxmox, select Datacenter, Storage and add NFS for each host that you have:

ID=pve1-nfs, Server=192.168.107.201, Export=/srv/vm, Content=Disk Image
ID=pve2-nfs, Server=192.168.107.202, Export=/srv/vm, Content=Disk Image
ID=pve3-nfs, Server=192.168.107.203, Export=/srv/vm, Content=Disk Image
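
For reference, those entries end up in /etc/pve/storage.cfg looking roughly like this, one stanza per storage (the mount path is chosen by Proxmox):

Code: Select all

nfs: pve1-nfs
        export /srv/vm
        path /mnt/pve/pve1-nfs
        server 192.168.107.201
        content images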

ISO Storage Management

We will designate the 1st host to hold all our ISO images. This will make managing the files easier and avoid duplication and wasted space.

Code: Select all

mkdir /srv/iso
vi /etc/exports
Add this line:

Code: Select all

/srv/iso        *(rw,sync,no_root_squash,no_subtree_check)
Restart the NFS service for the change to take effect:

Code: Select all

systemctl restart nfs-kernel-server
In Proxmox, select Datacenter, Storage and add an NFS share as follows:

ID=pve1-iso, Server=192.168.107.201, Export=/srv/iso, Content=ISO Image

Use WinSCP or some other utility to transfer ISO images to pve1 in this folder: /srv/iso/template/iso/
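
Once an ISO has been copied up, you can confirm that Proxmox sees both the storage and the image (pve1-iso is the storage ID created above):

Code: Select all

pvesm status
pvesm list pve1-iso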

Full VM Migration

Migrate one virtual machine from one host to another (VM + Storage).

Partial VM Migration

Migrate just the virtual machine from one host to another but leave the storage in place.
NOTE: This requires the storage to be accessible to both hosts.

Storage Migration

Migrate only storage disks from one storage location to another.
NOTE: This requires both storage locations to be accessible to the host.
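
All three of these can also be driven from the shell with the qm tool. A sketch, using a hypothetical VM ID of 100, a hypothetical disk name of scsi0, and the node/storage names defined earlier:

Code: Select all

# Full / partial VM migration - move VM 100 to srv-pve2 while it is running:
qm migrate 100 srv-pve2 --online
# Add --with-local-disks if the VM's disks live on local (non-shared) storage:
qm migrate 100 srv-pve2 --online --with-local-disks
# Storage migration - move only the scsi0 disk of VM 100 onto the pve2-nfs storage:
qm move_disk 100 scsi0 pve2-nfs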
Remote Desktop Access

Windows servers and clients have built-in RDP that can be enabled and used.

Linux servers do not have a GUI, so you only need to connect to them via SSH.

Linux desktops can use a variety of remote access software. We will be using TigerVNC to access the desktops in the cluster remotely. X2Go was a close second choice.
Certificate Management

In this OPTIONAL step, we are going to fix the SSL warnings by creating a local certificate authority (CA) server, issuing a certificate for each node, and importing the CA onto each PC as a trusted root CA.

NOTE: This might be a completely separate tutorial.
