Deploying the Platform Nodes (PVE)

Overview

To install (deploy) an instance of the platform on a Proxmox VE (PVE) cluster, you need to deploy virtual machines (VMs) that will serve as the platform's data and application nodes. The nodes can reside on the same PVE hypervisor host machine ("PVE host") or on different PVE hosts, as long as every node VM meets the required hardware specifications (see the prerequisites in this guide and the On-Prem Deployment Specifications). This guide outlines how to deploy platform VMs from VMA GZ files (image archives).

Prerequisites

Before you begin, ensure that you have the following:

  1. A PVE platform virtualization package with VMA GZ files for each of the platform nodes, received from Iguazio.

  2. Administrative access to a platform PVE cluster with the required networks configuration (see Configuring Virtual Networking (PVE)).

  3. PVE-host data stores with a minimum of 400 GB available storage for each of the platform nodes (to be used for running the nodes' VM boot-disk images).

  4. Sufficient dedicated physical resources on the PVE hosts to allow running the platform's node VMs without over-provisioning.

  5. The following BIOS settings are configured on the PVE hosts:

    • Hyper Threading — disabled
    • Advanced | CPU Configuration | Intel Virtualization Technology — enabled
    • Chipset Configuration | North Bridge | IIO Configuration | Intel VT for Directed I/O (VT-d) — enabled

Renaming the Backup Files (PVE 6.2-1–6.2-9)

Note
This step is required only for PVE versions 6.2-1–6.2-9. If you're using PVE version 6.2-10 or newer, skip to the next step.

PVE version 6.2-1 introduced a strict restriction on the names of VM-backup (virtualization) image files, requiring the image-creation date to be included in the file name. This restriction was removed in PVE version 6.2-10. The default names of the PVE virtualization files provided by Iguazio don't include the creation date. Therefore, if you're using PVE version 6.2-1–6.2-9, run the following commands before deploying the VMs to rename the backup image files and add image-creation dates:

mv vzdump-qemu-data-node-8.cores-122G.ram-400G.ssd.vma.gz vzdump-qemu-108-2020_05_20-10_00_00.vma.gz
mv vzdump-qemu-data-node-16.cores-244G.ram-400G.ssd.vma.gz vzdump-qemu-116-2020_05_20-10_00_00.vma.gz
mv vzdump-qemu-app-node-8.cores-61G.ram-400G.ssd.vma.gz vzdump-qemu-208-2020_05_20-10_00_00.vma.gz
mv vzdump-qemu-app-node-16.cores-122G.ram-400G.ssd.vma.gz vzdump-qemu-216-2020_05_20-10_00_00.vma.gz

Copying the Backup Files to the PVE Hosts

Copy the image files from the provided PVE platform virtualization package to the /var/lib/vz/dump/ directory on all PVE hosts in the platform's PVE cluster. You can use the scp command to copy the files.
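As a convenience, the copy can be scripted across all hosts. The following sketch prints the scp command for each host as a dry run (the leading echo lets you review the commands before executing; drop it to run them). The host names are hypothetical placeholders; replace them with your PVE hosts.

```shell
# copy_images prints the scp command that copies all VMA GZ images to the
# PVE dump directory on each given host (dry run); remove the echo to execute.
copy_images() {
    for host in "$@"; do
        # /var/lib/vz/dump/ is where qmrestore looks for backup files by default.
        echo scp -p vzdump-qemu-*.vma.gz "root@${host}:/var/lib/vz/dump/"
    done
}

# Hypothetical host names; replace with the hosts in your PVE cluster.
copy_images pve-host-1 pve-host-2
```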

Deploying the VMs from the Backup Files

To import and deploy the platform's node VMs from the provided VMA GZ image files, execute the following procedure for each of the PVE hosts (hypervisors) in the platform's PVE cluster.

VM IDs
As part of each VM deployment, you assign a unique cluster-wide numeric ID to the VM. It's recommended that you begin with ID 101 and increment the ID for each deployed platform node VM, regardless of the node type and of whether the cluster has a single PVE host or multiple hosts. For example, for a single data node and a single application node, assign ID 101 to the data node and ID 102 to the application node.
  1. Open a command-line interface with a connection to the PVE host — either by selecting the PVE host in the PVE GUI and then selecting Shell, or by establishing an SSH connection to the PVE host. The commands in the next steps should be run from this command line.

  2. Run the following command:

    cd /var/lib/vz/dump/
    
  3. Deploy the platform's data and application nodes by repeating the following command for each node; it's recommended that you first deploy all the data-node VMs and then all the application-node VMs:

    qmrestore <node VM image file> <VM ID> --unique
    

    For example, the following command deploys a data-node VM with ID 101:

    qmrestore vzdump-qemu-data-node-8.cores-122G.ram-400G.ssd.vma.gz 101 --unique
    
    Note
    • 101 is the VM's cluster-wide unique numeric ID. As explained, it's recommended to use sequential IDs starting with 101 — see the VM IDs note.
    • The --unique option generates new unique random MAC addresses for the VM upon deployment (image import).
    • By default, the VM is placed on the default "local-lvm" data store, which is fine in most cases. If you want to place the VM on another data store, use the --storage option to specify the name of the desired data store — --storage <data-store name>.
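The per-node deployment commands above can be wrapped in a small helper so the image file, VM ID, and optional target data store stay consistent across nodes. The sketch below prints each qmrestore command as a dry run (remove the echo to actually deploy); the image file names match the provided package, while the data-store name is a hypothetical example.

```shell
# deploy_node prints the qmrestore command for a given image file and VM ID
# (dry run); an optional third argument selects a non-default data store.
# Remove the echo to execute the deployment for real.
deploy_node() {
    if [ -n "$3" ]; then
        echo qmrestore "$1" "$2" --unique --storage "$3"
    else
        echo qmrestore "$1" "$2" --unique
    fi
}

# Deploy the data node first, then the application node, with sequential IDs.
deploy_node vzdump-qemu-data-node-8.cores-122G.ram-400G.ssd.vma.gz 101
deploy_node vzdump-qemu-app-node-8.cores-61G.ram-400G.ssd.vma.gz 102
```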

Renaming the Deployed VMs (Optional)

In clusters with more than one node VM of each type (data and application) — as is the case for the platform's Operational Cluster configuration — it's recommended that you also rename each deployed node VM to a unique name. For example, for a cluster with three data nodes and three application nodes, you could name the data nodes "data-node-1", "data-node-2", and "data-node-3", and the application nodes "app-node-1", "app-node-2", and "app-node-3". You can rename the VMs from the PVE GUI by editing the VM name under <VM> | Options | Name.
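The rename can also be done from the PVE host's command line with the qm tool. The sketch below prints the qm set command for each VM as a dry run (remove the echo to apply the change); the VM IDs and names follow the examples above.

```shell
# rename_vm prints the qm command that sets a VM's name (dry run);
# remove the echo to apply the rename on the PVE host.
rename_vm() {
    echo qm set "$1" --name "$2"
}

# Rename the deployed node VMs, matching the naming scheme above.
rename_vm 101 data-node-1
rename_vm 102 app-node-1
```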

See Also