On-Premises Hardware Specifications

Overview

This document lists the hardware specifications for on-premises (“on-prem”) deployment of version 2.10.0 of the Iguazio Data Science Platform (“the platform”).

Capacity Calculations
All capacity calculations in the hardware specifications are performed using the base-10 (decimal) number system. For example, 1 TB = 1,000,000,000,000 bytes.
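A practical consequence of decimal units is that operating systems reporting capacity in binary (IEC) units display a smaller number for the same disk. A minimal sketch of the conversion:

```python
# Decimal (SI) units, as used throughout this document
TB = 1000 ** 4   # 1 TB = 1,000,000,000,000 bytes

# Binary (IEC) units, which some operating systems report instead
TiB = 1024 ** 4  # 1 TiB = 1,099,511,627,776 bytes

disk_bytes = 1 * TB        # a "1 TB" data disk per this spec
as_tib = disk_bytes / TiB  # the same disk expressed in TiB

print(f"1 TB = {disk_bytes:,} bytes = {as_tib:.3f} TiB")  # ~0.909 TiB
```

So a 1 TB data disk appears as roughly 0.909 TiB in tools that use binary units; the capacity is the same, only the unit differs.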

Hardware Configurations

The platform is available in two configurations, which differ in a variety of aspects, including the performance capacity, footprint, storage size, and scale capabilities:

Development Kit
A single data-node and single application-node cluster implementation. This configuration is designed mainly for evaluation trials; it doesn't provide high availability (HA) and isn't intended for performance testing.
Operational Cluster
A scalable cluster implementation that is composed of multiple data and application nodes. This configuration was designed to achieve superior performance that enables real-time execution of analytics, machine-learning (ML), and artificial-intelligence (AI) applications in a production pipeline. The minimum requirement for HA support is three data nodes and three application nodes.

For both configurations, data nodes in on-prem deployments are always deployed on virtual machines (VMs), while application nodes can be deployed either on VMs or on local machines (bare metal).

VM Deployment Notes

When deploying on virtual machines, notify Iguazio’s support team whenever VMware Enhanced vMotion Compatibility (EVC) mode is enabled, as a low EVC level might disable required CPU features.
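One way to sanity-check this from inside a Linux guest is to compare the CPU feature flags that the VM actually sees against the features you expect. The sketch below parses /proc/cpuinfo; the flags it checks (avx, avx2) are illustrative assumptions, not an official requirement list from Iguazio:

```python
# Minimal sketch (Linux guest): report CPU feature flags that are not
# visible inside the VM, e.g. because a low EVC level masks them.
# The default flag names (avx, avx2) are illustrative examples only.

def visible_cpu_flags(cpuinfo_text):
    """Return the set of CPU feature flags from /proc/cpuinfo content."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            return set(line.split(":", 1)[1].split())
    return set()

def missing_features(cpuinfo_text, required=("avx", "avx2")):
    """Return the required flags that the (possibly masked) CPU lacks."""
    return sorted(set(required) - visible_cpu_flags(cpuinfo_text))

if __name__ == "__main__":
    with open("/proc/cpuinfo") as f:
        missing = missing_features(f.read())
    if missing:
        print("CPU features not visible (EVC masking?):", ", ".join(missing))
```

If expected flags are missing inside the guest but present on the host, the EVC baseline is a likely cause and worth raising with the support team.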

Data-Node Specifications

Data nodes in on-prem platform deployments use VMs and must fulfill the following hardware specification requirements:

VM Specifications

Memory: 128 GB (small node) / 256 GB (large node)
Cores: 8 (small node) / 16 (large node)
VM boot disk: 400 GB (minimum) image, hosted on an enterprise-grade SSD-based data store
Data disks: 2, 4, 8, 16, 20, or 24 enterprise-grade or NVMe SSD data disks (drives) of 1 TB (minimum) each, which are mapped exclusively to the data-node VM using direct attached storage (DAS), such as raw device mapping (RDM).

Hypervisor Specifications

Network connectivity:
  • Single port 1 Gb (minimum) Ethernet adapter for the management network
  • Dual port 10 Gb (minimum) Ethernet adapter for the data-path and interconnect networks
Hypervisor: VMware vSphere ESXi 6.5 or 6.7, or Proxmox VE 5.3, 5.4, or 6.2

Application-Node Specifications

In on-prem deployments you can select whether to deploy the application nodes on virtual machines (VMs) or on local machines (bare-metal), provided the same method is used on all nodes.

VM Application-Node Specifications

Application nodes in VM platform deployments must fulfill the following hardware specification requirements:

Note
For some components, the specification differentiates between small and large application nodes; large nodes provide greater processing capacity. You cannot mix small-node and large-node specifications within the same deployment.

VM Specifications

Memory: 64 GB (small node) / 128 GB (large node)
Cores: 8 (small node) / 16 (large node)
VM boot disk: 400 GB (minimum) image, hosted on an enterprise-grade SSD-based data store

Hypervisor Specifications

Network connectivity:
  • Single port 1 Gb (minimum) Ethernet adapter for the management network
  • Single port 10 Gb (minimum) Ethernet adapter for the data-path network
Hypervisor: VMware vSphere ESXi 6.5 or 6.7, or Proxmox VE 5.3, 5.4, or 6.2

Bare-Metal Application-Node Specifications

Application nodes in bare-metal platform deployments are supplied by the customer and must fulfill the following hardware specification requirements:

Cores: 8 (minimum)
Memory: 64 GB of RAM (minimum)
OS boot disk: 400 GB (minimum)
Network connectivity:
  • Single port 1 Gb (minimum) Ethernet adapter for the management network
  • Dual port 10 Gb (minimum) Ethernet adapter for the data-path and interconnect networks
