Iguazio empowers NVIDIA customers to maximize GPU resource efficiency and accelerate their AI workflows
Iguazio helps NVIDIA customers maximize the value of their GPU investments by making more efficient use of GPU resources through GPU sharing, cutting costs, reducing infrastructure complexity, and accelerating the time to impact of AI projects.
Iguazio provides an end-to-end MLOps environment for both training and inference. On the training side, users can easily run experiments or deploy models with GPU resources attached, with full control over resource allocation. Through a simple interface, they can assign GPUs to the various engines used for training (such as Spark or Horovod) or to a Jupyter notebook. One feature our customers particularly love is scale to zero: when a Jupyter notebook with assigned GPUs sits idle for a certain amount of time, its GPU resources are automatically freed.
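Under the hood, assigning GPUs to a notebook or training job on a Kubernetes-based platform translates into a standard Kubernetes resource request. The sketch below (the helper function is illustrative, not part of the Iguazio API) shows how such a request is typically expressed; `nvidia.com/gpu` is the standard resource name exposed by NVIDIA's Kubernetes device plugin:

```python
# Illustrative sketch: how a GPU assignment for a training pod maps to
# a Kubernetes resource spec. The gpu_pod_resources helper is a
# hypothetical name used here for demonstration only.
def gpu_pod_resources(num_gpus: int, cpu: str = "4", memory: str = "16Gi") -> dict:
    """Build the 'resources' section of a pod spec that requests GPUs."""
    return {
        "limits": {
            # GPUs are requested via the device-plugin resource name,
            # and must appear under limits.
            "nvidia.com/gpu": str(num_gpus),
            "cpu": cpu,
            "memory": memory,
        },
    }

spec = gpu_pod_resources(2)
print(spec["limits"]["nvidia.com/gpu"])  # prints "2"
```

When the notebook is released (or scaled to zero), the pod's GPU limit is removed and the device plugin returns those GPUs to the schedulable pool for other workloads.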
On the serving side, the platform’s serverless function framework (Nuclio) is the only serverless framework with the option to run functions with GPUs as a resource. Nuclio scales up and down, allocating and releasing GPU resources (including scaling to zero) based on the actual workload.
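As a rough illustration of what this looks like in practice, here is a minimal sketch of a Nuclio function spec that requests a GPU per replica and permits scale to zero. Field names follow Nuclio's function configuration, but treat the exact values as assumptions for illustration (scale to zero also requires the feature to be enabled at the platform level):

```yaml
# Sketch of a Nuclio function config with GPU resources and scale to zero.
apiVersion: nuclio.io/v1beta1
kind: NuclioFunction
metadata:
  name: gpu-inference        # hypothetical function name
spec:
  handler: main:handler      # hypothetical handler
  runtime: python:3.9
  minReplicas: 0             # allow scale to zero when idle
  maxReplicas: 4             # scale up under load
  resources:
    limits:
      nvidia.com/gpu: 1      # one GPU per replica, released on scale-down
```

Because each replica holds a whole GPU only while it exists, scaling replicas down to zero returns those GPUs to the cluster instead of leaving them pinned to an idle endpoint.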
The platform also comes with out-of-the-box GPU monitoring reports at both the cluster and the application level, simplifying application troubleshooting. The Iguazio platform helps enterprises across verticals monitor their entire stack from low-level GPU usage, through ML jobs and data, to end-to-end pipelines—simplifying the operations of GPU-based systems, accelerating workflows and cutting costs.
Iguazio was one of the first NVIDIA partners to join the NVIDIA DGX-Ready Software program, which delivers proven enterprise-grade solutions that increase data science productivity, accelerate AI workflows, and improve accessibility and utilization of AI infrastructure using NVIDIA DGX systems, including the new NVIDIA DGX A100.