Data Science Automation and AI Services
Overview
The platform comes with pre-deployed services for automating and tracking data science and (gen) AI applications:
MLRun
MLRun is Iguazio's open-source AI orchestration framework, which offers an integrative approach to managing machine-learning pipelines from early development through model development to full pipeline deployment in production. MLRun offers a convenient abstraction layer over a wide variety of technology stacks while empowering data engineers and data scientists to define features and models. MLRun also integrates seamlessly with other platform services, such as Kubeflow Pipelines, Nuclio, and V3IO Frames.
The MLRun server is provided as a default (pre-deployed) shared single-instance tenant-wide platform service (mlrun), including a graphical user interface ("the MLRun dashboard" or "the MLRun UI"), which is integrated as part of the platform dashboard.
The MLRun client API is available via the MLRun Python package (mlrun), which includes a command-line interface (mlrun).
You can easily install and update this package from the Jupyter Notebook service by using the /User/align_mlrun.sh script.
The MLRun library features a generic and simplified mechanism for helping data scientists and developers describe and run scalable ML and other data science tasks in various runtime environments while automatically tracking and recording execution code, metadata, inputs, and outputs. The capability to track and view current and historical ML experiments along with the metadata that is associated with each experiment is critical for comparing different runs, and eventually helps to determine the best model and configuration for production deployment.
MLRun is runtime and platform independent, providing a flexible and portable development experience. It allows you to develop functions for any data science task from your preferred environment, such as a local IDE or a web notebook; execute and track the execution from the code or using the MLRun CLI; and then integrate your functions into an automated workflow pipeline (such as Kubeflow Pipelines) and execute and track the same code on a larger cluster with scale-out containers or functions.
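For example, the following minimal sketch (the handler, file name, image, and parameter are illustrative, and the handler is assumed to live in a local train.py file) shows how a task can be wrapped as an MLRun function and run with automatic tracking of its parameters and results:

    import mlrun

    # contents of train.py (hypothetical handler used only for illustration):
    #
    #   def train(context, p1: int = 1):
    #       # MLRun injects a context object that records results and artifacts
    #       context.log_result("accuracy", p1 * 0.1)

    # wrap the local code as an MLRun "job" function
    fn = mlrun.code_to_function(name="trainer", filename="train.py",
                                kind="job", image="mlrun/mlrun", handler="train")

    # run it; the execution code, parameters, inputs, and results are tracked
    # in the MLRun database and can be viewed in the MLRun UI
    run = fn.run(params={"p1": 5}, local=True)   # local=True runs in the current environment
    print(run.outputs)                           # e.g. {"accuracy": 0.5}

The same function object can later be executed as a scale-out Kubernetes job (local=False) or embedded as a step in a Kubeflow Pipelines workflow without changing the code.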
For detailed MLRun information and examples, including an API reference, see the MLRun documentation, which is also available in the AI and Gen AI Services section of the platform documentation. See also the MLRun restrictions in the platform's Software Specifications and Restrictions.
MLRun demos and tutorials are pre-deployed in each user's demos directory.
Configuring the MLRun Service
Pod Priority for User Jobs
Pods (services, or jobs created by those services) can have priorities, which indicate the relative importance of one pod compared to other pods on the node. The priority is used for scheduling: a lower-priority pod can be evicted to allow scheduling of a higher-priority pod. Pod priority is relevant for all pods created by the service.
Eviction takes the pods' quality of service (QoS) into account in conjunction with their priority when determining which pods to evict. See more details in Interactions between Pod priority and quality of service.
Pod priority is specified through Priority classes, which map to a priority value. The priority values are: High, Medium, Low. The default is Medium.
Configure the default priority for a service, which is applied to the service itself and to all subsequently created user jobs, in the service's Common Parameters tab, User jobs defaults section, Priority class drop-down list.
The priority applies to the jobs created by MLRun.
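The priority can also be overridden per function from the MLRun SDK. A minimal sketch, assuming a function object created as shown earlier; the priority-class name is illustrative, because the cluster-level class names that High/Medium/Low map to are deployment specific:

    import mlrun

    fn = mlrun.code_to_function(name="trainer", filename="train.py",
                                kind="job", image="mlrun/mlrun")

    # override the service-level default priority for this function's pods;
    # "igz-workload-low" is an assumed class name -- check the priority classes
    # configured on your cluster
    fn.with_priority_class("igz-workload-low")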
Resources for User Jobs
When you create a pod in an MLRun job, the pod has default CPU and memory limits, and a default number of workers (2). When the job runs, it can consume resources up to the defined limits. The CPU and memory configurations are applied to each replica.
You can configure the default limits and the number of workers at the service level.
When creating a service, set the default limits and number of workers in the service's Common Parameters tab, User jobs defaults section.
When creating a job, you can overwrite the default CPU and memory limits, as in the sketch below.
You cannot overwrite the number of workers defined at the service level.
See more about Resource Management for Pods and Containers.
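A minimal sketch of such a per-job override from the MLRun SDK, assuming a function object created as shown earlier and illustrative resource values:

    import mlrun

    fn = mlrun.code_to_function(name="trainer", filename="train.py",
                                kind="job", image="mlrun/mlrun")

    # per-job override of the service-level default resources;
    # requests are guaranteed per replica, limits are the upper bound per replica
    fn.with_requests(mem="1G", cpu=1)
    fn.with_limits(mem="2G", cpu=2)

    run = fn.run(handler="train", params={"p1": 5})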
Service Account
You can add a custom service account to an MLRun service. The annotations associated with the account persist across service restarts and upgrades. The service account must already exist on the cluster. (If you do not specify a name, the service uses a default account.)
Function Hub
The function hub is configured, by default, to the MLRun function hub: https://mlrun.github.io/marketplace. You can change it to a custom function hub. See more details in the MLRun documentation: Custom function hub.
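As an illustration of consuming functions from the configured hub, the following sketch imports a function published in the default MLRun hub and runs it; the input key and dataset URL are illustrative only:

    import mlrun

    # pull a serverless function from the configured function hub
    describe = mlrun.import_function("hub://describe")

    # run it against a dataset (the input name and URL below are placeholders)
    run = describe.run(inputs={"table": "s3://my-bucket/iris.csv"}, local=True)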
Node Selection
You can assign jobs and functions to a specific node or a node group, to manage your resources and to differentiate between processes and their respective nodes. A typical example is a workflow that you want to run only on dedicated servers.
When specified, the service or the pods of a function can only run on nodes whose labels match the node selector entries configured for the service. You can also specify labels that were assigned to app nodes by an Iguazio IT Admin user. See Setting Labels on App Nodes.
Configure the key-value node selector pairs in the service's Custom Parameters tab.
If node selection for the service is not specified, the selection criteria defaults to the Kubernetes default behavior, and jobs run on a random node.
Node selection is relevant for all cloud services.
See more about Kubernetes nodeSelector.
You can also configure the node selection for individual MLRun jobs by going to Platform dashboard | Projects | New Job | Resources | Node selector, and adding or removing Key:Value pairs.
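Node selection can also be set per function from the MLRun SDK. A minimal sketch, assuming a function object created as shown earlier; the node label below is illustrative (use a label that your IT admin assigned to the target app nodes):

    import mlrun

    fn = mlrun.code_to_function(name="trainer", filename="train.py",
                                kind="job", image="mlrun/mlrun")

    # constrain this function's pods to nodes whose labels match these entries
    fn.with_node_selection(node_selector={"workload-type": "gpu"})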
Modify the priority for an ML function by pressing ML functions, then the action menu of the relevant function, and selecting Edit | Resources | Pods Priority drop-down list.
External System Docker Registry
By default, the MLRun service on the default tenant is configured to work with a predefined, tenant-wide Docker Registry service, which uses a pre-deployed, local, on-cluster Docker Registry. You can change the configuration of the MLRun service to work with an off-cluster Custom User Docker Registry service. See Docker Registry.
Kubeflow Pipelines
Google Kubeflow Pipelines is an open-source framework for building and deploying portable, scalable ML workflows based on Docker containers. For detailed information, see the Kubeflow Pipelines documentation.
Kubeflow Pipelines is provided as a default (pre-deployed) shared single-instance tenant-wide platform service (pipelines), which can be used to create and run ML pipeline experiments.
The pipeline artifacts are stored in a platform data container.
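A minimal sketch of a pipeline definition that runs an MLRun project function as a Kubeflow Pipelines step; the function name, parameter, and workflow file name are illustrative, and an existing MLRun project is assumed:

    # workflow.py -- illustrative pipeline definition
    from kfp import dsl
    import mlrun

    @dsl.pipeline(name="example-pipeline")
    def kfpipeline():
        # run the project's "trainer" function as a pipeline step
        mlrun.run_function("trainer", params={"p1": 5})

    # from a notebook, run the workflow on the pipelines service, for example:
    #   project.run(workflow_path="workflow.py", watch=True)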