The Jupyter Notebook Service

Overview

Jupyter is an open-source project that develops software, standards, and services for interactive computing across multiple programming languages. The platform comes preinstalled with the JupyterLab web-based user interface, including Jupyter Notebook and JupyterLab Terminals, which are available via a Jupyter Notebook user application service.

Jupyter Notebook is an open-source web application that allows users to create and share documents that contain live code, equations, visualizations, and narrative text; it is one of the leading industry tools for data exploration and training. Jupyter Notebook supports integration with the platform's key analytics services, enabling users to perform all stages of the data-science workflow, from data collection to production, from a single interface, using various APIs and tools to access the same data concurrently without moving it. Your Jupyter Notebook code can execute Spark jobs (for example, using Spark DataFrames); run SQL queries using Trino; define, deploy, and trigger Nuclio serverless functions; send web-API requests; use pandas and V3IO Frames DataFrames; use the Dask library to scale the use of pandas DataFrames; and more. You can use Jupyter terminals to execute shell commands, such as file-system and installation commands.

Jupyter Image and Installing Packages

The platform allows you to install new system packages within the Jupyter image by running the apt-get command. However, these packages are not persistently stored on V3IO and exist only within the container, meaning that they are deleted when the Jupyter service is restarted. If you need these packages to persist, add their installation commands to the startup-hook script, which runs before Jupyter is launched; you can also use this script to add Jupyter extensions or make other modifications. The Jupyter startup-hook script is located at /User/.igz/startup-hook.sh. If it exists, it is executed just before Jupyter is launched (after all other launch steps and configurations). Any failure of the script is ignored in order to avoid unnecessary Jupyter downtime.
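
For example, the following minimal sketch creates a startup-hook script from a notebook cell; the htop package is purely illustrative, and the installation commands you actually need would go in its place:

%%writefile /User/.igz/startup-hook.sh
#!/usr/bin/env bash
# Reinstall non-persistent system packages on every restart of the
# Jupyter service; "htop" is an illustrative package name.
apt-get update -qq
apt-get install -y -qq htop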

You can use Conda and pip, which are available as part of the Jupyter Notebook service, to easily install Python packages, such as Dask and various machine-learning and scientific-computation packages. Your choice of pip or Conda depends on your needs; the platform provides you with a few options.

Jupyter comes with a few prebaked conda environments:

  • base: /conda
  • jupyter: /conda/envs/jupyter
  • mlrun-base: /conda/envs/mlrun-base
  • mlrun-extended: /conda/envs/mlrun-extended

The prebaked environments are persistent for pip, but are not persistent for Conda. If you are only using pip, you can use the prebaked Conda environments. If you need to use Conda, create or clone an environment. When you create or clone an environment, it is saved to the V3IO fuse mount by default (/User/.conda/envs/<env name>) and is persistent for both pip and Conda. Since MLRun is pip-based, it's recommended to use pip whenever possible to avoid dependency conflicts.
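
For example, the following notebook-cell sketch creates a new persistent environment and clones one of the prebaked ones; the environment names and Python version are illustrative:

# Create a new environment; by default it is saved under /User/.conda/envs
# and therefore persists across restarts of the Jupyter service.
!conda create -n myenv python=3.9 -y
# Alternatively, clone a prebaked environment to make it persistent.
!conda create -n my-mlrun --clone mlrun-base -y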

See full details and examples in Creating Python Virtual Environments with Conda.

Resources

The platform provides tutorial Jupyter notebooks with code examples ranging from getting-started examples to full end-to-end demo applications, including detailed documentation. Start out by reading the introductory welcome.ipynb notebook (also available as a Markdown README.md file), which is similar to the introduction on the documentation site. Then, proceed to the Quick start tutorial.

Configuring the Service

Pod Priority

Pods (services, or jobs created by those services) can have priorities, which indicate the relative importance of one pod compared to the other pods on the node. The priority is used for scheduling: a lower-priority pod can be evicted to allow scheduling of a higher-priority pod. Pod priority is relevant for all pods created by the service.
Eviction decisions take these priority values into account in conjunction with the pods' quality of service. See more details in Interactions between Pod priority and quality of service.

Pod priority is specified through Priority classes, which map to a priority value. The priority values are: High, Medium, Low. The default is Medium.

Configure the default priority for a service, which is applied to the service itself and to all subsequently created user jobs, in the service's Common Parameters tab, under the User jobs defaults section, from the Priority class drop-down list.

Jupyter Flavors

You can set the custom Flavor parameter of the Jupyter Notebook service to one of the following flavors to install a matching Jupyter Docker image:

Jupyter Full Stack
A full version of Jupyter for execution over central processing units (CPUs).
Jupyter Full Stack with GPU
A full version of Jupyter for execution over graphics processing units (GPUs). This flavor is available only in environments with GPUs and is sometimes referred to in the documentation as the Jupyter "GPU flavor". For more information about the platform's GPU support, see Running Applications over GPUs.

This parameter is in the Custom Parameters tab of the service.

Jupyter Service and GPU Resources

In environments with GPUs, you can use the common Resources | GPU | Limit parameter of the Jupyter Notebook service to guarantee the configured number of GPUs for use by each service replica.

A Jupyter service that uses GPUs should be configured with the scale-to-zero option to automatically free up resources, including GPUs, when the service becomes idle. To enable this option, check the Enabled check box of the common Scale to zero parameter.

When configuring your Jupyter Notebook service, take the following into account: while the Jupyter Notebook service is enabled and not scaled to zero, it monopolizes the configured number of GPUs even when the GPUs aren't in use.

  • RAPIDS applications use the GPUs that were allocated for the Jupyter Notebook service from which the code is executed.
  • Horovod applications allocate GPUs dynamically and don't use the GPUs of the parent Jupyter Notebook service. Therefore, on systems with limited GPU resources you might need to reduce the amount of GPU resources allocated to the Jupyter Notebook service or set it to zero to successfully run the Horovod code over GPUs.
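
To see which GPUs are currently visible to your Jupyter service, you can, for example, run the following from a notebook cell (assuming a GPU flavor of the service):

# List the GPUs allocated to this Jupyter service replica.
!nvidia-smi -L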

Associate the Jupyter Service with a Trino Service

If you have multiple Trino services in the same cluster, you can associate the Jupyter service with a specific Trino service. See The Trino Service (formerly Presto).
In the Custom Parameters tab of the Jupyter service, select the service from the Trino drop-down list, or select Create new... to open the Create a new service page.
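
After the association is configured, you can query Trino from your notebooks. The following is a minimal sketch that uses the open-source trino Python client; the host, port, catalog, schema, and query are placeholders that depend on your cluster configuration:

from trino.dbapi import connect  # assumes the trino client package is installed

# All connection parameters below are illustrative placeholders.
conn = connect(host="trino-api", port=8080, user="iguazio",
               catalog="v3io", schema="default")
cur = conn.cursor()
cur.execute("SELECT 1")  # replace with a query against your own tables
print(cur.fetchall())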

Environment Variables

You can add Environment variables to a Jupyter Notebook service in the Custom Parameters tab of the service.
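
Variables that you define there are exposed to your notebooks and terminals as standard environment variables. For example, assuming a hypothetical variable named MY_VAR was configured for the service:

import os

# MY_VAR is a hypothetical variable added in the Custom Parameters tab.
print(os.environ.get("MY_VAR", "not set"))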

Persistent Volume Claims (PVCs)

You can connect existing cluster Persistent Volume Claims (PVCs) to a Jupyter Notebook service in the Custom Parameters tab of the service.

SSH

You can configure secure connectivity to the Jupyter service using SSH, which enables debugging from remote IDEs such as PyCharm and VSCode. Enable SSH and configure the port in the Custom Parameters tab. When SSH is configured, you can get the authentication key from the User SSH option in the service menu.
The SSH port must be in the range 30000–32767, and the SSH connection must be made with the user iguazio, regardless of the identity of the running user of the Jupyter service.
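
For example, assuming the service was configured with port 30022 (an illustrative value within the allowed range) and the authentication key was downloaded locally, a remote IDE or terminal would connect with a command along the lines of ssh -i <downloaded-key> -p 30022 iguazio@<cluster-hostname>.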

Node Selection

You can assign jobs and functions to a specific node or node group in order to manage your resources and to differentiate between processes and their respective nodes. A typical example is a workflow that you want to run only on dedicated servers.

When specified, the service or the pods of a function can run only on nodes whose labels match the node selector entries configured for the service. You can also specify labels that were assigned to app nodes by an Iguazio IT Admin user. See Setting Labels on App Nodes.

Configure the key-value node selector pairs in the Custom Parameters tab of the service.
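
For example, a key-value pair such as disktype: ssd (the standard Kubernetes example) restricts the service to nodes that carry that label.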

If node selection for the service is not specified, scheduling follows the default Kubernetes behavior, and jobs can run on any available node.

Node selection is relevant for all cloud services.

See more about Kubernetes nodeSelector.

Custom Jupyter Image

You can specify a custom Jupyter image, to optimize the Jupyter notebook runtime for your application needs.

  1. Store the image in an accessible Docker registry. You can also store a script in this location; it runs as part of the initialization steps.
  2. Select Custom image from the Flavor drop-down list, then specify the:
    • Docker registry
    • Image name

Python Machine-Learning and Scientific-Computation Packages

The Jupyter Notebook service pre-deploys the pandas open-source Python library for high-performance data processing using structured DataFrames ("pandas DataFrames"). The platform also pre-deploys other Python packages that utilize pandas DataFrames, such as the Dask parallel-computation library and Iguazio's V3IO Python SDK and V3IO Frames libraries.
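
For example, the following minimal sketch scales a pandas-style computation with Dask; the data is synthetic and the partition count is arbitrary:

import pandas as pd
import dask.dataframe as dd

# Build a small pandas DataFrame and wrap it in a Dask DataFrame to
# distribute the computation across partitions.
pdf = pd.DataFrame({"key": ["a", "b", "a", "b"], "value": [1, 2, 3, 4]})
ddf = dd.from_pandas(pdf, npartitions=2)
print(ddf.groupby("key")["value"].mean().compute())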

You can easily install additional Python machine-learning (ML) and scientific-computation packages, such as TensorFlow, Keras, scikit-learn, PyTorch, Matplotlib, and NumPy. The platform's architecture was designed to deploy computation to one or more CPUs or GPUs with a single Python API.
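
To install one of these packages, you can run pip directly from a notebook cell; scikit-learn below is purely illustrative:

# Install an additional Python package into the active environment.
!pip install scikit-learn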

For example, you can install the TensorFlow open-source library for numerical computation using data-flow graphs. You can use TensorFlow to train a logistic regression model for prediction or a deep-learning model, and then deploy the same model in production over the same platform instance as part of your operational pipeline. The data science and training portion can be developed using recent field data, while the development-to-production workflow is automated and time to insights is significantly reduced. All the required functionality is available on a single platform with enterprise-grade security and a fine-grained access policy, providing you with visibility into the data based on the organizational needs of each team. The following Python code sample demonstrates the simplicity of using the platform to train a TensorFlow model and evaluate the quality of the model's predictions:

# `model` is assumed to be a pre-configured tf.estimator.Estimator, and
# `input_fn`, train_data, test_data, num_epochs, and batch_size are
# assumed to be defined earlier in the notebook.
model.train(
    input_fn=lambda: input_fn(train_data, num_epochs, True, batch_size))
# Evaluate the trained model on the test data and report the metrics.
results = model.evaluate(input_fn=lambda: input_fn(
    test_data, 1, False, batch_size))
for key in sorted(results):
    print('%s: %s' % (key, results[key]))

The image-classification-with-distributed-training demo demonstrates how to build an image recognition and classification ML model and perform distributed model training by using Horovod, Keras, TensorFlow, and Nuclio.

See Also