Can MLRun utilize GPUs when running ML jobs?

MLRun supports GPUs. When launching jobs or real-time functions such as serving and data processing, you can enable GPU usage and specify how many CPUs and GPUs, and how much memory, the workload requires.
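To illustrate the idea, here is a minimal sketch of the Kubernetes-style resource spec that a job launcher attaches to an ML workload. The `resource_spec` helper is hypothetical, not part of the MLRun API; it only shows the shape of the requests/limits a GPU-enabled job carries.

```python
# Hypothetical helper: build a Kubernetes-style resource spec for an ML job.
# (Illustrative only -- not an actual MLRun API.)
def resource_spec(cpu="1", mem="2G", gpus=0):
    """Return request/limit dicts like those attached to a job's pod."""
    limits = {"cpu": cpu, "memory": mem}
    if gpus:
        # GPUs are set only as limits: Kubernetes does not allow
        # over-committing GPU devices the way it does CPU shares.
        limits["nvidia.com/gpu"] = str(gpus)
    return {"requests": {"cpu": cpu, "memory": mem}, "limits": limits}

# A job asking for 4 CPUs, 8G of memory, and 2 GPUs:
spec = resource_spec(cpu="4", mem="8G", gpus=2)
```

Note that when `gpus=0`, the `nvidia.com/gpu` key is simply absent, so the same spec works unchanged on a CPU-only cluster.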

When building the underlying serverless components, MLRun knows how to automatically configure GPUs for optimal usage. For example, when you run a training job and request a GPU, MLRun configures the CUDA drivers automatically, enabling a seamless transition between running with and without a GPU using a single flag.

In this mask detection demo, you can see how to seamlessly move from distributed training on multiple GPUs to running locally without a GPU with the flip of a parameter.
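The single-flag pattern described above can be sketched as follows. Everything here is an assumption for illustration: `build_job_config`, the image names, and the replica logic are hypothetical stand-ins for what a framework like MLRun does internally when you toggle GPU usage.

```python
# Hypothetical sketch of the single-flag pattern: one entry point,
# and flipping use_gpu switches the whole runtime configuration.
# (Names and images are illustrative, not real MLRun objects.)
def build_job_config(use_gpu: bool, gpus_per_replica: int = 1):
    """Return a job config for either a GPU or a CPU run."""
    if use_gpu:
        return {
            "device": "cuda",
            "limits": {"nvidia.com/gpu": str(gpus_per_replica)},
            "image": "training-image:gpu",  # CUDA-enabled base image
        }
    # CPU path: no device limits, plain base image, runs anywhere locally.
    return {"device": "cpu", "limits": {}, "image": "training-image:cpu"}

gpu_run = build_job_config(use_gpu=True, gpus_per_replica=2)
local_run = build_job_config(use_gpu=False)
```

The training code itself never changes; only the flag does, which is what makes moving between a multi-GPU cluster and a local laptop painless.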
