How to leverage multiple MLOps tools to streamline model serving for complex real-time use cases
Distributed ingestion is a great way to increase scalability for ML use cases with large datasets. But like any ML component, integrating and maintaining another tool introduces engineering complexity. Here's how to simplify it.
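As a sketch of what distributed ingestion can look like with MLRun's feature store (the feature set name, entity, and source path are illustrative, and a running MLRun service with a Spark cluster is assumed), setting `engine="spark"` makes MLRun run the ingestion as a distributed Spark job instead of an in-process pandas one:

```python
import mlrun.feature_store as fstore
from mlrun.datastore.sources import ParquetSource

# Illustrative feature set keyed on "ticker"; engine="spark" distributes the work
quotes = fstore.FeatureSet(
    "stock-quotes",
    entities=[fstore.Entity("ticker")],
    engine="spark",
)

# Hypothetical source path; RunConfig(local=False) submits the ingestion
# to the cluster rather than running it on the local machine
fstore.ingest(
    quotes,
    ParquetSource("quotes", path="v3io:///projects/demo/quotes.parquet"),
    run_config=fstore.RunConfig(local=False),
)
```

The same feature-set definition works with the default engine for small local tests, so the distributed path only kicks in where the data size demands it.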
AI/ML projects can run up big bills on compute. With Spark Operator, you can take advantage of spot instances and dynamic executor allocation, which can deliver big savings. Here's how to set it up simply in MLRun.
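As a hedged sketch of the relevant knobs (the function name and script are illustrative, and a Kubernetes cluster with the Spark Operator and MLRun installed is assumed):

```python
import mlrun

# Illustrative Spark job; kind="spark" uses the Spark Operator runtime
sj = mlrun.new_function(name="spark-etl", kind="spark", command="etl.py")

# Resource requests for driver and executors
sj.with_driver_requests(cpu="1", mem="2G")
sj.with_executor_requests(cpu="1", mem="2G")

# Dynamic executor allocation: scale executor count with the workload
sj.with_dynamic_allocation(min_executors=2, max_executors=10)

# Allow pods to be scheduled on cheaper spot/preemptible nodes
sj.with_preemption_mode("allow")

sj.run()
```

Dynamic allocation keeps you from paying for idle executors between stages, while the preemption mode lets the cluster place the pods on spot capacity when it is available.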
In this article, we walk through the steps to run a Jenkins server in Docker and deploy an MLRun project using a Jenkins pipeline.
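A minimal sketch of the Jenkins-in-Docker setup, using the official LTS image (port numbers and the volume name follow Jenkins' documented defaults):

```shell
# Run Jenkins in Docker; the named volume persists configuration across restarts
docker run -d --name jenkins \
  -p 8080:8080 -p 50000:50000 \
  -v jenkins_home:/var/jenkins_home \
  jenkins/jenkins:lts-jdk17

# Print the initial admin password needed to finish setup at http://localhost:8080
docker exec jenkins cat /var/jenkins_home/secrets/initialAdminPassword
```

From there, a pipeline job can check out the MLRun project repo and run its deployment step on each commit.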
Here's how to turn your existing model training code into an MLRun job and get the benefit of automated experiment tracking, and more.
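The core pattern is to wrap your training code in a handler function that MLRun can invoke. A minimal sketch (the toy gradient-descent "training" and all names are illustrative): MLRun injects a `context` object at runtime, and keeping it optional lets the same code run locally without MLRun installed.

```python
# Hypothetical training handler: MLRun can run any Python function as a job.
# `context` is injected by MLRun; results logged on it show up in the UI.

def train(context=None, n_samples: int = 100, lr: float = 0.1):
    """Toy training loop: fit y = 2x by gradient descent (stand-in for real code)."""
    xs = [i / n_samples for i in range(n_samples)]
    ys = [2.0 * x for x in xs]
    w = 0.0
    for _ in range(200):
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / n_samples
        w -= lr * grad
    if context is not None:
        # Experiment tracking: recorded as a result of this run
        context.log_result("w", w)
    return w

# To run as an MLRun job (requires an MLRun service; names are illustrative):
#   fn = mlrun.code_to_function("trainer", filename="train.py", kind="job",
#                               image="mlrun/mlrun", handler="train")
#   run = fn.run(params={"lr": 0.1})
```

Because the handler takes plain keyword arguments, MLRun can parameterize runs (e.g. sweep `lr`) and track each run's inputs and results without changes to the training logic.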