We've compiled the top sessions at ODSC West 2023 that we're most looking forward to, covering topics like security with LLMs, enterprise generative AI and more.
Join our webinar on Implementing a GenAI Smart Call Center Analysis App - Tuesday, February 27, 2024 - 9am PST / 12pm EST / 6pm CET
Distributed ingestion is a great way to increase scalability for ML use cases with large datasets. But like any ML component, integrating and maintaining another tool introduces engineering complexity. Here's how to simplify it.
AI/ML projects can run up big compute bills. With Spark Operator, you can take advantage of spot instances and dynamic executor allocation, which can deliver significant savings. Here's how to set it up in MLRun in just a few steps.
In this article, we'll walk you through the steps to run a Jenkins server in Docker and deploy an MLRun project using a Jenkins pipeline.
Here's how to turn your existing model training code into an MLRun job and get the benefits of experiment tracking and more.