
#MLOPSLIVE WEBINAR SERIES

Session #20

How to Easily Deploy Your Hugging Face Model to Production at Scale


It seems like almost everyone uses Hugging Face to simplify and reuse advanced models and to work collectively as a community.

But how do you deploy these models into real business environments, along with the required application logic? How do you serve them continuously, at scale? How do you manage their lifecycle in production (deploy, monitor, retrain)?

Oh, there’s a tool for that 😉

MLRun is an open-source MLOps orchestration framework that enables you to automate the deployment and management of your Hugging Face models in production.

Join us for this technical session and learn how to:

  1. Use GitHub Codespaces with MLRun to quickly develop, test, and deploy Hugging Face models with zero configuration.
  2. Build an application pipeline that incorporates your Hugging Face model, leveraging the MLRun serving graph and Gradio as a front-end.
  3. Fine-tune and retrain your model using GPUs with MLRun, then push the model back to Hugging Face, to both the Model Registry and Hugging Face Spaces.
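To give a feel for the application pipeline in point 2, here is a rough, plain-Python sketch of its shape: a request flows through preprocess, model, and postprocess steps, each feeding the next. In MLRun these would be steps in a serving graph (built with `set_topology("flow")` and chained with `.to(...)`); the class names, stub model, and keyword lists below are illustrative assumptions, not the webinar's actual code.

```python
class Preprocess:
    """Normalize the raw request into model-ready text."""
    def do(self, event: dict) -> dict:
        event["text"] = event["text"].strip().lower()
        return event


class SentimentModel:
    """Stub standing in for a Hugging Face pipeline (e.g. sentiment-analysis)."""
    POSITIVE_WORDS = {"great", "good", "love", "excellent"}

    def do(self, event: dict) -> dict:
        words = set(event["text"].split())
        event["label"] = "POSITIVE" if words & self.POSITIVE_WORDS else "NEGATIVE"
        return event


class Postprocess:
    """Shape the final response payload."""
    def do(self, event: dict) -> dict:
        return {"prediction": event["label"]}


def run_pipeline(event: dict) -> dict:
    # Equivalent in spirit to a serving-graph flow: each step's output
    # becomes the next step's input.
    for step in (Preprocess(), SentimentModel(), Postprocess()):
        event = step.do(event)
    return event


print(run_pipeline({"text": "  I LOVE this product  "}))  # → {'prediction': 'POSITIVE'}
```

In a real MLRun deployment, the stub model step would wrap an actual Hugging Face model, and a Gradio app would sit in front of the deployed endpoint as the UI.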

We’ll leave plenty of time for you to ask us your questions live in the interactive Q&A at the end.

Can’t join the live session? Register to receive the recording and watch at your convenience.