This demo showcases how to use LLMs to turn call-center audio recordings of conversations between customers and agents into structured, valuable data in a single workflow, all orchestrated by MLRun.
MLRun automates the entire workflow, auto-scales resources as needed, and automatically logs and parses the values passed between the workflow steps.
By the end of this demo you will see the power of leveraging LLMs for feature extraction, and how easy it is to do with MLRun!
We will use:
- OpenAI’s Whisper – to transcribe the audio calls into text.
- Flair and Microsoft’s Presidio – to recognize and filter out personally identifiable information (PII).
- Hugging Face – as the main machine-learning framework, providing the model and tokenizer for feature extraction. The demo uses tiiuae/falcon-40b-instruct as the LLM to answer questions about the calls.
- MLRun – as the orchestrator to operationalize the workflow.
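The steps above can be sketched as a plain-Python pipeline. The snippet below is a toy illustration only: the function names, the hard-coded transcript, and the regex-based PII mask are stand-ins for Whisper, Presidio/Flair, and the LLM, not the demo's actual code.

```python
import re

def transcribe(audio_path: str) -> str:
    """Stub for speech-to-text; in the demo, Whisper returns the call transcript."""
    return "Hi, this is John Doe, my phone number is 555-123-4567."

def mask_pii(text: str) -> str:
    """Toy PII filter: a phone-number regex stands in for Presidio/Flair."""
    return re.sub(r"\b\d{3}-\d{3}-\d{4}\b", "<PHONE>", text)

def extract_features(transcript: str) -> dict:
    """Stub for the LLM step: the demo asks falcon-40b-instruct questions
    about the call and collects the answers as features."""
    return {"transcript": transcript, "language": "en"}

# Chain the steps: each stage's output feeds the next, which is the
# value-passing that MLRun handles automatically between workflow steps.
features = extract_features(mask_pii(transcribe("call_001.wav")))
print(features["transcript"])
```

In the demo itself, each of these stages runs as a separate MLRun function, so the chaining, logging, and scaling shown manually here are handled by the orchestrator.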
The entire demo is covered in a single notebook.
Most of the functions are imported from MLRun’s function hub, which offers a wide range of functions for a variety of use cases. All of the Python source code is under /src, and the notebook links to the hub functions it uses.