This is not another high-level webinar series about AI. These sessions go beyond theory, with industry leaders sharing practical advice and demonstrating how they’ve made real business impact by bringing data science to life.
The MLOps Live Webinar Series: How to Build an Automated AI Chatbot
In this MLOps Live session, Gennaro, Head of Artificial Intelligence and Machine Learning at Sense, describes how he and his team built and refined the Sense chatbot, what their ML pipeline looks like behind the scenes, and how they overcame complex challenges such as building a natural language processing (NLP) serving pipeline with custom model ensembles, tracking question-to-question context, and enabling candidate matching and sentiment tracking.
In this MLOps Live webinar session, Seagate shares how they successfully tackled the challenge of continuous data engineering at scale while keeping costs low and productivity high.
In this session, Yaron and Jiri share enterprise secrets to establishing efficient systems for ML/AI, including building ML pipelines, leveraging a feature store for ML feature sharing and reuse, and automating the entire data science process to take repetitive manual work out of the equation.
Watch Julien Simon (Hugging Face), Noah Gift (MLOps Expert) and Yaron Haviv (Iguazio) discuss how you can deploy models into real business environments, serve them continuously at scale, manage their lifecycle in production, and much more in this on-demand webinar.
In this webinar, we discuss the challenges associated with online feature engineering across training and serving environments, and how feature stores enable teams to collaborate on building, sharing and managing online and offline features across the organization.
Siemens AG's Data Analytics Solutions Expert, Vijay Pravin Maharajan, gives a deep dive into storytelling with data and how to make sure all your hard work developing the right model and building the full-blown ML pipeline also results in a visually pleasing, easy-to-read dashboard that drives quick internal and external adoption.
Tony Paikeday, Senior Director of AI Systems at NVIDIA, discusses why enterprises need a platform that pairs tools for streamlining the data science workflow with leading-edge infrastructure capable of tackling the most complex ML models — one that brings innovative concepts into production sooner and integrates with your existing IT or DevOps approach.
Yaron Haviv speaks with NetApp’s Senior Technical Director of AI & Data Engineering about constructing ML pipelines across federated data and compute environments, and building production-ready AI applications that work in hybrid environments (on-prem, cloud and edge).
Product Madness' Head of Data Science discusses how technology and new work processes help the gaming and mobile app industries predict and mitigate first-day (D0) user churn in real time, and explores a feature engineering improvement to the RFM (Recency, Frequency, Monetary) churn prediction framework: the Discrete Wavelet Transform (DWT).
Ecolab’s Data Science Director discusses how his company is accelerating the deployment of AI applications by using new MLOps methodologies, leveraging microservices and building easier processes for the teams to create a culture of collaboration.
NetApp's Sr. Director of Active IQ AI and Data Engineering explains how the company migrated from Hadoop to a microservices-based, serverless Kubernetes environment, and built a solution for predictive maintenance and actionable intelligence that responds in real time to the 10 trillion data points collected each month from storage controllers worldwide.
David Aronchick, Head of OSS ML Strategy at Microsoft, Marvin Buss, Azure Customer Engineer at Microsoft, and Zander Matheson, Senior Data Scientist at GitHub, discuss using Git to enable continuous delivery of machine learning to production, enable controlled collaboration across ML teams, and solve rigorous MLOps needs.
Iguazio’s Data Scientist discusses how to detect and handle the problems that arise when models lose their accuracy, and how to implement concept drift detection and remediation in production. He shows how to automate MLOps processes at scale, handling drift detection with open source tools.
Quadient’s Director of DXP Innovation shares how industry leaders save costs and get to market faster by streamlining the operationalization of ML, how to run AI models in production at scale without scaling your costs, and how to reduce time and resource consumption by leveraging ML pipeline automation and open source technology.
In the very first episode of the series, Yaron Haviv speaks with S&P Global’s Director of Analytics & ML Engineering about how industry experts are operationalizing ML, how to map a business problem into an automated ML production pipeline, and how to run AI in production at scale.