This is not another high-level webinar series about AI. These sessions go beyond theory, with industry leaders sharing practical advice and demonstrating how they’ve made real business impact by bringing data science to life.
The MLOps Live Webinar Series: How Seagate Runs Advanced Manufacturing at Scale
Seagate is a global leader in data storage. Data is in their DNA, yet data engineering is extremely complex, and even the talented Seagate team faced numerous challenges doing it at scale. In this MLOps Live webinar session, Seagate shared the story of how they successfully tackled those challenges to handle continuous data engineering at scale while keeping costs low and productivity high.
Watch Julien Simon (Hugging Face), Noah Gift (MLOps Expert) and Yaron Haviv (Iguazio) discuss how you can deploy models into real business environments, serve them continuously at scale, manage their lifecycle in production, and much more in this on-demand webinar!
In this webinar we discuss how S&P Global (IHS Markit) built a sophisticated and automated NLP pipeline that works at scale, making massive amounts of PDF documents searchable and indexable.
In this webinar, we discuss the challenges associated with online feature engineering across training and serving environments, and how feature stores enable teams to collaborate on building, sharing and managing online and offline features across the organization.
In this technical training session, we explore how to run a distributed cloud or edge application on Amazon Cloud and Outposts with the Iguazio MLOps Platform.
Siemens AG's Data Analytics Solutions Expert, Vijay Pravin Maharajan, gives a deep dive into storytelling with data: how to make sure all your hard work developing the right model and building the full-blown ML pipeline also results in a visually pleasing, easy-to-read dashboard that drives quick internal and external adoption.
Tony Paikeday, Senior Director of AI Systems at NVIDIA, discusses why enterprises need a platform that brings together tools to streamline the data science workflow with leading-edge infrastructure that can tackle the most complex ML models — one that can bring innovative concepts into production sooner, integrated with your existing IT or DevOps-grounded approach.
Yaron Haviv speaks with NetApp’s Senior Technical Director of AI & Data Engineering about constructing ML pipelines across federated data and compute environments, and building production-ready AI applications that work in hybrid environments (on-prem, cloud and edge).
Product Madness' Head of Data Science discusses how technology and new work processes help the gaming and mobile app industries predict and mitigate first-day (or D0) user churn in real time, and explores a feature engineering improvement to the RFM (Recency, Frequency, and Monetary) churn prediction framework: the Discrete Wavelet Transform (DWT).
Ecolab’s Data Science Director discusses how his company is accelerating the deployment of AI applications by using new MLOps methodologies, leveraging microservices and building easier processes for the teams to create a culture of collaboration.
NetApp's Sr. Director of Active IQ AI and Data Engineering explains how the company migrated from Hadoop to a microservices-based, Kubernetes and serverless environment, and built a solution for predictive maintenance and actionable intelligence that responds in real time to the 10 trillion data points collected each month from storage controllers worldwide.
David Aronchick, Head of OSS ML Strategy at Microsoft, Marvin Buss, Azure Customer Engineer at Microsoft, and Zander Matheson, Senior Data Scientist at GitHub, discuss using Git to enable continuous delivery of machine learning to production, enable controlled collaboration across ML teams, and solve rigorous MLOps needs.
Iguazio’s Data Scientist discusses how to detect and handle the problems that arise when models lose their accuracy, and how to implement concept drift detection and remediation in production. He shows how to automate MLOps processes at scale, handling drift detection with open source tools.
Quadient’s Director of DXP Innovation shares how industry leaders save costs and get to market faster by streamlining the operationalization of ML, how to run AI models in production at scale without scaling your costs, and how to reduce time and resource consumption by leveraging ML pipeline automation and open source technology.
In the very first episode of the series, Yaron Haviv speaks with S&P Global’s Director of Analytics & ML Engineering about how industry experts are operationalizing ML, how to map a business problem into an automated ML production pipeline, and how to run AI in production at scale.