It’s a wrap! We had a full house at MLOps NYC, Iguazio’s annual conference about managing and automating machine learning pipelines in order to bring data science into business applications. With an outstanding caliber of speakers and attendees, the conference went beyond theory, shedding light on both painful and successful machine learning experiences involving running experiments at scale, versioning,…
Modernize your IT Infrastructure Monitoring by Combining Time Series Databases with Machine Learning
Let’s explore the complexity and vulnerability of IT infrastructure and how to build a modern IT infrastructure monitoring solution, using a combination of time series databases with machine learning.
Today we all choose between the simplicity of Python tools (pandas, Scikit-learn), the scalability of Spark and Hadoop, and the operational readiness of Kubernetes. We end up using them all.
You’ve played around with machine learning, learned about the mysteries of neural networks, almost won a Kaggle competition, and now you feel ready to translate all this into real-world impact. It’s time to build some real AI-based applications.
Ever wonder if it’s possible to train machine learning (ML) models with regulated data which can’t be sent to the cloud? Has your edge solution gathered so much data that it just doesn’t make sense to send it all to
With all the turmoil and uncertainty surrounding the large Hadoop distributors in the past few weeks, many wonder: what’s happening to the data framework we’ve all been working with for years?
We use a Nuclio serverless function to “listen” to a Kafka stream and then ingest its events into our time series table.
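The pattern above can be sketched in a few lines. This is a minimal, hypothetical example: the handler signature (`handler(context, event)`) follows Nuclio’s Python convention, but the in-memory `tsdb` dictionary is a stand-in for a real time series table client, and the event payload shape (`metric`, `ts`, `value`) is an assumption for illustration.

```python
import json
from collections import defaultdict

# Stand-in for the real time series table (hypothetical; in production
# this would be a TSDB client writing to persistent storage).
tsdb = defaultdict(list)

def handler(context, event):
    """Nuclio-style handler: invoked once per Kafka event when the
    function is wired to a Kafka trigger. Parses the event body and
    appends the sample to a per-metric time series."""
    # Assumed payload shape: {"metric": "cpu", "ts": 1599999999, "value": 0.7}
    record = json.loads(event.body)
    tsdb[record["metric"]].append((record["ts"], record["value"]))
    return "ingested %s" % record["metric"]
```

With a Kafka trigger configured on the function, Nuclio handles consuming the stream and scaling workers; the handler only needs to deal with one event at a time.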
Still waiting for ML training to be over? Tired of running experiments manually? Not sure how to reproduce results? Wasting too much of your time on devops and data wrangling?
Yaron Haviv explains serverless and its limitations, providing a hands-on example of using a serverless architecture to simplify data science development and accelerate time to production for data collection, exploration, model training and serving.
Imagine a system where one can easily develop a machine learning model, click on some magic button and run the code in production without any heavy lifting from data engineers…