Looking into 2022: Predictions for a New Year in MLOps

Yaron Haviv | December 29, 2021

In an era where the passage of time seems to have changed somehow, it feels strange to already be reflecting on another year gone by. It’s a cliché for a reason: the world really does feel like it’s moving faster than ever, and in some completely unexpected directions. Sometimes it feels like we’re living in a time lapse when I consider the pace of technological progress I’ve witnessed in just a year. The cool thing about being in the ML industry for so long is that I have a front-row seat to a fascinating market characterized by rapid innovation.

So before we toast to a new (and better!) year ahead, here are my predictions of what awaits the ML industry in 2022:

From AutoML to AutoMLOps

2022 will be the start of a focus shift from the practice of model creation to a holistic view of productizing AI. The next step will be a set of best practices and repeatable MLOps processes that help small teams roll out complete AI services on an ongoing basis. We’re not talking about putting a notebook into production, but about building services with ML baked in that continuously deliver bottom-line business value.

Until recently, much of the focus was on automating the ML training process, that is, AutoML. But the most significant problem in ML is not finding the best algorithm and parameters; it is deploying those algorithms as part of an application with business impact. This involves collecting data from operational systems, generating features for training and serving, creating automated model training and deployment workflows, integrating the models into real-time/interactive business applications, driving meaningful actions from the models, monitoring, and much more.

The silos that proliferate on ML teams are cumbersome, so a lot of the work in 2022 will be about the transition from a fragmented process to a more holistic MLOps pipeline. Much of the innovation in the space will move into automating and orchestrating the MLOps tasks, from data collection and preparation through training, testing, deployment, monitoring, and retraining.
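To make that concrete, here is a minimal sketch of an automated pipeline covering those stages, with each one as a composable step. The function bodies are illustrative stubs (none of them come from the article); in a real setup, each step would run as an orchestrated, repeatable task in a workflow engine.

```python
# A toy end-to-end MLOps pipeline: ingest -> features -> train -> evaluate -> deploy.
# All data and logic here are placeholder stubs for illustration only.
from sklearn.linear_model import LogisticRegression

def ingest_data():
    # Placeholder: pull raw records from operational systems.
    return [
        {"customer_id": 1, "amount": 42.0, "churned": 0},
        {"customer_id": 2, "amount": 7.5, "churned": 1},
    ]

def build_features(raw):
    # Placeholder: derive the same features used for both training and serving.
    X = [[r["amount"]] for r in raw]
    y = [r["churned"] for r in raw]
    return X, y

def train(X, y):
    return LogisticRegression().fit(X, y)

def deploy(model):
    print("deploying model to the serving layer")  # placeholder

def run_pipeline():
    raw = ingest_data()
    X, y = build_features(raw)
    model = train(X, y)
    if model.score(X, y) > 0.8:  # gate deployment on a quality threshold
        deploy(model)
    # Monitoring and scheduled retraining would close the loop from here.

if __name__ == "__main__":
    run_pipeline()
```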

Feature Store Usage Will Become More Mainstream 

As enterprise AI matures, I predict that feature stores will become a mainstream component of ML tech stacks. What began as high-investment, highly customized internal platforms for ML heavy hitters like Uber, Netflix, and Twitter is now available off the shelf “for the rest of us”. Now that smaller organizations have access to this critical enabler, 2022 will likely see an expansion of AI adoption across organizations, decreased time to market for AI projects, and, for organizations using online feature stores, an increase in real-time use case rollouts. In a report from May 2021, Gartner also covered the current state and future directions of feature stores.
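As an illustration, here is a minimal sketch using Feast, one of the open-source feature stores now available off the shelf. The feature view and entity names are hypothetical, and API details vary by Feast version.

```python
# A minimal sketch of an online feature lookup at serving time with Feast.
# Assumes a feature repo has already been configured and materialized;
# the feature and entity names below are hypothetical.
from feast import FeatureStore

store = FeatureStore(repo_path=".")  # points at the configured feature repo

# Low-latency lookup of fresh feature values for a single customer.
features = store.get_online_features(
    features=[
        "customer_stats:avg_order_value",
        "customer_stats:orders_last_7d",
    ],
    entity_rows=[{"customer_id": 1001}],
).to_dict()

print(features)
```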

Measuring the Business Impact of AI Projects Will Become the Norm

Measuring business impact can get very granular with ML applications, and in the coming year I think we’ll see a lot more organizations understand how to measure this. The ROI of an AI application is not determined solely by the model’s performance, but by the business outcomes of the model’s predictions. For example, if we identify that a customer is about to churn, what kind of incentive should we give them to stay? When a machine is about to fail, should we stop it now, or perhaps wait for the next maintenance window? More focus will be placed on selecting the proper action and measuring the business impact and ROI of that action. In some cases, we will try different actions in parallel and measure which one yields the best business value (we can call this A/B testing for the action).
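Here is a minimal sketch of what that could look like in practice, assuming a churn model has already flagged a customer: the retention action is randomized per customer, and the measured outcome is logged per action, so the variants can be compared on business value. All names here are hypothetical.

```python
# "A/B testing for the action": hold the model fixed, randomize the follow-up
# action, and compare the measured business outcome per action variant.
import hashlib
from collections import defaultdict

ACTIONS = ["10_percent_discount", "free_month", "support_call"]
outcomes = defaultdict(list)  # action -> list of 1/0 retained flags

def choose_action(customer_id: str) -> str:
    # Stable hash-based assignment so a customer always gets the same variant.
    digest = int(hashlib.md5(customer_id.encode()).hexdigest(), 16)
    return ACTIONS[digest % len(ACTIONS)]

def record_outcome(action: str, retained: bool) -> None:
    outcomes[action].append(1 if retained else 0)

def retention_rate(action: str) -> float:
    results = outcomes[action]
    return sum(results) / len(results) if results else 0.0

# Usage: when the churn model flags a customer, pick and apply an action...
action = choose_action("customer-123")
# ...and once the retention outcome is known, log it and compare variants.
record_outcome(action, retained=True)
print({a: retention_rate(a) for a in ACTIONS})
```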

In my conversations with enterprise data science teams building their AI strategy, I always advocate for taking a production-first approach, and I see a similar dynamic happening on the business level. Business owners of AI/ML projects will spend 2022 gearing up to take a business-first approach to ML and MLOps. This means getting all the teams aligned around the business impact of AI projects, and ensuring that what the technical teams are building is not left in the lab. As AI becomes easier to productize, understanding the ROI of these initiatives will become easier as well.

Real-Time ML Pipelines Will Go Mainstream 

I hear lots of variations on the sentiment, “yes, we want to respond to events in real time, but it’s too time-intensive for this quarter, so we’ll push that off to next quarter/year”. In previous years, real-time use cases were clearly valuable, but the complexity involved and the long time to production forced many teams to scale back their ML services (and therefore their impact) in favor of batch processing.

Everyone understands that responding immediately to the most recent and relevant data can lead to a much greater impact of AI/ML on business results. Blocking a fraudulent transaction saves far more money than detecting it after the fact, making product recommendations based on a recent purchase while the customer is still in the store will convert more shoppers, detecting and alerting on hazards or deteriorating health situations can save more lives, and so on. If the past couple of years have taught us anything, it’s that real-time ML pipelines are becoming a necessity to keep predictions accurate over time. Business is moving ever faster, data shifts are happening more often, and luckily technology is catching up.

Application value is about knowing things as they happen. Delivering product recommendations at just the right time, preventing fraudulent transactions, and detecting patient health deterioration when every minute counts all require real-time data inputs, calculations, and responses. Enterprises now need a data pipeline architecture that can handle millions of events at scale, in real time, and they are starting to realize that this technology exists; it’s just a matter of embracing it.
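As a sketch of the pattern, here is what a real-time scoring loop over a stream of transaction events could look like, using Kafka as the event source. The topic names, feature lookup, and model below are illustrative stubs, not from the article; the kafka-python client is one of several ways to consume the stream.

```python
# Score each event as it arrives and act immediately, rather than in a nightly batch.
import json
from kafka import KafkaConsumer, KafkaProducer  # pip install kafka-python

def lookup_online_features(customer_id):
    # Placeholder: in practice, fetch fresh features from an online feature store.
    return [0.2, 3.0]

class StubModel:
    def predict(self, rows):
        # Placeholder: a real model would be loaded from a model registry.
        return [0.95 for _ in rows]

model = StubModel()

consumer = KafkaConsumer(
    "transactions",
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

for event in consumer:
    txn = event.value
    features = lookup_online_features(txn["customer_id"]) + [txn["amount"]]
    fraud_score = model.predict([features])[0]
    if fraud_score > 0.9:
        producer.send("blocked-transactions", txn)  # block now, while it still matters
```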

Wider Adoption of Composable AI Principles

Another productivity booster that I think we’ll see a lot of ML teams adopt in 2022 is the concept of composable AI. With this approach, ML teams compose pipelines from pre-built components that automate repetitive tasks from training to production. From data preparation to testing, deployment, and monitoring, ML teams can implement composability throughout the pipeline. Moving to a composable ML/AI architecture is an inevitable step as the complexity of ML pipelines increases, driving ML teams to increase collaboration and reuse.

With composable AI, functions can be selected from a public or local marketplace, or custom functions can be written and published. Functions can consume or produce parameters and data/model artifacts, all of which are tracked along the pipeline. The functions, pipelines, and artifacts can be organized in projects and integrated with source control systems such as Git.
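This isn’t tied to a specific tool, but as one illustration of the pattern, here is a minimal sketch using Kubeflow Pipelines, where pre-built components are loaded from a shared location and composed into a pipeline. The component URLs and parameter names below are hypothetical placeholders.

```python
# Composing a pipeline from pre-built, shared components (Kubeflow Pipelines v1 SDK).
import kfp
from kfp import dsl
from kfp.components import load_component_from_url

# Load pre-built components instead of rewriting the steps from scratch.
# These URLs are hypothetical placeholders for a public or local marketplace.
prep_op = load_component_from_url(
    "https://example.com/components/data-prep/component.yaml")
train_op = load_component_from_url(
    "https://example.com/components/train/component.yaml")

@dsl.pipeline(name="composable-training-pipeline")
def pipeline(raw_data_path: str):
    # Parameters and artifacts flow between steps and are tracked by the engine.
    prep = prep_op(input_path=raw_data_path)
    train_op(training_data=prep.outputs["output_data"])

if __name__ == "__main__":
    kfp.compiler.Compiler().compile(pipeline, "pipeline.yaml")
```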

Looking into 2022, I believe this will be the year of MLOps maturity, where we will really see MLOps play a major part in the innovation strategy of an increasing number of enterprises, across verticals. I look forward to what I’m sure will be a very interesting year for our field. 

In the meantime, I wish you and yours a very happy and healthy new year from all of us at Iguazio.
