Accelerating ML Deployment in Hybrid Environments

Alexandra Quinn | February 8, 2021

We’re seeing an increase in demand for hybrid AI deployments. This trend can be attributed to a number of factors. First, many enterprises look to hybrid solutions to address data locality requirements driven by rising regulation and data privacy considerations. Second, a growing number of smart edge devices is powering innovative new services across industries. As these devices generate volumes of complex data, which often needs to be processed and analyzed in real time, IT leaders must consider how, and where, to process that data.

For many use cases, migrating data sets from on-premises or edge environments to the cloud can be inefficient, impractical, or prohibitively expensive. Deploying and managing ML on the edge also presents its own challenges. That’s why a hybrid approach may be right for many enterprises. 

Edge ML is an attractive option for data teams looking to analyze the data where it is generated or reduce reliance on cloud networks, but there are drawbacks. There can be multiple compute levels between the point where data is generated and where it will eventually reside. And in many situations, privacy, regulatory, security or contractual issues require that data be stored locally at the edge, while production data is sent to the application in the cloud. 

To handle these complexities, enabling the deployment of ML on both edge and cloud is often preferred. But this type of setup in turn introduces a new set of challenges. Operationalizing machine learning is a complex process, and becomes even more so when hybrid environments are involved. This growing complexity calls for a solution that can abstract away the orchestration, management and automation of ML pipelines. The solution must enable total separation of the data path from the control path and must be delivered by a highly available, scalable entity that orchestrates, tracks and manages ML pipelines across cloud and edge deployments.

Also, the ideal solution would abstract away the data layer and enable preprocessing and analysis of data streams in real time, whether it’s at the edge where data is generated or in the cloud for more complex use cases.

To learn more about simplifying deployment of ML in federated cloud and edge environments, watch this on-demand MLOps Live session here.

Operationalizing Machine Learning in Hybrid Environments

Operationalizing machine learning in a single environment comes with several challenges. Doing so across a hybrid environment while ensuring speed and performance significantly increases these challenges, especially for use cases in which AI needs to be deployed at scale across the entire organization.

One major challenge of running machine learning in hybrid environments is the wide variety of data science tools involved in the process, which makes it difficult to streamline development and deployment across both cloud and edge.

DevOps practitioners, data engineers and data scientists all use different tools. These tools often live in different environments, which makes collaboration and deploying projects to production that much more complicated. When data is spread across data lakes in the cloud, on-premises, or both, or arrives in real-time streams from sensors, data and feature access becomes a critical issue for cross-functional teams.

Handling Data in Hybrid Environments

The entire process of taking an ML project from scratch to deployment and management in production can be broadly segmented into three phases:

  1. Research
  2. Production
  3. Monitoring and governance

During the first phase, training data typically comes from data lakes. In production, data often comes from live streams and operational databases, and then needs to be processed and served. This is problematic in instances where training data sits in a data lake but models need to be deployed at the edge or in local data centers for any number of reasons, including data privacy concerns.

As such, one of the most critical challenges in operationalizing machine learning is handling and moving data around. A typical machine learning pipeline in production involves data collection from a variety of sources, including ETLs from traditional databases or data lakes, real-time streams from sensor data, or web logs. 
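To make this concrete, here is a minimal Python sketch of the two ingestion patterns described above: a batch ETL read from a data lake and a real-time consumer for sensor events. The path, topic and broker address are hypothetical, and Kafka stands in for whichever streaming system is in use.

    import json
    import pandas as pd
    from kafka import KafkaConsumer  # pip install kafka-python

    # Batch path: ETL extract from a data lake (path is illustrative;
    # reading s3:// paths with pandas requires s3fs).
    train_df = pd.read_parquet("s3://example-lake/sensors/2021-02/")

    # Streaming path: real-time sensor events (topic/broker are hypothetical).
    consumer = KafkaConsumer(
        "sensor-events",
        bootstrap_servers="broker.example.com:9092",
        value_deserializer=lambda b: json.loads(b),
    )
    for message in consumer:
        event = message.value  # same feature names as the training columns
        # ... preprocess and route to the model or feature store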

While there are several tools for automating and streamlining model training, the real challenge lies in analyzing and processing the disparate, multi-model data that has been collected and generated using different tools and practices across different environments.

It's also essential to continuously verify model accuracy and prevent drift, which means constantly tracking and monitoring models to ensure that training and serving data remain largely identical. This can be quite difficult in a distributed environment.
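As a minimal sketch of that training-versus-serving check, the snippet below compares each numeric feature's distribution in a window of serving data against the training set using a two-sample Kolmogorov-Smirnov test. The threshold and column handling are illustrative, not a prescription for any particular platform.

    import pandas as pd
    from scipy.stats import ks_2samp  # pip install scipy

    def detect_drift(train_df: pd.DataFrame, serving_df: pd.DataFrame,
                     p_threshold: float = 0.01) -> dict:
        """Flag numeric features whose serving distribution diverges from training."""
        drifted = {}
        for col in train_df.select_dtypes("number").columns:
            if col in serving_df.columns:
                stat, p_value = ks_2samp(train_df[col].dropna(),
                                         serving_df[col].dropna())
                if p_value < p_threshold:
                    drifted[col] = {"ks_stat": stat, "p_value": p_value}
        return drifted

A check like this would run periodically against a sliding window of serving data, with drifted features triggering an alert or a re-training job.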

Solving these challenges requires a seamless way to harmonize and handle data uniformly, irrespective of the data sources, APIs, tools or methodologies in use. This is where the concept of a federated feature store is helpful: a feature store catalogs and manages all features, whether they live in the cloud, on-premises, or in batch and real-time streams.
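To make the feature store idea concrete, here is a brief sketch using the feature store API of MLRun, Iguazio's open-source MLOps framework (covered later in this post). The set, entity and DataFrame names are hypothetical, and the exact API differs between MLRun versions.

    import mlrun.feature_store as fstore

    # Define a feature set keyed by device ID (names are hypothetical).
    sensors = fstore.FeatureSet("sensor-readings",
                                entities=[fstore.Entity("device_id")])

    # Ingest a batch DataFrame; the same definition can also ingest streams.
    fstore.ingest(sensors, readings_df)  # readings_df: an existing DataFrame

    # Later, anywhere in the hybrid environment, assemble a training set:
    vector = fstore.FeatureVector("training-set", ["sensor-readings.*"])
    train_df = fstore.get_offline_features(vector).to_dataframe()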

Iguazio and AWS Outposts

Iguazio recently achieved the AWS Outposts Ready designation, enabling the Iguazio Data Science Platform to run in local AWS Outposts environments. The seamless integration of Iguazio’s platform with AWS Outposts gives enterprises more flexible deployment options for productionizing ML in hybrid environments.

AWS Outposts is a fully managed service that brings the power of AWS infrastructure, services, APIs and tools to virtually any data center, co-location space or on-premises facility of choice.

With Outposts, ML teams can increase their productivity and innovation by taking the same development, deployment and operational efficiency they already enjoy in the cloud and extending it to workloads that need to run locally or in environments with no nearby AWS Region, then feeding the results back to the Region as needed. The result is a truly consistent development experience.

The Iguazio Data Science Platform provides an extremely fast, real-time, multi-model data layer, a high-performance serverless framework, and full ML pipeline orchestration, enabling the automation of offline or real-time ML pipelines. It also comes with a built-in feature store and model monitoring and governance capabilities, including automated triggering of re-training for models that have drifted.

ML models at the edge have very low latency (single-digit millisecond) requirements. Outposts enables edge deployment applications to meet those latency requirements while speeding up time-to-market for enterprise use cases.

Combining Iguazio’s automated MLOps solution and high-performance serverless framework with Outposts’ fully managed service provides teams with a truly consistent hybrid development and deployment experience.

This setup helps enterprises speed up and simplify the deployment of data science in hybrid environments and accelerates workloads by processing data locally before sending it back up to the cloud. It also gives ML teams across the entire organization access to the same reliable, secure, high-performance infrastructure, while keeping pace with the speed of innovation and ensuring the same operational consistency obtainable in AWS.

A truly holistic solution for operationalizing ML in hybrid environments should allow teams to develop in the cloud and deploy to Outposts or vice versa on a single stack.

Operational MLOps Stack for Cloud and Edge

For an ML stack where pipelines need to be built at the edge or in an on-premises location, the hardware and lower-level components are provided by AWS Outposts, on top of which sits Amazon EKS (managed Kubernetes) together with the various storage and VM APIs.

On top of that, a set of data services runs microservices and machine learning. This is where time-series databases, streaming and message queues, NoSQL tables, and a distributed file system come in—all of which are integrated within Iguazio’s real-time data layer.

This data processing and storage engine is the foundation on which you can run all the different analytics and machine learning services needed for MLOps and DevOps. It also lets you layer in higher-level functionality: a feature store, which can be federated and connected to the cloud, together with an orchestration layer for real-time CI/CD pipelines and full lifecycle management.

Orchestration should also manage:

  • the federated environment
  • the deployment of the models from cloud to edge (or vice versa)
  • automated pipelines
  • the CI/CD process

MLRun, Iguazio’s open-source automated and scalable MLOps orchestration framework, does all of this beautifully.

The joint solution between AWS and Iguazio can take your code, build the containers and deploy them to the edge location with a single command, while simultaneously providing the tools to automate model monitoring, management and governance.
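A minimal sketch of that flow with MLRun is shown below. The project, file and model names are hypothetical, and the target cluster (cloud or Outposts) is simply whichever environment the MLRun client is pointed at.

    import mlrun

    project = mlrun.get_or_create_project("hybrid-demo", context="./")

    # Package local code into a serving function; MLRun builds the container.
    serving_fn = mlrun.code_to_function(
        "model-server",
        filename="serving.py",  # hypothetical file containing the model class
        kind="serving",
        image="mlrun/mlrun",
    )
    serving_fn.add_model("my-model",
                         model_path="store://models/hybrid-demo/my-model")
    serving_fn.set_tracking()  # enable model monitoring / drift tracking
    serving_fn.deploy()        # build and deploy in a single command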

Wrapping Up

With an operational MLOps stack, teams can develop full-blown ML pipelines quickly and deploy them across cloud and edge while meeting compliance, scale and performance requirements.

This is great news for Iguazio and AWS customers who develop AI models using Amazon SageMaker. ML projects can now be deployed (and managed) in hybrid production environments using Iguazio on AWS cloud / AWS Outposts.

Iguazio on AWS Outposts is a great fit for data residency use cases, handling sensitive data such as bank records, medical records and personal information while adhering to strict regulations and privacy requirements. With Outposts, sensitive records can be housed within the country, state or municipal area, while processing and operations are performed locally on the data.

Deploying AI on local AWS Outposts environments using the Iguazio platform provides a simple way for ML teams to work (and leverage the same APIs and tools) across hybrid cloud and edge environments, without compromising on speed or performance.

Start Your Journey: Book a Live Demo.