
Static Deployment vs. Dynamic Deployment: What’s the Difference?  

In Static Deployment, the model is trained offline on batch data. Training happens once, using features generated from historical batch data, and the resulting model is deployed. Typically the model is trained on a local machine or training environment, then saved and transferred to a server that generates the predictions. All predictions are precomputed in batch at a fixed interval (for example, once a day or once every 2 hours). This type of deployment suits use cases such as content-based recommendations (DoorDash’s restaurant recommendations, or Netflix’s recommendations circa 2021).
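
To make this concrete, here is a minimal sketch of a static (batch) scoring job, assuming a scikit-learn model saved with joblib; the file paths, column names, and schedule are illustrative assumptions, not a prescribed setup.

```python
# Minimal sketch of static deployment: score a batch of records on a schedule
# and persist precomputed predictions for downstream systems to read.
import joblib
import pandas as pd


def run_batch_scoring(model_path: str, features_path: str, output_path: str) -> None:
    """Score the latest batch of users and store the precomputed predictions."""
    model = joblib.load(model_path)            # model was trained once, offline
    features = pd.read_parquet(features_path)  # features built from historical batch data
    inputs = features.drop(columns=["user_id"])
    features["prediction"] = model.predict(inputs)
    # Downstream services read these precomputed results; no live inference happens here.
    features[["user_id", "prediction"]].to_parquet(output_path)


if __name__ == "__main__":
    # Typically triggered at a fixed interval, e.g. daily via cron or an orchestrator.
    run_batch_scoring("model.joblib", "daily_features.parquet", "predictions.parquet")
```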

In Dynamic Deployment, the model is trained online, or with a combination of online and offline features. Data continually enters the system and is incorporated into the model through continuous updates. Predictions are made on demand by a server: the model is deployed behind a web framework such as FastAPI or Flask and exposed as an API endpoint that responds to user requests whenever they occur. Examples of use cases for this type of deployment include real-time fraud detection and prevention, delivery or logistics time estimates, and real-time recommendations.
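
Below is a minimal sketch of an on-demand prediction endpoint using FastAPI, one of the frameworks mentioned above. The fraud-detection framing, model file, and request fields are hypothetical placeholders chosen for illustration.

```python
# Minimal sketch of dynamic deployment: the model is loaded into a web service
# and predictions are computed per request instead of being precomputed in batch.
import joblib
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
model = joblib.load("fraud_model.joblib")  # loaded once at service startup


class Transaction(BaseModel):
    amount: float
    merchant_risk_score: float
    seconds_since_last_txn: float


@app.post("/predict")
def predict(txn: Transaction) -> dict:
    # Inference happens on demand, whenever a request arrives.
    features = [[txn.amount, txn.merchant_risk_score, txn.seconds_since_last_txn]]
    score = model.predict_proba(features)[0][1]
    return {"fraud_probability": float(score)}

# Run with: uvicorn app:app --host 0.0.0.0 --port 8000
```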
