ML Ops Framework Setup: 12-Week Implementation

Tredence Inc

Automated ML model management (MLOps) to generate higher ROI on data science investments and increase business users' confidence in analytical insights

Objective: Set up an end-to-end MLOps pipeline for up to 6 ML models and equip the client with a clear MLOps framework for onboarding new ML models to production going forward.

Key Challenges Addressed:

  1. Model deployment takes days, if not months
  2. Data and model drift cause models to become ineffective over time
  3. No centralized way to measure model performance
  4. Poor model performance due to lack of testing

Outcome:

  1. A centralized model monitoring system
  2. A visual provenance graph to track model execution
  3. A streamlined model testing framework (an illustrative test is sketched after this list)
  4. Automated and standardized model deployment
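To give a sense of what the streamlined testing framework covers, below is a minimal pytest-style model test sketch. The classifier, the synthetic dataset, and the 0.80 accuracy floor are illustrative assumptions only; the actual tests would be written against the client's models and acceptance criteria.

    # Minimal model-test sketch, assuming a scikit-learn-style classifier and
    # a hypothetical accuracy release gate; details will differ per engagement.
    import numpy as np
    import pytest
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    ACCURACY_FLOOR = 0.80  # assumed release gate for this example

    @pytest.fixture(scope="module")
    def trained_model_and_data():
        # Synthetic stand-in for the client's training data.
        X, y = make_classification(n_samples=2_000, n_features=10, random_state=0)
        X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
        model = LogisticRegression(max_iter=1_000).fit(X_train, y_train)
        return model, X_test, y_test

    def test_accuracy_above_floor(trained_model_and_data):
        # Block promotion if held-out accuracy falls below the agreed floor.
        model, X_test, y_test = trained_model_and_data
        assert model.score(X_test, y_test) >= ACCURACY_FLOOR

    def test_prediction_schema(trained_model_and_data):
        # Predictions must be binary labels, one per input row.
        model, X_test, _ = trained_model_and_data
        preds = model.predict(X_test)
        assert preds.shape == (len(X_test),)
        assert set(np.unique(preds)) <= {0, 1}

Tests of this shape can run in the CI/CD pipeline before a model is deployed, so a failing accuracy or schema check stops the release automatically.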

Implementation Plan:

The break-up of the implementation plan is as follows:

  • Week 1-2: Discovery, to understand the business, the ML models, data sources, and downstream applications.
  • Week 3-6: Integration of model pipelines and drift calculation for two models, and setup of the model testing framework. (A drift-metric sketch follows this list.)
  • Week 7-9: Model drift calculation for three models and activation of the visual provenance graph. The CI/CD pipeline for model deployment using Azure DevOps is also created during this time.
  • Week 10-12: Drift calculation and visual provenance graph for all models, centralized model monitoring, documentation, and an MLOps roadmap for the future.
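As a rough illustration of the drift calculation referenced above, the sketch below computes the Population Stability Index (PSI) between a baseline scoring window and a recent production window. PSI and the thresholds mentioned in the comments are assumptions for illustration; the engagement may use different drift metrics per model.

    # Illustrative drift check using the Population Stability Index (PSI).
    import numpy as np

    def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
        """PSI between a baseline window ('expected') and a recent window
        ('actual') of scores or feature values."""
        # Bin edges come from the baseline distribution; recent values outside
        # that range are clipped into the edge bins.
        edges = np.histogram_bin_edges(expected, bins=bins)
        actual = np.clip(actual, edges[0], edges[-1])
        exp_pct = np.histogram(expected, bins=edges)[0] / len(expected)
        act_pct = np.histogram(actual, bins=edges)[0] / len(actual)
        # Guard against empty bins before taking the log ratio.
        exp_pct = np.clip(exp_pct, 1e-6, None)
        act_pct = np.clip(act_pct, 1e-6, None)
        return float(np.sum((act_pct - exp_pct) * np.log(act_pct / exp_pct)))

    if __name__ == "__main__":
        rng = np.random.default_rng(42)
        baseline = rng.normal(0.0, 1.0, 10_000)  # training-time scores
        current = rng.normal(0.3, 1.2, 10_000)   # recent production scores
        # Common rule of thumb: < 0.1 stable, 0.1-0.25 moderate, > 0.25 significant drift.
        print(f"PSI = {psi(baseline, current):.3f}")

A value computed this way per model and per feature is what the centralized monitoring layer would surface and alert on.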

This implementation uses the following native Azure components:

  • Azure Git: Allows changes to the repository to be made in a controlled way, coordinating work across many people without accidentally overwriting or corrupting files.
  • App Services: The monitoring web app and the Python backend are hosted on Azure Linux App Services. Both apps can be scaled automatically or manually on demand.
  • Microsoft Azure Data Factory: Used to fetch status information for Data Factory pipelines so their execution can be tracked.
  • Databricks Workspace: The MLflow component of Databricks is used to fetch the data logged by notebooks during execution.
  • Cosmos DB: Its flexible schema accommodates the changing nature of the monitoring data. (A sketch combining these services follows this list.)
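To show how these components fit together, the following minimal sketch pulls a pipeline run's status from Azure Data Factory, reads the metrics logged to MLflow by the corresponding Databricks notebook, and upserts the combined record into Cosmos DB for the monitoring web app. All identifiers (subscription, resource group, factory, database, and container names) are hypothetical placeholders, not the engagement's actual resources.

    # Minimal sketch of the monitoring backend's data collection.
    from azure.identity import DefaultAzureCredential
    from azure.mgmt.datafactory import DataFactoryManagementClient
    from azure.cosmos import CosmosClient
    from mlflow.tracking import MlflowClient

    SUBSCRIPTION_ID = "<subscription-id>"           # placeholder
    RESOURCE_GROUP, FACTORY = "ml-rg", "ml-adf"     # hypothetical ADF names
    COSMOS_URL, COSMOS_KEY = "<cosmos-endpoint>", "<cosmos-key>"

    def collect_run_status(adf_run_id: str, mlflow_run_id: str) -> dict:
        """Combine Data Factory pipeline status with MLflow run metrics."""
        credential = DefaultAzureCredential()

        # 1. Pipeline execution status from Azure Data Factory.
        adf = DataFactoryManagementClient(credential, SUBSCRIPTION_ID)
        pipeline_run = adf.pipeline_runs.get(RESOURCE_GROUP, FACTORY, adf_run_id)

        # 2. Metrics logged by the Databricks notebook via MLflow.
        mlflow_client = MlflowClient()  # tracking URI points at the Databricks workspace
        run = mlflow_client.get_run(mlflow_run_id)

        return {
            "id": adf_run_id,
            "pipeline": pipeline_run.pipeline_name,
            "status": pipeline_run.status,
            "metrics": run.data.metrics,   # e.g. accuracy, drift scores
            "params": run.data.params,
        }

    def persist(document: dict) -> None:
        """Upsert the combined record into Cosmos DB for the monitoring web app."""
        cosmos = CosmosClient(COSMOS_URL, credential=COSMOS_KEY)
        container = cosmos.get_database_client("monitoring").get_container_client("model_runs")
        container.upsert_item(document)

Because Cosmos DB does not enforce a schema, new models can log additional metrics without changes to the storage layer, which is the flexibility the component list refers to.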
