MLOps is an automation-first approach that brings together people, processes and technology to enhance cross-team collaboration on machine learning (ML) projects. It also streamlines the iteration, production deployment and operation of ML models. MLOps is a broad concept with many different implications for your organization. Whether you're still determining if you're ready for MLOps, need help getting started, or want to ramp up the skills of your data scientists, we're here to help.

When designing an MLOps implementation, it's important to consider the impact across your people, processes and technology. This article squarely focuses on the technology component, making it easier to identify the tools likely to work best with your IT environment and requirements.

Top considerations for decision-making

Before diving into the world of MLOps, organizations must first develop an understanding of the following considerations, each critical to MLOps success:

  1. Their desired business outcomes for MLOps.
  2. The specific requirements of their current data science environment.
  3. The level of effort required to implement their MLOps vision.

1. Understanding desired business outcomes

Building an MLOps environment is like building with LEGO blocks. Like individual LEGO pieces, each MLOps tool has a different purpose and capability. Understanding the desired business goals of MLOps will empower your team to choose the best tools for the job, ensuring your solution is implemented as seamlessly as possible while aligning with your business needs.

Once established, it's crucial to continually reevaluate your MLOps environment and business goals. This will enable you to identify continual improvement opportunities and ensure your needs are always met. For example, if your organization already has ML models in production but no way to validate or monitor results, then tools that excel at model validation and monitoring, such as Datatron, SAS and TensorFlow Extended (TFX), could be the best way to keep growing your MLOps capabilities.

2. Ensuring compatibility with data science environments

When building with LEGO blocks, the main consideration is whether the pieces you're using fit together. The same is true in MLOps: when building a data processing pipeline, all parts must fit and work together seamlessly for the best possible results. For example, if an organization runs heavily in a public cloud such as AWS or Azure, it will likely require MLOps tools that can run in that environment while interoperating with tools such as MLflow, TFX, Kubernetes or other cross-platform tools. Just as you wouldn't lock two data scientists in separate rooms with no way to communicate, you don't want MLOps tools that are unable to talk to each other.

It's likely that MLOps won't cover the entirety of an organization's data science environment. So it's also important to understand which parts of the data science environment need MLOps tools and which can be handled by simpler, more traditional data science workflows. Determining which parts are priorities for increased automation should aid this decision.

3. Evaluating current level of expertise

Understanding the complexity of each MLOps tool is another important consideration. Some tools, especially open-source ones, have a much higher barrier to entry than paid tools. If your organization has a high level of ML expertise available, open-source tools might be the best option. However, if ML expertise is lacking, it might be worth considering a paid tool.

Paid tools provide a streamlined solution pre-packaged with other tools. Open-source tools are more like designing custom LEGO models: they take more experience and knowledge to build, but provide a more bespoke environment to match specific business needs.

Six key features of MLOps tools

With everyone clamoring to break into this market, you might be dazzled by the range of MLOps tools available. You might be asking, "What are the key offerings to consider when making the decision? Which tools really excel in each area?" 

There are six core capabilities MLOps tools can provide, explored in more detail below:

  1. Data management
  2. Model versioning and storage
  3. Model training and deployment
  4. Model validation
  5. Continuous integration & continuous delivery (CI/CD)
  6. Model monitoring
Figure 1. MLOps pipeline

1. Data management

Data lies at the heart of any ML project. As Figure 1 shows, you will always need to preprocess your data first, regardless of the fancy algorithms and models you want to develop. Specifically, data extraction, validation and preparation are all necessary steps that support later modeling efforts. Without proper data exploration and processing, it is almost impossible for algorithms to learn the mapping between input data and target variables that enables the business outcomes you are seeking. Examples of MLOps tools and platforms that excel at this include Azure ML and TFX.
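To make the extraction, validation and preparation steps concrete, here is a minimal sketch using pandas and scikit-learn. The file name and column names are hypothetical placeholders, and the features are assumed to be numeric:

```python
# Minimal data-preparation sketch; file and column names are hypothetical.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

# Extraction: load raw data from a source system.
df = pd.read_csv("customer_events.csv")  # hypothetical source file

# Validation: enforce basic expectations before modeling.
assert df["target"].notna().all(), "target column contains missing labels"
df = df.drop_duplicates()

# Preparation: split features from the target and scale numeric inputs.
X = df.drop(columns=["target"])  # assumes all remaining columns are numeric
y = df["target"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)  # reuse training statistics to avoid leakage
```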

2. Model versioning and storage

The output of ML projects is the result of repeated iteration of the code and model, as well as the interaction of multiple components, including data, code, the model and, in some cases, meta-information like hyperparameters. In the case of an error or flaw, data scientists have to constantly trace back and correct each version of the code. By properly implementing code versioning and storage, data scientists will be able to retrain the model and reproduce the output over time. In short, versioning and storage are key to the reproducibility of ML, which helps overcome human error in the experimental process.

If you operate a transnational business, you also need to factor in regulations and compliance. You need to account for laws such as the California Consumer Privacy Act of 2018 (CCPA) and the General Data Protection Regulation (GDPR), and maintain a clear lineage of models deployed into your production environment. MLOps tools with a model versioning and storage offering can tag and document the exact data and models that have been deployed, which can help with compliance audits.

Current MLOps tools with this capability include MLflow, GCP AI Hub, SageMaker, Domino Data Science Platform, and Kubeflow Fairing.
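As an illustration, here is a minimal experiment-tracking sketch using MLflow, one of the tools above. The experiment name, hyperparameters and synthetic dataset are hypothetical; the point is only to show how a run's parameters, metrics and model artifact get recorded so it can be traced and reproduced later:

```python
# Minimal MLflow tracking sketch; experiment name and data are illustrative.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, random_state=42)

mlflow.set_experiment("churn-model")  # hypothetical experiment name
with mlflow.start_run():
    params = {"C": 1.0, "max_iter": 200}
    model = LogisticRegression(**params).fit(X, y)

    # Record the exact hyperparameters, metric and model artifact,
    # giving each run a versioned, reproducible lineage.
    mlflow.log_params(params)
    mlflow.log_metric("train_accuracy", model.score(X, y))
    mlflow.sklearn.log_model(model, "model")
```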

3. Model training and deployment

Model training and deployment is an iterative process that involves various stakeholders across different roles, systems, tools and environments. However, for businesses at Level 1 of the ML Maturity Curve, this step may be manual because an automated ML pipeline hasn't been set up yet. Manually integrating the technical package during deployment creates friction and could jeopardize the stability of the environment. MLOps tools offer a way to streamline model training and production deployments and easily scale this activity.

In terms of specific tools, Google Cloud AI Platform, SageMaker, Domino Data Science Platform, TFX, and Kubeflow Pipelines all provide this functionality.
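In an automated pipeline, each of these tools turns training, packaging and serving into discrete, repeatable stages. A stripped-down sketch of that handoff, with a hypothetical artifact name and synthetic data, might look like this:

```python
# Train-then-deploy sketch: the handoff between stages is a serialized
# model artifact. Artifact name and data are hypothetical placeholders.
import joblib
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, random_state=42)

# Training stage: fit the model.
model = RandomForestClassifier(random_state=42).fit(X, y)

# Packaging stage: persist the trained model as a versioned artifact.
joblib.dump(model, "model-v1.joblib")  # hypothetical artifact name

# Deployment stage (e.g., inside a serving process): reload and predict.
serving_model = joblib.load("model-v1.joblib")
print(serving_model.predict(X[:5]))
```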

4. Model validation

Model validation is usually performed in tandem with model development. It is measured using statistical metrics: quantitative measures that evaluate predictions against observations. Examples include confusion matrices, F1 scores and the area under the receiver operating characteristic curve (AUC-ROC). If a model fails to meet these metrics on new data, it goes back to the development phase. Validation matters because it helps minimize bias and enhance model explainability before the model is deployed to the production environment. Testing the model against new data can help ensure it performs as expected.
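The metrics named above can all be computed with scikit-learn on a held-out test split, as in this small sketch (synthetic data for illustration):

```python
# Validation-metrics sketch: confusion matrix, F1 and ROC AUC on held-out data.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix, f1_score, roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

model = LogisticRegression(max_iter=200).fit(X_train, y_train)
y_pred = model.predict(X_test)
y_score = model.predict_proba(X_test)[:, 1]

print(confusion_matrix(y_test, y_pred))            # per-class error breakdown
print("F1:", f1_score(y_test, y_pred))             # balance of precision/recall
print("ROC AUC:", roc_auc_score(y_test, y_score))  # ranking quality
# If these fall below an agreed threshold, the model returns to development.
```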

A number of MLOps tools feature this capability, including Datatron, SAS and TFX.

5. Continuous Integration & Continuous Delivery (CI/CD)

Once your ML model has been designed and delivered, you will find it is constantly being modified and updated. These changes need to be integrated and delivered as quickly and seamlessly as possible, which is where CI/CD comes in. CI/CD introduces automation to solve this challenge.

Continuous integration ensures that changes made to your model are constantly tested and merged, solving the classic problem of too many cooks in the kitchen (or too many developers in the code). Once changes have been consolidated, continuous delivery ensures the most updated version of the model is automatically uploaded to a shared repository and delivered to production. This minimizes the effort associated with delivering new code, and increases visibility between management and development teams.

Together, CI and CD form what is known as the CI/CD pipeline, a key element in the lifecycle of any application that requires constant updates. Some CI/CD tools include Google Cloud Build, AWS CodePipeline, Azure DevOps, GitLab and Jenkins.
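Whichever tool you pick, the pipeline ultimately runs automated checks on every change. A sketch of the kind of pytest-style gate an ML pipeline might run before merging is below; the 0.8 accuracy floor and the training code are hypothetical placeholders for your own model and threshold:

```python
# CI gate sketch: fail the pipeline if a retrained model regresses below
# an agreed accuracy floor. Threshold and model code are hypothetical.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split


def test_model_meets_accuracy_floor():
    X, y = make_classification(n_samples=1000, random_state=42)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)
    model = LogisticRegression(max_iter=200).fit(X_train, y_train)
    assert model.score(X_test, y_test) >= 0.8  # hypothetical floor
```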

6. Model monitoring 

In an ever-changing world, it is crucial to monitor day-to-day operations and track metrics to ensure the accuracy of model performance in production. As the live data fed into the model evolves away from the data it was trained on, the model becomes susceptible to drift. The model output therefore might no longer reflect the actual situation, resulting in outdated and misleading predictions.

MLOps tools can help monitor and prevent drift and its repercussions, saving your data scientists the time and energy of constantly comparing live traffic against the baseline results. 
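One common form that baseline comparison takes is a statistical test between a feature's training-time distribution and its live distribution. A minimal sketch using a two-sample Kolmogorov-Smirnov test follows; the synthetic data and the 0.05 significance level are illustrative assumptions:

```python
# Drift-check sketch: compare live feature values against the training
# baseline with a two-sample KS test. Data and threshold are illustrative.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)
baseline = rng.normal(loc=0.0, scale=1.0, size=1000)  # training-time feature
live = rng.normal(loc=0.4, scale=1.0, size=1000)      # shifted live traffic

stat, p_value = ks_2samp(baseline, live)
if p_value < 0.05:  # hypothetical significance level
    print(f"Drift detected (KS statistic={stat:.3f}); consider retraining.")
```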

Examples include SageMaker Pipelines, Domino Model Monitor, Datatron, TFX and Kubeflow Metadata.

Start the exciting journey to MLOps maturity

MLOps is more than code and tools — it involves the implementation of an entire system. To select the right MLOps tools for your business, you must define your business needs and goals, and assess your current data science landscape to identify where MLOps can make a difference.

Stay tuned for articles that take a deeper dive into the capabilities of certain MLOps tools. Once you feel ready to take the first step, WWT can help your organization develop a plan for building a successful MLOps environment.
