From enhancing medical screening to detecting fraudulent transactions, AI is radically changing the way businesses use their data to drive value. However, as new AI tools spread across industries, they generate more than solutions: an ever-growing list of ethical concerns emerges as AI is adopted in new arenas.

Machine learning operations (MLOps) is a framework for observing and remediating ethical concerns as part of the AI development cycle. In this article, we'll briefly discuss why responsible AI is important, what we mean by MLOps, and how MLOps can help organizations develop and deploy AI models consistent with their ethical standards.

Ethical challenges in AI advancement

Responsible AI is more than an academic exercise. Neglecting responsible AI practices can cause real harm, from perpetuating biases present in training data to generating unreliable information that misleads research and decision-making.

For example, in 2018 the Gender Shades project demonstrated that a specific facial recognition technology failed far more often for users from marginalized demographics. A 2016 ProPublica study of the COMPAS algorithm, a tool used by judges to predict whether defendants should be detained or released on bail pending trial, exposed biases against certain racial groups.

These kinds of concerns apply to all forms of AI trained on natural data (i.e., datasets gathered from the real world rather than generated synthetically), including datasets used to train large language models (LLMs) like ChatGPT, image analysis tools, and ML models that describe and predict complex relationships between variables.

Implementing MLOps increases model oversight, making it possible to catch and correct problematic model behavior before a model reaches users.

MLOps lifecycle: A framework for responsible AI

At its core, MLOps is a series of steps that span the entire machine learning (ML) development lifecycle:

  1. Discovery: Gathering data and use cases.
  2. Model training: Analyzing data and initial model training.
  3. Model validation: Checking the trained model against readiness metrics and standards.
  4. Production: Deploying the model to realize business value.
  5. Model refresh: Maintaining model compliance and accuracy.

Analogous to DevOps, MLOps iterates through these steps to develop reliable AI models efficiently. Together, the five stages form an adaptable framework in which each step is clearly visible and closely monitored, a property that is crucial for developing AI systems that behave consistently with ethical standards.
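
To make the loop concrete, here is a minimal sketch of how the five stages might be wired together with a validation gate. The stage functions below (discover, train, validate, deploy, needs_refresh) are hypothetical placeholders standing in for an organization's real tooling, not the API of any particular framework.

```python
# A minimal sketch of the MLOps loop described above. Each stage function is a
# hypothetical placeholder for real tooling.

def discover():
    # Discovery: gather data and use cases.
    return {"features": [[0.2, 1.1], [0.9, 0.4]], "labels": [0, 1]}

def train(data):
    # Model training: fit a model to the gathered data (stubbed here).
    return {"weights": [0.5, -0.3]}

def validate(model, data):
    # Model validation: technical readiness AND ethical stage gates.
    accuracy_ok = True   # e.g., accuracy above an agreed threshold
    fairness_ok = True   # e.g., per-group error gap below a limit
    return accuracy_ok and fairness_ok

def deploy(model):
    # Production: release the model to realize business value.
    print("Deployed model:", model)

def needs_refresh():
    # Model refresh: monitoring for drift or compliance issues.
    return False

data = discover()
model = train(data)
if validate(model, data):       # gate: only compliant models ship
    deploy(model)
while needs_refresh():          # monitoring can restart the loop at any time
    data = discover()
    model = train(data)
    if validate(model, data):
        deploy(model)
```

The point of the skeleton is the gate: a model that fails validation never reaches deployment, and ongoing monitoring can send the pipeline back to earlier stages.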

Applying responsible AI practices through MLOps means taking advantage of the approach's increased monitoring and automated stage gates to keep models on track toward ethical compliance. In practice, this means adding steps such as human-designed data collection plans during Discovery and human-calculated bias metrics during Model Validation. These oversight steps leverage the transparency and agility of MLOps to add ethical guardrails to the development process, and they benefit businesses by proactively reducing the risk of downstream ethical issues without interrupting the automated, iterative work of technical model improvement.

Once an ethical concern arises (say a calculated metric detects that a facial recognition tool consistently fails more often for a particular demographic), MLOps enables intervention at any stage of the development lifecycle.
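
As a sketch of what such a calculated metric might look like, the hypothetical gate below compares failure rates across demographic groups in evaluation data and blocks deployment when the gap exceeds a threshold. The 5 percent limit and the data format are illustrative assumptions; real limits are policy decisions.

```python
# Hypothetical Model Validation gate: compare failure rates across demographic
# groups and block deployment when the gap exceeds an agreed threshold.
from collections import defaultdict

def group_failure_rates(records):
    """records: list of (group, prediction_correct) pairs from evaluation."""
    totals, failures = defaultdict(int), defaultdict(int)
    for group, correct in records:
        totals[group] += 1
        if not correct:
            failures[group] += 1
    return {g: failures[g] / totals[g] for g in totals}

def passes_fairness_gate(records, max_gap=0.05):
    # max_gap is an illustrative threshold, not an industry standard.
    rates = group_failure_rates(records)
    return max(rates.values()) - min(rates.values()) <= max_gap

# Toy evaluation results: (demographic group, was the prediction correct?)
results = [("A", True), ("A", True), ("A", False),
           ("B", True), ("B", False), ("B", False)]
print(group_failure_rates(results))              # {'A': 0.33..., 'B': 0.66...}
print("deploy?", passes_fairness_gate(results))  # False: the gap is too wide
```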

When a development team identifies an ethical concern before model deployment, ethical compliance can be built into the model during the Discovery and Model Training stages. For example, when working with data about people, training a model on data in which certain groups are underrepresented raises an ethical concern: such a model may fail more often when it encounters new data related to those underrepresented groups after deployment. In response, engineers can plan to assemble a diverse and inclusive data pool before model training begins. Data analysis tools that detect biased data can be built into the MLOps pipeline, ensuring the data used is sufficiently representative of the relevant subpopulations.
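
A representation check of this kind can be only a few lines of code. The hypothetical sketch below flags any subpopulation that falls below a minimum share of the candidate training data; the 10 percent floor is an illustrative assumption, and the right value is use-case specific.

```python
# Hypothetical Discovery-stage check: flag subpopulations that fall below a
# minimum share of the candidate training data before training begins.
from collections import Counter

def underrepresented_groups(group_labels, min_share=0.10):
    # min_share is an illustrative floor, not a universal standard.
    counts = Counter(group_labels)
    total = len(group_labels)
    return [g for g, n in counts.items() if n / total < min_share]

sample_groups = ["A"] * 70 + ["B"] * 25 + ["C"] * 5
flagged = underrepresented_groups(sample_groups)
if flagged:
    # Block training and return to data collection until resolved.
    print("Collect more data for groups:", flagged)  # ['C']
```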

Ethical concerns can also arise after model deployment. The Model Refresh stage in the MLOps lifecycle is designed to smoothly initiate repair and retraining of a model to fix biased or unethical behavior. However, even a thoughtfully trained and well-maintained model can slip into unethical behavior through a process known as model "drift." Drift occurs when the underlying relationships between a model's inputs and outputs evolve as society, the environment and other parts of the real world change. Mitigating bias here requires ongoing evaluation: MLOps enables regular tracking of performance metrics, making it easier to spot new concerns. Such concerns can then trigger the automatic initiation of a new round of data collection, model retraining, and automated testing and quality checks. This streamlines the deployment of each model version into the desired service environment, minimizing both the time a biased version remains in operation and any service downtime.
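
As an illustration of this kind of monitoring, the hypothetical sketch below compares recent live accuracy against a baseline captured at deployment and signals when retraining should begin. The tolerance and window size are illustrative assumptions, not recommended values.

```python
# Hypothetical Model Refresh monitor: compare recent live accuracy against a
# baseline captured at deployment and flag drift when the drop exceeds a
# tolerance.

def drift_detected(baseline_accuracy, recent_outcomes, tolerance=0.03):
    """recent_outcomes: booleans (prediction correct?) from live traffic."""
    if not recent_outcomes:
        return False
    recent_accuracy = sum(recent_outcomes) / len(recent_outcomes)
    return baseline_accuracy - recent_accuracy > tolerance

baseline = 0.92                            # accuracy measured at deployment
live_window = [True] * 85 + [False] * 15   # 85% accuracy in the latest window
if drift_detected(baseline, live_window):
    print("Drift detected: trigger data collection and retraining")
```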

Implementing and maintaining responsible AI solutions

Successful ethical interventions in MLOps depend on consistent collaboration among data scientists, engineers and other business stakeholders. Together, these groups can invest in the reliability and reputation of their AI systems by aligning with established ethical guidelines and defining how and when compliance is measured as new AI tools are developed and deployed.

However, translating ethical initiatives into tangible solutions is no simple feat. Meeting an organization's specific responsible AI needs requires securing access to the appropriate resources, expertise and testing environments to transform intention into real action. This is where WWT's Advanced Technology Center (ATC) steps in. Our ATC is a digitally accessible lab environment that gives clients and partners a minimal-risk place to:

  • Experiment with the latest AI hardware, software and reference architectures in the new AI Proving Ground.
  • Explore how new technologies and processes will integrate with existing infrastructure.
  • Unlock new possibilities by turning bright ideas into robust, rigorously tested solutions.

At the end of the day, organizations need to deliver information people can trust and tools that are effective and unbiased. The ATC serves as a crucial bridge between the ever-expanding world of responsible AI considerations and the practical deployment of cutting-edge solutions. 

For more information about our MLOps practice, you can find details about our MLOps accelerator here, or reach out to WWT's MLOps practice leads: Amulya Shanker and Mike Catalano.