Infuse responsible AI tools and practices in your LLMOps

07

Dec

Infuse responsible AI tools and practices in your LLMOps

This is the third blog in our series on LLMOps for business leaders. Read the first and second articles to learn more about LLMOps on Azure AI.

As we embrace advancements in generative AI, it’s crucial to acknowledge the challenges and potential harms associated with these technologies. Common concerns include data security and privacy, low quality or ungrounded outputs, misuse of and overreliance on AI, generation of harmful content, and AI systems that are susceptible to adversarial attacks, such as jailbreaks. These risks are critical to identify, measure, mitigate, and monitor when building a generative AI application.

Note that some of the challenges around building generative AI applications are not unique to AI applications; they are essentially traditional software challenges that might apply to any number of applications. Common best practices to address these concerns include role-based access (RBAC), network isolation and monitoring, data encryption, and application monitoring and logging for security. Microsoft provides numerous tools and controls to help IT and development teams address these challenges, which you can think of as being deterministic in nature. In this blog, I’ll focus on the challenges unique to building generative AI applications—challenges that address the probabilistic nature of AI.

First, let’s acknowledge that putting responsible AI principles like transparency and safety into practice in a production application is a major effort. Few companies have the research, policy, and engineering resources to operationalize responsible AI without pre-built tools and controls. That’s why Microsoft takes the best in cutting edge ideas from research, combines that with thinking about policy and customer feedback, and then builds and integrates practical responsible AI tools and methodologies directly into our AI portfolio. In this post, we’ll focus on capabilities in Azure AI Studio, including the model catalog, prompt flow, and Azure AI Content Safety. We’re dedicated to documenting and sharing our learnings and best practices with the developer community so they can make responsible AI implementation practical for their organizations.

a man sitting at a table using a laptop

Azure AI Studio

Your platform for developing generative AI solutions and custom copilots.

Mapping mitigations and evaluations to the LLMOps lifecycle

We find that mitigating potential harms presented by generative AI models requires an iterative, layered approach that includes experimentation and measurement. In most production applications, that includes four layers of technical mitigations: (1) the model, (2) safety system, (3) metaprompt and grounding, and (4) user experience layers. The model and safety system layers are typically platform layers, where built-in mitigations would be common across many applications. The next two layers depend on the application’s purpose and design, meaning the implementation of mitigations can vary a lot from one application to the next. Below, we’ll see how these mitigation layers map to the large language model operations (LLMOps) lifecycle we explored in a previous article.

A chart mapping the enterprise LLMOps development lifecycle.
Fig 1. Enterprise LLMOps development lifecycle.

Ideating and exploring loop: Add model layer and safety system mitigations

The first iterative loop in LLMOps typically involves a single developer exploring and evaluating models in a model catalog to see if it’s a good fit for their use case. From a responsible AI perspective, it’s crucial to understand each model’s capabilities and limitations when it comes to potential harms. To investigate this, developers can read model cards provided by the model developer and work data and prompts to stress-test the model.

Model

The Azure AI model catalog offers a wide selection of models from providers like OpenAI, Meta, Hugging Face, Cohere, NVIDIA, and Azure OpenAI Service, all categorized by collection and task. Model cards provide detailed descriptions and offer the option for sample inferences or testing with custom data. Some model providers build safety mitigations directly into their model through fine-tuning. You can learn about these mitigations in the model cards, which provide detailed descriptions and offer the option for sample inferences or testing with custom data. At Microsoft Ignite 2023, we also announced the model benchmark feature in Azure AI Studio, which provides helpful metrics to evaluate and compare the performance of various models in the catalog.

Safety system

For most applications, it’s not enough to rely on the safety fine-tuning built into the model itself. large language models can make mistakes and are susceptible to attacks like jailbreaks. In many applications at Microsoft, we use another AI-based safety system, Azure AI Content Safety, to provide an independent layer of protection to block the output of harmful content. Customers like South Australia’s Department of Education and Shell are demonstrating how Azure AI Content Safety helps protect users from the classroom to the chatroom.

This safety runs both the prompt and completion for your model through classification models aimed at detecting and preventing the output of harmful content across a range of categories (hate, sexual, violence, and self-harm) and configurable severity levels (safe, low, medium, and high). At Ignite, we also announced the public preview of jailbreak risk detection and protected material detection in Azure AI Content Safety. When you deploy your model through the Azure AI Studio model catalog or deploy your large language model applications to an endpoint, you can use Azure AI Content Safety.

Building and augmenting loop: Add metaprompt and grounding mitigations

Once a developer identifies and evaluates the core capabilities of their preferred large language model, they advance to the next loop, which focuses on guiding and enhancing the large language model to better meet their specific needs. This is where organizations can differentiate their applications.

Metaprompt and grounding

Proper grounding and metaprompt design are crucial for every generative AI application. Retrieval augmented generation (RAG), or the process of grounding your model on relevant context, can significantly improve overall accuracy and relevance of model outputs. With Azure AI Studio, you can quickly and securely ground models on your structured, unstructured, and real-time data, including data within Microsoft Fabric.

Once you have the right data flowing into your application, the next step is building a metaprompt. A metaprompt, or system message, is a set of natural language instructions used to guide an AI system’s behavior (do this, not that). Ideally, a metaprompt will enable a model to use the grounding data effectively and enforce rules that mitigate harmful content generation or user manipulations like jailbreaks or prompt injections. We continually update our prompt engineering guidance and metaprompt templates with the latest best practices from the industry and Microsoft research to help you get started. Customers like Siemens, Gunnebo, and PwC are building custom experiences using generative AI and their own data on Azure.

A chart listing responsible AI best practices for a metaprompt.
Fig 2. Summary of responsible AI best practices for a metaprompt.

Evaluate your mitigations

It’s not enough to adopt the best practice mitigations. To know that they are working effectively for your application, you will need to test them before deploying an application in production. Prompt flow offers a comprehensive evaluation experience, where developers can use pre-built or custom evaluation flows to assess their applications using performance metrics like accuracy as well as safety metrics like groundedness. A developer can even build and compare different variations of their metaprompts to assess which may result in the higher quality outputs aligned to their business goals and responsible AI principles.

Dashboard indicating evaluation results within Azure AI Studio.
Fig 3. Summary of evaluation results for a prompt flow built in Azure AI Studio.
A detailed report on evaluation results from Azure AI Studio.
Fig 4. Details for evaluation results for a prompt flow built in Azure AI Studio.

Operationalizing loop: Add monitoring and UX design mitigations

The third loop captures the transition from development to production. This loop primarily involves deployment, monitoring, and integrating with continuous integration and continuous deployment (CI/CD) processes. It also requires collaboration with the user experience (UX) design team to help ensure human-AI interactions are safe and responsible.

User experience

In this layer, the focus shifts to how end users interact with large language model applications. You’ll want to create an interface that helps users understand and effectively use AI technology while avoiding common pitfalls. We document and share best practices in the HAX Toolkit and Azure AI documentation, including examples of how to reinforce user responsibility, highlight the limitations of AI to mitigate overreliance, and to ensure users are aware that they are interacting with AI as appropriate.

Monitor your application

Continuous model monitoring is a pivotal step of LLMOps to prevent AI systems from becoming outdated due to changes in societal behaviors and data over time. Azure AI offers robust tools to monitor the safety and quality of your application in production. You can quickly set up monitoring for pre-built metrics like groundedness, relevance, coherence, fluency, and similarity, or build your own metrics.

Looking ahead with Azure AI

Microsoft’s infusion of responsible AI tools and practices into LLMOps is a testament to our belief that technological innovation and governance are not just compatible, but mutually reinforcing. Azure AI integrates years of AI policy, research, and engineering expertise from Microsoft so your teams can build safe, secure, and reliable AI solutions from the start, and leverage enterprise controls for data privacy, compliance, and security on infrastructure that is built for AI at scale. We look forward to innovating on behalf of our customers, to help every organization realize the short- and long-term benefits of building applications built on trust.

Learn more

The post Infuse responsible AI tools and practices in your LLMOps appeared first on Microsoft Azure Blog.