AI Solutions
Your First GraphRAG Demo - A Video Walkthrough
Overview

Graph Retrieval-Augmented Generation (GraphRAG) is an AI approach that combines knowledge graphs with retrieval-augmented generation to deliver rich, context- and relationship-aware answers from complex data. GraphRAG is a Microsoft Research project that has gained significant community support since its initial publication, but it has not, to date, been converted into a productized offering. Like all methodologies, its use should be purposeful.

To determine whether GraphRAG is right for your use case, consider how your application needs to consume RAG outcomes. Review the questions and the Analysis Guide pictured below:

- Does your use case have a lot of duplicate information?
- Can questions be answered based on only some of the relevant knowledge?
- What is the scale of your solution? For a single question, would tens of knowledge chunks be relevant, or tens of thousands?
- What is your use case's tolerance for hallucinations (i.e., how critical is quality, which implies a necessity to retrieve all relevant knowledge)?

Once you have determined your required RAG consumption pattern, you can more easily map methodologies to it. The patterns above are mapped below to sample AI methodologies. GraphRAG is highlighted as a solution when all relevant chunks might be too large for a single context window, and where all relevant chunks (or all chunks) must be retrieved.

For more information about GraphRAG and its use case appropriateness, see:

- Microsoft Research - Project GraphRAG
- Tech Community: The Future of AI: GraphRAG – A better way to query interlinked documents
- Tech Community: Unlocking Insights: GraphRAG & Standard RAG in Financial Services

After GraphRAG selection

There are different implementations and GitHub repositories available for GraphRAG concepts. Since Microsoft Research's inaugural publication in April 2024, different variations of the GraphRAG approach have been published. It is recommended to start your experimentation with the core GraphRAG GitHub Page and GraphRAG GitHub Repository. Once you've finished an initial, local proof of concept on a real-world use case and like your outcomes, you can move toward industrialization. See the GitHub Azure-Samples/graphrag-accelerator for a one-click-deployment industrialization path.

Standing up the most popular use case: The Research Assistant

GraphRAG does particularly well as a research assistant over large amounts of data. It is able to analyze data, draw meaningful connections, and synthesize concepts and patterns into an insightful outcome. This section walks you through using the graphrag Python library with Azure OpenAI on top of a limited number of Wikipedia articles relating to financial auditing (a minimal sketch of the underlying index-then-query loop follows the "After the Demo" list below). The associated GitHub repository for this section is adhazel/graphrag_demo.

Running the Demo

Complete the steps in each of the below locations, and, optionally, follow along in the video:

1. GitHub Local Environment Setup
2. Use Case A Research Assistant notebook

After the Demo

This video walkthrough is, by necessity, the short, happy path. Here are some ideas on what to check out next - be sure to watch the video for a full walkthrough of the GraphRAG Visualization Guide:

- Perfect the global, local, and drift searches
- Tune the prompts, paying special attention to the extract graph, extract claims, and community report prompts
- Dig into and fine-tune the GraphRAG settings (the yaml)
- Set up a visualization tool on top of your graph
- Explore a production use case!
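As a companion to the steps above, here is the index-then-query loop the demo notebook automates, driving the graphrag CLI from Python. Treat this as a minimal sketch: command names and flags (init, index, query) reflect recent graphrag releases and have changed between versions, and the workspace path is a placeholder, so defer to the repository's Get Started guide and the demo notebook for the authoritative invocations.

```python
# Minimal sketch of the GraphRAG index-then-query loop.
# Assumes a recent graphrag release; CLI names and flags vary across versions.
import subprocess

ROOT = "./ragtest"  # hypothetical workspace; input .txt files go in ./ragtest/input

# Initialize the workspace: writes settings.yaml and .env templates that you
# then point at your Azure OpenAI deployment before indexing.
subprocess.run(["graphrag", "init", "--root", ROOT], check=True)

# Build the knowledge graph and community reports from the input documents.
subprocess.run(["graphrag", "index", "--root", ROOT], check=True)

# Ask a corpus-wide question: 'global' search synthesizes across community
# reports, while 'local' search grounds answers in specific entities.
subprocess.run(
    ["graphrag", "query", "--root", ROOT, "--method", "global",
     "--query", "What are the key themes across these auditing articles?"],
    check=True,
)
```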
Many thanks for your attention and happy coding!

Integrate Custom Azure AI Agents with Copilot Studio and M365 Copilot
In today's fast-paced digital world, integrating custom agents with Copilot Studio and M365 Copilot can significantly enhance your company's digital presence and extend your Copilot platform to your enterprise applications and data. This blog will guide you through the steps of bringing a custom Azure AI Agent Service agent, hosted in an Azure Function App, into a Copilot Studio solution and publishing it to M365 and Teams applications.

When Might This Be Necessary

Integrating custom agents with Copilot Studio and M365 Copilot is necessary when you want to extend customization to automate tasks, streamline processes, and provide a better experience for your end users. This integration is particularly useful for organizations looking to streamline their AI platform, extend out-of-the-box functionality, and leverage existing enterprise data and applications to optimize their operations. Custom agents built on Azure allow you to achieve greater customization and flexibility than using Copilot Studio agents alone.

What You Will Need

- Azure AI Foundry
- Azure OpenAI Service
- Copilot Studio Developer License
- Microsoft Teams Enterprise License
- M365 Copilot License

Steps to Integrate Custom Agents

Create a Project in Azure AI Foundry: Navigate to Azure AI Foundry and create a project. Select 'Agents' from the 'Build and Customize' menu pane on the left side of the screen and click the blue button to create a new agent.

Customize Your Agent: Your agent will automatically be assigned an Agent ID. Give your agent a name and assign the model it will use. Customize your agent with instructions, then add your knowledge source: you can connect to Azure AI Search, load files directly to your agent, link to Microsoft Fabric, or connect to third-party sources like Tripadvisor. In our example, we are only testing the Copilot integration steps of the AI agent, so we did not build out the additional options of grounding knowledge or function calling here.

Test Your Agent: Once you have created your agent, test it in the playground. If you are happy with it, you are ready to call the agent from an Azure Function.

Create and Publish an Azure Function: Use the sample function code from the GitHub repository (azure-ai-foundry-agent/function_app.py at main · azure-data-ai-hub/azure-ai-foundry-agent) to call the Azure AI project and agent, then publish your Azure Function to make it available for integration. A minimal sketch of such a function appears after the role-assignment steps below.

Connect Your AI Agent to Your Function: Update the "AIProjectConnString" value to include your project connection string from the project overview page in the AI Foundry.

Role-Based Access Controls: The Function App needs roles on the Azure OpenAI service (see Role-based access control for Azure OpenAI - Azure AI services | Microsoft Learn):

- Enable managed identity on the Function App
- Grant the "Cognitive Services OpenAI Contributor" role to the Function App's system-assigned managed identity on the Azure OpenAI resource
- Grant the "Azure AI Developer" role to the Function App's system-assigned managed identity on the Azure AI project resource from the AI Foundry
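For orientation, here is a minimal sketch of what such a function might look like, assuming the Functions Python v2 programming model and the azure-ai-projects preview SDK. Method names like create_thread, create_and_process_run, and list_messages have shifted between preview releases, and AGENT_ID is a hypothetical app setting, so verify the details against the sample repository linked above.

```python
# Minimal sketch: an HTTP-triggered function that relays a prompt to an
# Azure AI Foundry agent. Assumes the azure-ai-projects preview SDK; method
# names changed across preview releases, so check the linked sample code.
import os

import azure.functions as func
from azure.ai.projects import AIProjectClient
from azure.identity import DefaultAzureCredential

app = func.FunctionApp()

@app.route(route="agent", auth_level=func.AuthLevel.FUNCTION)
def agent(req: func.HttpRequest) -> func.HttpResponse:
    prompt = req.params.get("prompt")
    if not prompt:
        try:
            prompt = req.get_json().get("prompt")
        except ValueError:
            prompt = None
    if not prompt:
        return func.HttpResponse("Pass a 'prompt' value.", status_code=400)

    # Uses the managed identity granted the roles above. AIProjectConnString is
    # the project connection string; AGENT_ID is a hypothetical app setting.
    project = AIProjectClient.from_connection_string(
        conn_str=os.environ["AIProjectConnString"],
        credential=DefaultAzureCredential(),
    )
    thread = project.agents.create_thread()
    project.agents.create_message(thread_id=thread.id, role="user", content=prompt)
    project.agents.create_and_process_run(
        thread_id=thread.id, agent_id=os.environ["AGENT_ID"]
    )
    messages = project.agents.list_messages(thread_id=thread.id)
    # Newest message first in this preview SDK (assumption); return plain text.
    reply = messages.data[0].content[0].text.value
    return func.HttpResponse(reply, mimetype="text/plain")
```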
Build a Flow in Power Platform: Before you begin, make sure you are working in the same environment you will use to create your Copilot Studio agent. To get started, navigate to the Power Platform (https://make.powerapps.com) and build a flow that connects your Copilot Studio solution to your Azure Function App. When creating the new flow, select 'Build an instant cloud flow' and trigger the flow using 'Run a flow from Copilot'. Add an HTTP action that calls the Function's URL and passes the end user's message prompt to it. The output of your function is plain text, so you can pass the response from your Azure AI agent directly to your Copilot Studio solution.
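Before wiring up the flow, it can help to sanity-check the function's contract (prompt in, plain text out) with a quick call from Python. The URL, function key, and 'prompt' field below are placeholders that mirror the hedged function sketch above, not fixed names.

```python
# Smoke-test the deployed function: prompt in, plain text out.
# URL, key, and the 'prompt' field are placeholders (see the sketch above).
import requests

FUNCTION_URL = "https://<your-function-app>.azurewebsites.net/api/agent"

resp = requests.post(
    FUNCTION_URL,
    params={"code": "<function-key>"},  # function-level auth key
    json={"prompt": "Summarize my open support tickets."},
    timeout=120,
)
resp.raise_for_status()
print(resp.text)  # plain-text agent reply, exactly as the flow will receive it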
Create Your Copilot Studio Agent: Navigate to Microsoft Copilot Studio and select 'Agents', then 'New Agent'. Make sure you are in the same environment you used to create your cloud flow, then select the 'Create' button at the top of the screen. From the top menu, navigate to 'Topics' and 'System', and open the 'Conversation boosting' topic. When you first open the topic, you will see a template of connected nodes; delete all but the initial 'Trigger' node. Now rebuild the conversation boosting topic to call the flow you built in the previous step: select 'Add an Action', choose the option for an existing Power Automate flow, pass the response from your custom agent to the end user, and end the current topic. (Screenshots: my existing cloud flow; adding an action to connect to the existing cloud flow.) When the action menu pops up, you should see the option to run the flow you created; mine does not have a very unique name, but it appears as 'Run a flow from Copilot' under the Basic actions menu. If you do not see your cloud flow there, add it to the default solution in the environment: go to Solutions > select the All pill > Default Solution > add the cloud flow you created to the solution. Then go back to Copilot Studio and refresh, and the flow will be listed. Now finish building out the conversation boosting topic.

Make the Agent Available in M365 Copilot: Navigate to the 'Channels' menu and select 'Teams + Microsoft 365'. Be sure to select the box to 'Make agent available in M365 Copilot', then save and re-publish your Copilot agent. It may take up to 24 hours for the agent to appear in the M365 Teams agents list. Once it has loaded, select the 'Get Agents' option from the side menu of Copilot and pin your Copilot Studio agent to your featured agents list. Now you can chat with your custom Azure AI agent directly from M365 Copilot!

Conclusion: By following these steps, you can successfully integrate custom Azure AI agents with Copilot Studio and M365 Copilot, enhancing the utility of your existing platform and improving operational efficiency. This integration allows you to automate tasks, streamline processes, and provide a better experience for your end users. Give it a try! Curious how to bring custom models from your AI Foundry into your Copilot Studio solutions? Check out this blog.

The Future of AI: Harnessing AI agents for Customer Engagements

Discover how AI-powered agents are revolutionizing customer engagement: enhancing real-time support, automating workflows, and empowering human professionals with intelligent orchestration. Explore the future of AI-driven service, including Customer Assist, created with Azure AI Foundry.
The Future of AI: Autonomous Agents for Identifying the Root Cause of Cloud Service Incidents

Discover how Microsoft is transforming cloud service incident management with autonomous AI agents. Learn how AI-enhanced troubleshooting guides and agentic workflows are reducing downtime and empowering on-call engineers.
The Future of AI: Developing Code Assist – a Multi-Agent Tool

Discover how Code Assist, created with Azure AI Foundry Agent Service, uses AI agents to automate code documentation, generate business-ready slides, and detect security risks in large codebases, boosting developer productivity and project clarity.
Start your Trustworthy AI Development with Safety Leaderboards in Azure AI Foundry

Selecting the right model for your AI application is more than a technical decision; it is a foundational step in ensuring trust, compliance, and governance in AI. Today, we are excited to announce the public preview of safety leaderboards within Foundry model leaderboards, helping customers incorporate model safety as a first-class criterion alongside quality, cost, and throughput. This feature introduces three key components to support responsible AI development:

- A dedicated safety leaderboard highlighting the safest models
- A quality-safety trade-off chart to balance performance and risk
- Five new scenario-specific leaderboards supporting diverse responsible AI scenarios

Prioritize safety with the new leaderboard

The safety leaderboard ranks the top models based on their robustness against generating harmful content. This is especially valuable in regulated or high-risk domains, such as healthcare, education, or financial services, where model outputs must meet high safety standards.

To ensure benchmark rigor and relevance, we apply a structured filtering and validation process to select benchmarks: a benchmark qualifies for onboarding if it addresses high-priority risks. For safety and responsible AI leaderboards, we look at benchmarks that are reliable enough to provide signals on the targeted areas of interest as they relate to safety. The current safety leaderboard uses the HarmBench benchmark, which includes prompts designed to elicit harmful behaviors from models. The benchmark covers 7 semantic categories of behavior:

- Cybercrime & Unauthorized Intrusion
- Chemical & Biological Weapons/Drugs
- Copyright Violations
- Misinformation & Disinformation
- Harassment & Bullying
- Illegal Activities
- General Harm

These 7 categories are organized into three broader functional groupings:

- Standard Harmful Behaviors
- Contextual Harmful Behaviors
- Copyright Violations

Each grouping is featured in a separate responsible AI scenario leaderboard. We use the prompts and evaluators from HarmBench to calculate Attack Success Rate (ASR) and aggregate it across the functional groupings to proxy model safety. Lower ASR values mean that a model is more robust against attacks that elicit harmful content.

We understand and acknowledge that model safety is a complex topic with several dimensions. No single current open-source benchmark can test or represent the full spectrum of model safety in different scenarios. Additionally, most of these benchmarks suffer from saturation or from misalignment between benchmark design and risk definition, and they can lack clear documentation on how the target risks are conceptualized and operationalized, making it difficult to assess whether the benchmark accurately captures the nuances of the risks. This can lead to either overestimating or underestimating model performance in real-world safety scenarios. While the HarmBench dataset covers a limited set of harmful topics, it can still provide a high-level understanding of safety trends.

Navigate trade-offs with the quality-safety chart

Model selection often involves compromise across multiple criteria. The new quality-safety trade-off chart helps you make informed decisions by comparing models on both safety and quality. You can:

- Identify the safest model, measured by Attack Success Rate (lower is better), at a given level of quality performance; or
- Choose the highest-performing model in quality (higher is better) that still meets a defined safety threshold.
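To make the selection rule concrete, here is a small illustrative sketch. The model names and scores are invented, not leaderboard data: ASR is computed as the fraction of attack prompts that elicit harmful behavior, and the rule picks the lowest-ASR model that clears a quality bar.

```python
# Illustrative only: hypothetical models with a quality score (higher is better)
# and per-prompt attack outcomes from a HarmBench-style run (True = attack succeeded).
models = {
    "model-a": {"quality": 0.82, "attacks": [False, False, True, False]},
    "model-b": {"quality": 0.78, "attacks": [False, False, False, False]},
    "model-c": {"quality": 0.88, "attacks": [True, True, False, True]},
}

def attack_success_rate(outcomes: list[bool]) -> float:
    """ASR = successful attacks / total attack prompts (lower is safer)."""
    return sum(outcomes) / len(outcomes)

# Selection rule from the trade-off chart: among models meeting the quality
# threshold, choose the one with the lowest ASR.
QUALITY_FLOOR = 0.80
eligible = {name: m for name, m in models.items() if m["quality"] >= QUALITY_FLOOR}
best = min(eligible, key=lambda name: attack_success_rate(eligible[name]["attacks"]))
print(best)  # model-a: meets the quality floor with a lower ASR than model-c
```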
Together with the quality-cost trade-off chart, this lets you find the best trade-off between quality, safety, and cost when selecting a model.

Scenario-based responsible AI leaderboards

To support customers' diverse responsible AI scenarios, we have added 5 new leaderboards ranking the top models in safety and broader responsible AI scenarios. Each leaderboard is powered by industry-standard public benchmarks covering:

- Model robustness against harmful behaviors, using HarmBench in 3 scenarios targeting standard harmful behaviors, contextually harmful behaviors, and copyright violations. Consistent with the safety leaderboard, lower ASR scores mean better robustness against generating harmful content.
- Model ability to detect toxic content, using the Toxigen benchmark. This benchmark targets adversarial and implicit hate speech detection and contains implicitly toxic and benign sentences mentioning 13 minority groups. Higher accuracy, based on F1-score, means a better ability to detect toxic content.
- Model knowledge of sensitive domains, including cybersecurity, biosecurity, and chemical security, using the Weapons of Mass Destruction Proxy benchmark (WMDP). A higher accuracy score denotes more knowledge of dangerous capabilities.

These scenario leaderboards allow developers, compliance teams, and AI governance stakeholders to align model selection with organizational risk tolerance and regulatory expectations.

Building Trustworthy AI Starts with the Right Tools

With safety leaderboards now available in public preview, Foundry model leaderboards offer a unified, transparent, and data-driven foundation for selecting models that align with your safety requirements. This addition empowers teams to move from ad hoc evaluation to principled model selection, anchored in industry-standard benchmarks and responsible AI practices. To learn more, explore the methodology documentation and start building AI solutions you, and your stakeholders, can trust.
The Future of AI: How Lovable.dev and Azure OpenAI Accelerate Apps that Change Lives

Discover how Charles Elwood, a Microsoft AI MVP and TEDx speaker, leverages Lovable.dev and Azure OpenAI to create impactful AI solutions. From automating expense reports to restoring voices, translating gestures to speech, and visualizing public health data, Charles's innovations are transforming lives and democratizing technology. Follow his journey to learn more about AI for good.
Transforming Customer Support with Azure OpenAI, Azure AI Services, and Voice AI Agents

Customer support today is under immense pressure to meet rising expectations of speed, personalization, and always-on availability. Yet businesses still struggle with:

1. Long wait times and call center queues
2. Disconnected support channels
3. Limited availability of agents outside business hours
4. Repetitive issues consuming valuable human time
5. Frustrated users due to a lack of immediate and contextual answers

These inefficiencies are costing businesses over $3.7 trillion annually in poor service delivery, while over 70% of agents (based on the research) spend excessive time searching for the right answers instead of resolving problems directly.

How Voice AI Agents Are Transforming the Support Experience

Enter the era of voice-enabled AI agents, powered by Azure OpenAI, Azure AI Services, and ServiceNow, designed to completely transform the way customers engage with support systems. These agents can now:

- Handle complex user queries in natural language
- Access enterprise systems (like CRM, ITSM, HR) in real time
- Automate repetitive tasks such as password resets, ticket status updates, or return tracking
- Escalate only when human assistance is truly needed
- Create connected, seamless, and intelligent support experiences across departments

Let's take a closer look at four architecture patterns that showcase how enterprises can deploy these agents effectively.

🔷 Architecture Pattern 1: Unified Voice Agent with Azure AI + ServiceNow + CRM Integration

In this architecture, the customer support journey begins when a user initiates a voice-based conversation through a front-end interface such as a web application, mobile app, or smart device. The captured audio is streamed directly to the Azure OpenAI GPT-4o realtime API, which performs immediate speech-to-text transcription, interprets the intent behind the request, and prepares the initial system response, all in a single seamless stream.

Once the user's intent is understood (e.g., "create a ticket", "check incident status", or "list recent issues"), GPT-4o passes control to Semantic Kernel, which orchestrates the next steps through function calling. Semantic Kernel hosts pre-defined tools (functions) that map to ServiceNow API actions, such as createIncident, getIncidentStatus, listIncidents, or searchKnowledgeBase. These function calls are then securely routed to ServiceNow via REST APIs; a sketch of such a tool follows this pattern's description.

ServiceNow executes the appropriate actions, whether that is creating a new support ticket, retrieving the status of an open incident, or searching its Knowledge Base. CRM data is also seamlessly accessed, if needed, to enrich responses with personalized context such as customer history or case metadata. The result from ServiceNow (e.g., an incident ID or a KB article summary) is then sent back to GPT-4o, which converts the structured data into a natural spoken response. This final audio output is delivered to the user in real time, completing the end-to-end conversational loop.

Additionally, tools like Azure Monitor or Application Insights can be integrated to log telemetry, track usage trends, monitor latency, and analyze user satisfaction over time. This architecture enables organizations to streamline customer support operations, reduce wait times, and deliver natural, intelligent assistance across any channel, voice-first.
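As an illustration of how such tools can be wired up, here is a minimal sketch of a ServiceNow plugin for the semantic-kernel Python SDK. The Table API endpoint is ServiceNow's standard REST surface, but the instance name, credentials, and field choices are placeholder assumptions, and production code would use OAuth and proper error handling.

```python
# Minimal sketch: Semantic Kernel tools backed by the ServiceNow Table API.
# Instance name, credentials, and field choices are placeholders.
import os

import requests
from semantic_kernel.functions import kernel_function

class ServiceNowPlugin:
    """Tools Semantic Kernel exposes to the model for function calling."""

    def __init__(self) -> None:
        self.base = f"https://{os.environ['SN_INSTANCE']}.service-now.com/api/now"
        self.auth = (os.environ["SN_USER"], os.environ["SN_PASSWORD"])

    @kernel_function(description="Create a ServiceNow incident; returns its number.")
    def create_incident(self, short_description: str) -> str:
        # POST to the standard Table API for the incident table.
        r = requests.post(
            f"{self.base}/table/incident",
            auth=self.auth,
            json={"short_description": short_description},
            timeout=30,
        )
        r.raise_for_status()
        return r.json()["result"]["number"]

    @kernel_function(description="Get the state of an incident by its number.")
    def get_incident_status(self, number: str) -> str:
        r = requests.get(
            f"{self.base}/table/incident",
            auth=self.auth,
            params={"sysparm_query": f"number={number}", "sysparm_fields": "state"},
            timeout=30,
        )
        r.raise_for_status()
        results = r.json()["result"]
        return results[0]["state"] if results else "not found"
```

Registered with kernel.add_plugin(ServiceNowPlugin(), plugin_name="servicenow"), these functions become the createIncident and getIncidentStatus tools the model can invoke during a conversation.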
🔷 Architecture Pattern 2: Scalable Customer Support with Multi-Agent Voice Architecture

This architecture introduces a modular, distributed agent-based design to deliver intelligent, scalable customer support through a voice interface. The process starts with the User Proxy Agent, which acts as the entry point for all user conversations. It captures voice input and forwards the request to the Master Agent, which serves as the brain of the architecture.

The Master Agent, empowered with a large language model (LLM) and memory, interprets the intent behind the user's input and dynamically routes the request to the most appropriate domain-specific agent. These include specialized agents such as the Activation Agent, Root Agent, Sales Agent, and Technical Agent, each designed to handle specific workflows or business tasks:

- The Activation Agent connects to web services and handles provisioning or onboarding scenarios.
- The Root Agent taps into document search systems (like Azure Cognitive Search) to answer questions grounded in internal documentation.
- The Sales Agent is equipped with small language models (SLMs) and CRM access to retrieve sales-related data from backend databases.
- The Technical Agent is containerized via Docker and built to manage backend diagnostics, code-level issues, or infrastructure status, often connecting to systems like ServiceNow for real-time ITSM execution.

Once the task is executed by the respective agent, the result is passed back through the Master Agent and ultimately to the User Proxy Agent, which synthesizes the output into a voice response and delivers it to the user. Shared memory between agents maintains context across multi-turn conversations, enabling complex, multi-step interactions (e.g., "Create a ticket, check the latest order status, and escalate it if unresolved.") without breaking continuity. A routing sketch follows below.

This architecture is ideal for enterprises looking to scale customer support horizontally, adding new agents without disrupting existing workflows. It enables parallelism, specialization, and real-time orchestration, providing faster resolutions while reducing the burden on human agents. Best suited for distributed support operations across IT, HR, sales, and field support, where task-specific intelligence and modular scale are critical.
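The routing idea at the core of this pattern can be sketched in a few lines. This toy version substitutes a plain intent string where the real Master Agent would use an LLM classifier and shared memory; the agent names mirror the diagram, and everything else is illustrative.

```python
# Illustrative Master Agent routing: map a classified intent to a domain agent.
# A production version would classify intent with an LLM and persist shared memory.
from typing import Callable

def activation_agent(text: str) -> str: return f"[activation] handling: {text}"
def root_agent(text: str) -> str:       return f"[root] searching docs for: {text}"
def sales_agent(text: str) -> str:      return f"[sales] querying CRM for: {text}"
def technical_agent(text: str) -> str:  return f"[technical] diagnosing: {text}"

AGENTS: dict[str, Callable[[str], str]] = {
    "activation": activation_agent,
    "knowledge": root_agent,
    "sales": sales_agent,
    "technical": technical_agent,
}

def master_agent(intent: str, utterance: str) -> str:
    # Route to the specialist; fall back to the knowledge (root) agent.
    handler = AGENTS.get(intent, root_agent)
    return handler(utterance)

print(master_agent("sales", "What did Contoso order last quarter?"))
```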
🔷 Architecture Pattern 3: Customer Support Reinvented with Voice RAG + Azure AI + ServiceNow

This architecture brings a cutting-edge twist to Retrieval-Augmented Generation (RAG) by enabling it through a voice AI agent, creating a truly conversational experience grounded in enterprise knowledge. By combining Azure OpenAI models with the ServiceNow Knowledge Base, this pattern ensures accurate, voice-driven support for employees or customers in real time.

The process begins when a user interacts with a voice-enabled interface, via phone, web, or an embedded assistant. The voice AI agent streams the audio to Azure OpenAI GPT-4o, which transcribes the voice input, understands the intent, and then triggers a RAG pipeline. Instead of relying solely on the model's internal memory, the system performs a real-time query against the ServiceNow product Knowledge Base, retrieving relevant knowledge articles, troubleshooting guides, or support workflows. These results are embedded directly into the prompt, creating an enriched context that is passed to the language model via Azure AI Foundry.

The model then generates a natural, contextually accurate spoken response, which is converted back into audio and voiced to the user, completing an end-to-end voice RAG experience. This approach ensures that responses are not only conversational but also deeply grounded in trusted enterprise knowledge. Ideal for helpdesk automation, HR support, and IT troubleshooting, where users prefer speaking naturally and need verified, document-backed responses in real time.

🔷 Architecture Pattern 4: Conversational Customer Support with AI Avatars and Azure AI

This architecture delivers rich, conversational experiences by integrating AI avatars, Azure AI, and ServiceNow to offer human-like, intelligent customer support across channels. It merges natural speech, facial expression, and enterprise data to create a highly engaging support assistant.

The interaction begins when a user speaks with an AI avatar application, whether embedded in a web portal, a mobile device, or a kiosk. The voice is captured and processed through a speech-to-text pipeline, which feeds the Avatar Module and the Live Discussions Engine to manage lip-sync, emotional tone, and turn-taking. Behind the scenes, the avatar is connected to Azure AI services, including Custom Neural Voice (CNV) and Azure OpenAI, which enable it to understand intent and generate responses in natural, conversational language.

Most critically, the system integrates directly with the ServiceNow platform. Through secure APIs, the avatar queries ServiceNow to:

- Retrieve case status updates
- Provide summaries of incident history
- Look up Knowledge Base articles
- Trigger incident creation if needed

These ServiceNow results are then passed through the text-to-speech module, with support for multilingual voice synthesis, and rendered by the avatar using expressive animation. Responses are visually delivered as live or pre-rendered avatar videos, creating a truly interactive and personalized experience. This pattern not only answers basic questions but also surfaces dynamic enterprise data, turning the AI avatar into a frontline voice agent capable of real-time, connected support across IT, HR, and customer service domains. Best for branded digital experiences, frontline support stations, and HR/IT helpdesk automation where facial presence, empathy, and backend integration are essential.

✨ Closing Thoughts: The Future of Customer Support Is Here

Customer expectations have evolved, and so must the way we deliver support. By combining the power of Azure OpenAI, Azure AI Services, and ServiceNow, we're not just automating tasks; we're reinventing how organizations connect with their users. Whether it's:

- A unified voice agent handling IT tickets and CRM queries,
- A multi-agent architecture scaling across departments,
- A voice-enabled RAG system delivering knowledge-grounded answers in real time, or
- A human-like AI avatar offering face-to-face support,

these architectures are driving a new era of intelligent, conversational, and scalable customer service.

👉 Join us at the Microsoft Booth during ServiceNow Knowledge 2025 (starting May 6th) to experience these solutions live, explore the tech behind them, and imagine how they can transform your business. Let's build the future of support, together.