Microsoft Information Protection

Secure and govern AI apps and agents with Microsoft Purview
The Microsoft Purview family is here to help you secure and govern data across third-party IaaS and SaaS services and multi-platform data environments, while helping you meet the compliance requirements you may be subject to. Purview brings simplicity with a comprehensive set of solutions built on a platform of shared capabilities that helps keep your most important asset, data, safe. With the introduction of AI technology, Purview has also expanded its data coverage to include discovering, protecting, and governing the interactions of AI apps and agents, such as Microsoft Copilots like Microsoft 365 Copilot and Security Copilot, enterprise-built AI apps like ChatGPT Enterprise, and consumer AI apps like DeepSeek accessed through the browser. To help you view and investigate interactions with all of those AI apps, and to create and manage policies to secure and govern them in one centralized place, we have launched Purview Data Security Posture Management (DSPM) for AI. You can learn more about DSPM for AI, with short video walkthroughs, here: Learn how Microsoft Purview Data Security Posture Management (DSPM) for AI provides data security and compliance protections for Copilots and other generative AI apps | Microsoft Learn

Purview capabilities for AI apps and agents
To understand our current set of capabilities within Purview to discover, protect, and govern various AI apps and agents, please refer to our Learn doc: Microsoft Purview data security and compliance protections for Microsoft 365 Copilot and other generative AI apps | Microsoft Learn

Note that DLP for Copilot and sensitivity label adherence are currently designed to protect content in Microsoft 365. Thus, Security Copilot and Copilot in Fabric, along with Copilot Studio custom agents that do not use Microsoft 365 as a content source, do not have these features available. Please see the list of AI sites supported by Microsoft Purview DSPM for AI here.

Conclusion
Microsoft Purview can help you discover, protect, and govern the prompts and responses from AI applications in Microsoft Copilot experiences, enterprise AI apps, and other AI apps through its data security and data compliance solutions, while allowing you to view, investigate, and manage interactions in one centralized place in DSPM for AI.
Follow up reading
Check out the deployment guides for DSPM for AI:
- How to deploy DSPM for AI - https://aka.ms/DSPMforAI/deploy
- How to use DSPM for AI data risk assessment to address oversharing - https://aka.ms/dspmforai/oversharing
- Address oversharing concerns with the Microsoft 365 blueprint - aka.ms/Copilot/Oversharing
Explore the Purview SDK:
- Microsoft Purview SDK Public Preview | Microsoft Community Hub (blog)
- Microsoft Purview documentation - purview-sdk | Microsoft Learn
- Build secure and compliant AI applications with Microsoft Purview (video)
References for DSPM for AI:
- Microsoft Purview data security and compliance protections for Microsoft 365 Copilot and other generative AI apps | Microsoft Learn
- Considerations for deploying Microsoft Purview AI Hub and data security and compliance protections for Microsoft 365 Copilot and Microsoft Copilot | Microsoft Learn
- Block Users From Sharing Sensitive Information to Unmanaged AI Apps Via Edge on Managed Devices (preview) | Microsoft Learn, as part of Scenario 7 of Create and deploy a data loss prevention policy | Microsoft Learn
- Commonly used properties in Copilot audit logs - Audit logs for Copilot and AI activities | Microsoft Learn
- Supported AI sites by Microsoft Purview for data security and compliance protections | Microsoft Learn
- Where Copilot usage data is stored and how you can audit it - Microsoft 365 Copilot data protection and auditing architecture | Microsoft Learn
- Downloadable whitepaper: Data Security for AI Adoption | Microsoft
Explore the roadmap for DSPM for AI:
- Public roadmap for DSPM for AI - Microsoft 365 Roadmap | Microsoft 365
Using Copilot in Fabric with Confidence: Data Security, Compliance & Governance with DSPM for AI

Introduction
As organizations embrace AI to drive innovation and productivity, ensuring data security, compliance, and governance becomes paramount. Copilot in Microsoft Fabric offers powerful AI-driven insights, but without proper oversight, users can misuse Copilot to expose sensitive data or violate regulatory requirements. Enter Microsoft Purview's Data Security Posture Management (DSPM) for AI—a unified solution that empowers enterprises to monitor, protect, and govern AI interactions across Microsoft and third-party platforms. We are excited to announce the general availability of Microsoft Purview capabilities for Copilot in Fabric, starting with Copilot in Power BI. This blog explores how Purview DSPM for AI integrates with Copilot in Fabric to deliver robust data protection and governance, and provides a step-by-step guide to enable this integration.

Capabilities of Purview DSPM for AI
As organizations adopt AI, implementing data controls and a Zero Trust approach is crucial to mitigate risks like data oversharing, data leakage, and potential non-compliant usage in AI. By combining Microsoft Purview and Copilot in Power BI, users can:
- Discover data risks, such as sensitive data in user prompts and responses, in Activity Explorer, and receive recommended actions in their Microsoft Purview DSPM for AI reports to reduce these risks. If you find Copilot in Fabric actions in DSPM for AI Activity Explorer or reports to be potentially inappropriate or malicious, you can look for further information in Insider Risk Management (IRM), through an eDiscovery case, Communication Compliance (CC), or Data Lifecycle Management (DLM).
- Identify risky AI usage with Microsoft Purview Insider Risk Management, such as an inadvertent user who has neglected security best practices and shared sensitive data in AI.
- Govern AI usage with Microsoft Purview Audit, Microsoft Purview eDiscovery, retention policies, and non-compliant or unethical AI usage detection with Purview Communication Compliance:
  - Purview Audit provides a detailed log of user and admin activity within Copilot in Fabric, enabling organizations to track access, monitor usage patterns, and support forensic investigations.
  - Purview eDiscovery enables legal and investigative teams to identify, collect, and review Copilot in Fabric interactions as part of case workflows, supporting defensible investigations.
  - Communication Compliance helps detect potential policy violations or risky behavior in administrator interactions, enabling proactive monitoring and remediation for Copilot in Fabric.
  - Data Lifecycle Management allows teams to automate the retention, deletion, and classification of Copilot in Fabric data—reducing storage costs and minimizing risk from outdated or unnecessary information.

Steps to Enable the Integration
To use DSPM for AI from the Microsoft Purview portal, you must first meet the prerequisites: activate Purview Audit, which requires the Entra Compliance Administrator or Entra Global Administrator role. More details on DSPM prerequisites can be found here: Considerations for deploying Microsoft Purview Data Security Posture Management (DSPM) for AI | Microsoft Learn

To enable Purview DSPM for AI for Copilot in Power BI:

Step 1: Enable DSPM for AI Policies
Navigate to Microsoft Purview DSPM for AI.
Enable the one-click policy "DSPM for AI – Capture interactions for Copilot experiences". Optionally enable additional policies:
- Detect risky AI usage
- Detect unethical behavior in AI apps
These policies can be configured in the Microsoft Purview DSPM for AI portal and tailored to your organization's risk profile.

Step 2: Monitor and Act
Use DSPM for AI reports and Activity Explorer to monitor AI interactions, and apply IRM, DLM, CC, and eDiscovery actions as needed. (A scripted complement to this step is sketched at the end of this article.)

Purview Roles and Permissions Needed by Users
To manage and operate DSPM for AI effectively, assign the following roles:
- Purview Compliance Administrator: full access to configure policies and DSPM for AI setup
- Purview Security Reader: view reports, dashboards, policies, and AI activity
- Content Explorer Content Viewer: additional permission to view the actual prompts and responses, on top of the above permissions
More details on Purview DSPM for AI roles and permissions can be found here: Permissions for Microsoft Purview Data Security Posture Management for AI | Microsoft Learn

Purview Costs
Microsoft Purview now offers a combination of entitlement-based (per-user-per-month) and pay-as-you-go (PAYG) pricing models. The PAYG model applies to a broader set of Purview capabilities—including Insider Risk Management, Communication Compliance, eDiscovery, and other data security and governance solutions—based on Copilot in Power BI usage volume or complexity. Purview Audit logging of Copilot in Power BI activity remains included at no additional cost as part of Microsoft 365 E5 licensing. This flexible pricing structure ensures that organizations only pay for what they use as data flows through AI models, networks, and applications. For further details, please refer to this blog: New Purview pricing options for protecting AI apps and agents | Microsoft Community Hub

Conclusion
Microsoft Purview DSPM for AI is a game-changer for organizations looking to adopt AI responsibly. By integrating with Copilot in Fabric, it provides a comprehensive framework to discover, protect, and govern AI interactions—ensuring compliance, reducing risk, and enabling secure innovation. Whether you're a Fabric admin, data privacy officer, compliance admin, or security admin, enabling this integration is a strategic step toward building a secure, AI-ready enterprise.

Additional resources
- Use Microsoft Purview to manage data security & compliance for Microsoft Copilot in Fabric | Microsoft Learn
- How to deploy Microsoft Purview DSPM for AI to secure your AI apps
- Learn how Microsoft Purview Data Security Posture Management (DSPM) for AI provides data security and compliance protections for Copilots and other generative AI apps | Microsoft Learn
- Considerations for deploying Microsoft Purview Data Security Posture Management (DSPM) for AI | Microsoft Learn
- Learn about Microsoft Purview billing models | Microsoft Learn
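As a scripted complement to the Monitor and Act step above, interactions captured by Purview Audit can also be pulled programmatically. The following is a minimal sketch, not a definitive implementation, using the Microsoft Graph Audit Log Query API: the AuditLogsQuery.Read.All permission and the 'copilotInteraction' record type value are assumptions to verify against current Graph documentation, and token acquisition is stubbed out.

```python
# Minimal sketch: query Purview Audit for Copilot interaction events via the
# Microsoft Graph Audit Log Query API. Assumptions to verify: the
# AuditLogsQuery.Read.All permission and 'copilotInteraction' as the record
# type under which Copilot prompts and responses are logged.
import time
from datetime import datetime, timedelta, timezone

import requests

GRAPH = "https://graph.microsoft.com/v1.0"
TOKEN = "<access-token>"  # placeholder: acquire a real token via MSAL
HEADERS = {"Authorization": f"Bearer {TOKEN}", "Content-Type": "application/json"}

end = datetime.now(timezone.utc)
start = end - timedelta(days=7)  # illustrative 7-day window

# Create the asynchronous audit query.
resp = requests.post(
    f"{GRAPH}/security/auditLog/queries",
    headers=HEADERS,
    json={
        "displayName": "Copilot interactions - last 7 days",
        "filterStartDateTime": start.isoformat(),
        "filterEndDateTime": end.isoformat(),
        "recordTypeFilters": ["copilotInteraction"],
    },
)
resp.raise_for_status()
query_id = resp.json()["id"]

# Audit queries run in the background; poll until the scan completes.
while True:
    q = requests.get(f"{GRAPH}/security/auditLog/queries/{query_id}", headers=HEADERS).json()
    if q.get("status") in ("succeeded", "failed"):
        break
    time.sleep(30)

# Page through the matched records and print who prompted which app, and when.
records = requests.get(
    f"{GRAPH}/security/auditLog/queries/{query_id}/records", headers=HEADERS
).json()
for rec in records.get("value", []):
    print(rec.get("userPrincipalName"), rec.get("operation"), rec.get("createdDateTime"))
```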
Empowering Secure AI Innovation: Data Security and Compliance for AI Agents

As organizations embrace the transformative power of generative AI, agentic AI is quickly becoming a core part of enterprise innovation. Whether organizations are just beginning their AI journey or scaling advanced solutions, one thing is clear: agents are poised to transform every function and workflow across organizations. IDC predicts that over 1 billion new business process agents will be created in the next four years [1]. This surge in AI adoption is empowering employees across roles—from low-code makers to pro-code developers—to build and use AI in new ways. Business leaders are eager to support this momentum, but they also recognize the need to innovate responsibly with AI.

Microsoft Purview's evolution
When Microsoft 365 Copilot launched, it sparked a wave of excitement and an immediate question: how do we secure and govern the data powering these AI experiences? Microsoft Purview quickly evolved to meet this need, extending its data security and compliance capabilities to the Microsoft 365 Copilot ecosystem. It delivered discoverability, protection, and governance value that helped customers discover data risks such as data oversharing, protect sensitive data to prevent data loss and insider risks, and govern AI usage to meet regulations and policies. Now, as customers move beyond pre-built agents like Copilot to develop their own AI agents and applications, Microsoft Purview has evolved to extend the same data protections built for Microsoft 365 Copilot to AI agents. Today, those protections span the entire development spectrum—from no-code and low-code tools like Copilot Studio to pro-code environments such as Azure AI Foundry.

Microsoft Purview helps address challenges across the development spectrum
Makers—typically business users or citizen developers who build solutions using low-code or no-code tools—shouldn't need to become security experts to build AI responsibly. Yet, without proper safeguards, the agents they build can inadvertently expose sensitive data or violate compliance policies. That is why, with Microsoft Purview, security and IT teams can feel confident about the agents being built in their organizations. When makers build agents through the Agent Builder or directly in Copilot Studio, security admins can set up Microsoft Purview's data security and compliance controls that work behind the scenes to support makers in building secure and compliant agents. These controls automatically enforce policies, monitor data access, and ensure compliance without requiring makers to take additional actions.

Pro-code developers, meanwhile, are under increasing pressure to deliver fast, flexible, and seamlessly integrated solutions, yet data security often becomes a deployment blocker or an afterthought. In fact, a recent Microsoft study found that 71% of developer decision-makers acknowledge that these constraints result in security trade-offs and development delays [2]. Building enterprise-grade data security and compliance capabilities from scratch is not only time-consuming but also requires deep domain expertise. This is where Microsoft Purview steps in. As an industry leader in data security and compliance, Purview does the heavy lifting, so developers don't have to. Now in preview, the Purview SDK can be used by developers to embed robust, enterprise-ready data protections directly into their AI applications, instead of building complex security frameworks on their own.
The Purview SDK is a comprehensive set of REST APIs, documentation, and code samples, allowing developers to easily incorporate Microsoft Purview's capabilities into their workflows—regardless of their integrated development environment (IDE). This empowers them to move fast without compromising on security or compliance, while Microsoft Purview helps security teams remain in control. By embedding Purview APIs into the IDE, developers help enable their AI apps to be secured and governed at runtime.

Startups, ISVs, and partners can leverage the Purview SDK to seamlessly integrate Purview's industry-leading features into their AI agents and applications. This enables their offerings to become Purview-aware, empowering customers to more easily secure and govern data within their AI environments. For example, Christian Veillette, Chief Technology Officer at Arthur Health, a Quisitive customer, states: "The synergistic integration of MazikCare, the Quisitive Intelligence Platform, and the data compliance power of Purview SDK, including its DSPM for AI, forms a foundational pillar for trustworthy and safe AI-driven healthcare transformations. This powerful combination ensures continuous oversight and instant enforcement of compliance policies, giving IT leadership full assurance in the output of every AI model and upholding the highest safety standards. By centralizing policy enforcement, security concerns are significantly eased, empowering leadership to confidently steer their organizations through the AI transformation journey." Microsoft partner Infotechtion has also leveraged the new Purview SDK to embed Purview value into their GenAI initiatives. Vivek Bhatt, Infotechtion's Chief Technology Officer, says: "Embedding Purview SDK into Infotechtion's AI governance solution improved trust and security by aligning Gen-AI interactions with Microsoft Purview's enterprise policies."

Microsoft Purview also natively integrates with Azure AI Foundry, enabling seamless, built-in security and compliance for AI workloads without requiring additional development effort. With this integration, signals from Azure AI Foundry are automatically surfaced in Microsoft Purview's Data Security Posture Management (DSPM) for AI, Insider Risk Management, and compliance solutions. This means security teams can monitor AI usage, detect data risks, and enforce compliance policies across AI agents and applications—whether they're built in-house or with Azure AI Foundry models. This reinforces Microsoft's commitment to delivering secure-by-default AI innovation—empowering organizations to scale responsibly with confidence. Data security admins can now find data security and compliance insights across Microsoft Copilots, agents built with Agent Builder and Copilot Studio, and custom AI apps and agents in Microsoft Purview DSPM for AI.

Explore more partner case studies from Ernst & Young and Infosys to see how they're leveraging the Purview SDK. Learn more about Purview SDK and Microsoft Purview for Azure AI Foundry.
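To make the SDK idea concrete, here is a minimal sketch of an app asking Purview to evaluate a user prompt before passing it to an LLM. This is an illustration under stated assumptions, not the definitive SDK contract: the Graph beta 'processContent' endpoint path and every field name in the payload are assumptions based on the public preview, so consult the Purview SDK reference for the authoritative shapes and required permissions.

```python
# Hedged sketch: submit a prompt to Purview for policy evaluation before your
# app sends it to a model. Endpoint path and payload shape are ASSUMPTIONS
# based on the Purview SDK public preview; verify against the SDK docs.
import requests

GRAPH_BETA = "https://graph.microsoft.com/beta"
TOKEN = "<access-token>"  # placeholder: acquire via MSAL with the SDK's permissions

def process_prompt(user_id: str, prompt: str) -> dict:
    """Ask Purview to classify/evaluate a prompt; returns any policy actions."""
    body = {
        # Illustrative payload: one text entry plus minimal app metadata.
        "contentToProcess": {
            "contentEntries": [
                {
                    "@odata.type": "microsoft.graph.processConversationMetadata",
                    "identifier": "prompt-001",
                    "content": {"@odata.type": "microsoft.graph.textContent", "data": prompt},
                    "name": "user prompt",
                }
            ],
            "activityMetadata": {"activity": "uploadText"},
            "integratedAppMetadata": {"name": "contoso-ai-app", "version": "1.0"},
        }
    }
    resp = requests.post(
        f"{GRAPH_BETA}/users/{user_id}/dataSecurityAndGovernance/processContent",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json=body,
    )
    resp.raise_for_status()
    # Inspect returned policy actions to decide whether to block or continue.
    return resp.json()
```

The design point is the same regardless of exact field names: the app defers classification and policy decisions to Purview at runtime, so prompts and responses are evaluated and audited centrally instead of each app re-implementing its own controls.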
Unified visibility and control
Whether supporting pro-code developers or low-code makers, Microsoft Purview enables organizations to secure and govern AI across the organization. With Purview, security teams can discover data security risks, protect sensitive data against data leakage and insider risks, and govern AI interactions.

Discover data security risks
With Data Security Posture Management (DSPM) for AI, data security teams can discover detailed data risk insights in AI interactions across Microsoft Copilots, agents built in Agent Builder and Copilot Studio, and custom AI apps and agents, all in Microsoft Purview DSPM for AI.

Protect sensitive data against data leaks and insider risks
In DSPM for AI, data security admins can also get recommended insights to improve their organization's security posture, like minimizing risks of data oversharing. For example, an admin might get a recommendation to set up a data loss prevention (DLP) policy that prevents agents in Microsoft 365 Copilot from using certain labeled documents as grounding data to generate summaries or responses. By setting up this policy, organizations can prevent confidential legal documents—with specific language that could lead to improper guidance—from being summarized. It also ensures that "Internal only" documents aren't used to create content that might be shared outside the organization. In this way, DLP policies extend to agents in Microsoft 365 to protect sensitive data.

Agents often pull data from sources like SharePoint and Dataverse, and Microsoft Purview helps protect that data every step of the way. It honors sensitivity labels, enforces access permissions, and applies label inheritance so that AI-generated content carries the same protections as its source. With auto-labeling in Dataverse, sensitive data is classified as soon as it's ingested—reducing manual effort and maintaining consistent protection—and AI-generated responses inherit and honor the source data's sensitivity labels. When responses draw from multiple sources with different labels, the most restrictive label is applied to uphold compliance and minimize risk.

In addition to the data and permission controls that help address data oversharing or leakage, security teams also need ways to detect users' risky activities in AI apps and agents that could potentially lead to data security incidents. With risky AI usage indicators, a policy template, and an analytics report in Microsoft Purview Insider Risk Management, security teams with appropriate permissions can detect risky activities—for example, a departing employee receiving an unusual number of AI responses containing sensitive data across Copilots and agents, deviating from their past activity patterns. Security teams can then effectively detect and respond to these potential incidents to minimize the negative impact; for instance, they can configure Adaptive Protection to automatically block a high-risk user from accessing sensitive data. An Insider Risk Management alert from a risky AI usage policy shows such a user with anomalous activities.

Govern AI interactions to detect non-compliant usage
Microsoft Purview provides a comprehensive set of tools to govern AI usage and detect non-compliant user activities. AI interactions across Microsoft Copilots, AI apps, and agents are recorded in audit logs. eDiscovery enables legal and compliance teams with appropriate permissions to collect and review AI-generated content for internal investigations or litigation.
Data Lifecycle Management enables teams to set policies to retain or dispose of AI interactions, while Communication Compliance helps detect risky or inappropriate use of AI, such as harmful content or other violations against code-of-conduct policies, in AI prompts across Microsoft Copilots, AI apps, and agents. Together, these capabilities give organizations the visibility and control they need to innovate responsibly with AI.

Securing the Future of AI Innovation — Explore Additional Resources
As organizations accelerate their adoption of agentic AI, the need for built-in security and compliance has never been more critical. Microsoft Purview empowers both makers and developers to innovate with confidence—ensuring that every AI interaction is secure, compliant, and aligned with enterprise standards. By embedding protection across the entire development lifecycle, Purview helps organizations unlock the full potential of AI while maintaining the trust, transparency, and control that responsible innovation demands. To dive deeper into how Microsoft Purview supports secure AI development, explore our additional resources, documentation, and integration guides:
- Learn more about Security for AI solutions on our webpage
- Learn more about Microsoft Purview SDK
- Learn more about Purview pricing
- Get started with Azure AI Foundry
- Get started with Microsoft Purview

[1] IDC, 1 Billion New Logical Applications: More Background, Gary Chen, Jim Mercer, April 2024, https://blogs.idc.com/2025/04/04/the-agentic-evolution-of-enterprise-applications/
[2] Microsoft, AI App Security Quantitative Study, April 2025
Microsoft Purview Powering Data Security and Compliance for Security Copilot

Microsoft Purview provides security and compliance teams with extensive visibility into admin actions within Security Copilot. It offers tools for enriched user and data insights to identify, review, and manage Security Copilot interaction data in DSPM for AI. Data security and compliance administrators can also utilize Purview's capabilities for data lifecycle management and information protection, advanced retention, eDiscovery, and more. These features support detailed investigations into logs to demonstrate compliance within the Copilot tenant.

Prerequisites
Please refer to the prerequisites for Security Copilot and DSPM for AI in the Microsoft Learn docs.

Key Capabilities and Features

Heightened Context and Clarity
As organizations adopt AI, implementing data controls and a Zero Trust approach is essential to mitigate risks like data oversharing, leakage, and non-compliant usage. Microsoft Purview, combined with Data Security Posture Management (DSPM) for AI, empowers security and compliance teams to manage these risks across Security Copilot interactions. With this integration, organizations can:
- Discover data risks by identifying sensitive information in user prompts and responses. Microsoft Purview surfaces these insights in the DSPM for AI dashboard and recommends actions to reduce exposure.
- Identify risky AI usage using Microsoft Purview Insider Risk Management to investigate behaviors such as inadvertent sharing of sensitive data, or to detect suspicious activity within Security Copilot usage.
These capabilities provide heightened visibility into how AI is used across the organization, helping teams proactively address potential risks before they escalate.

Compliance and Governance
Building on this visibility, organizations can take action using Microsoft Purview's integrated compliance and governance solutions. Here are some examples of how teams are leveraging these capabilities to govern Security Copilot interactions:
- Audit provides a detailed log of user and admin activity within Security Copilot, enabling organizations to track access, monitor usage patterns, and support forensic investigations.
- eDiscovery enables legal and investigative teams to identify, collect, and review Security Copilot interactions as part of case workflows, supporting defensible investigations.
- Communication Compliance helps detect potential policy violations or risky behavior in administrator interactions, enabling proactive monitoring and remediation.
- Data Lifecycle Management allows teams to automate the retention, deletion, and classification of Security Copilot data—reducing storage costs and minimizing risk from outdated or unnecessary information.
Together, these tools provide a comprehensive governance framework that supports secure, compliant, and responsible AI adoption across the enterprise.

Getting Started

Enable Purview Audit for Security Copilot
Sign in to your Copilot tenant at https://securitycopilot.microsoft.com/ and, with Security Administrator permissions, navigate to the Security Copilot owner settings and ensure audit logging is enabled.

Microsoft Purview
To start using DSPM for AI and the Microsoft Purview capabilities, complete the following steps to get set up, and then feel free to experiment yourself.
1. Navigate to Purview (purview.microsoft.com) and ensure you have adequate permissions to access the different Purview solutions as described here.

DSPM for AI
2. Select the DSPM for AI solution option in the left-most navigation.
3. Go to the policies or recommendations tab and turn on the following:
   a. "DSPM for AI – Capture interactions for Copilot experiences": captures prompts and responses for data security posture and regulatory compliance from Security Copilot and other Copilot experiences.
   b. "Detect risky AI usage": helps to calculate user risk by detecting risky prompts and responses in Copilot experiences.
   c. "Detect unethical behavior in AI apps": detects sensitive info and inappropriate use of AI in prompts and responses in Copilot experiences.
4. To begin reviewing Security Copilot usage within your organization and identifying interactions that contain sensitive information, select Reports from the left navigation panel.
   a. The "Sensitive interactions per AI app" report shows the most common sensitive information types used in Security Copilot interactions and their frequency. For instance, this tenant has a significant amount of IT and IP address information within these interactions. Therefore, it is important to ensure that all sensitive information used in Security Copilot interactions serves legitimate workplace purposes and does not involve any malicious or non-compliant use of Security Copilot.
   b. "Top unethical AI interactions" shows an overview of any potentially unsafe or inappropriate interactions with AI apps. In this case, Security Copilot has only seven potentially unsafe interactions, which included unauthorized disclosure and regulatory collusion.
   c. "Insider risk severity per AI app" shows the number of high-risk, medium-risk, low-risk, and no-risk users interacting with Security Copilot. In this tenant, there are about 1.9K Security Copilot users, but very few of them have an insider risk concern.
   d. To check the interaction details of this potentially risky activity, head over to Activity Explorer for more information.
5. In Activity Explorer, filter the app to Security Copilot. You also have the option to filter based on user risk level and sensitive information type. To identify the highest-risk behaviors, filter for users with a medium to high risk level or those associated with the most sensitive information types.
   a. Once you have filtered, you can start looking through the activity details for more information, like the user details, the sensitive information types, the prompt and response data, and more.
   b. Based on the details shown, you may decide to investigate the activity and the user further. To do so, we have data security investigation and governance tools.

Data Security Investigations and Governance
If you find Security Copilot actions in DSPM for AI Activity Explorer to be potentially inappropriate or malicious, you can look for further information in Insider Risk Management (IRM), through an eDiscovery case, Communication Compliance (CC), or Data Lifecycle Management (DLM).

Insider Risk Management
By enabling the quick policy in DSPM for AI to monitor risky Copilot usage, alerts will start appearing in IRM. Customize this policy based on your organization's risk tolerance by adjusting triggering events, thresholds, and indicators for detected activity. Examine the alerts associated with the "DSPM for AI – Detect risky AI usage" policy, potentially sorting them by severity from high to low. For these alerts, you will find a User Activity scatter plot that provides insights into the activities preceding and following the user's engagement with a risky prompt in Security Copilot.
This assists the data security administrator in understanding the necessary triage actions for this user and alert. After thoroughly investigating these details and determining whether the activity was malicious or an inadvertent insider risk, appropriate actions can be taken, including issuing a user warning, resolving the case, sharing the case with an email recipient, or escalating the case to eDiscovery for further investigation.

eDiscovery
To identify, review, and manage your Security Copilot logs to support your investigations, use the eDiscovery tool. Here are the steps to take in eDiscovery (a scripted version of these steps is sketched at the end of this article):
a. Create an eDiscovery case
b. Create a new search
c. In Search, go to the condition builder and select Add conditions -> KeyQL
d. Enter the query as: KQL Equal (ItemClass=IPM.SkypeTeams.Message.Copilot.Security.SecurityCopilot)
e. Run the query
f. Once completed, add the search to a review set (button at the top)
g. In the review set, view the details of the Security Copilot conversation

Communication Compliance
In Communication Compliance, as in IRM, you can investigate details around the Security Copilot interactions. Specifically, in CC, you can determine whether these interactions contained non-compliant usage of Security Copilot or inappropriate text. After identifying the sentiment of the Security Copilot communication, you can take action by resolving the alert, sending a warning notice to the user, escalating the alert to a reviewer, or escalating the alert for investigation, which will create a new eDiscovery case.

Data Lifecycle Management
For regulatory compliance or investigation purposes, navigate to Data Lifecycle Management to create a new retention policy for Security Copilot activities:
a. Provide a friendly name for the retention policy and select Next
b. Skip the Policy Scope section for this validation
c. Select the "Static" type of retention policy and select Next
d. Choose "Microsoft Copilot Experiences" to apply the retention policy to Security Copilot interactions

Billing Model
Microsoft Purview audit logging of Security Copilot activity remains included at no additional cost as part of Microsoft 365 E5 licensing. However, Microsoft Purview now offers a combination of entitlement-based (per-user-per-month) and pay-as-you-go (PAYG) pricing models. The PAYG model applies to a broader set of Purview capabilities—including Insider Risk Management, Communication Compliance, eDiscovery, and other data security and governance solutions—based on usage volume or complexity. This flexible pricing structure ensures that organizations only pay for what they use as data flows through AI models, networks, and applications. For further details, please refer to this Microsoft Security Community blog: New Purview pricing options for protecting AI apps and agents | Microsoft Community Hub

Looking Ahead
By following these steps, organizations can leverage the full potential of Microsoft Purview to enhance the security and compliance of their Security Copilot interactions. This integration not only provides peace of mind but also empowers organizations to manage their data more effectively. Please reach out to us if you have any questions or additional requirements.
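For teams that prefer automation, the eDiscovery steps above can also be driven through the Microsoft Graph eDiscovery API. The sketch below is a minimal illustration, assuming an app or user with the eDiscovery.ReadWrite.All permission and a pre-acquired token; the display names are placeholders, the ItemClass query mirrors step (d) in the portal, and the exact request shapes should be verified against current Graph documentation.

```python
# Hedged sketch of the portal steps via the Microsoft Graph eDiscovery API
# (v1.0): create a case, add a search scoped to Security Copilot
# interactions, and stage the results into a review set. Assumes a token with
# the eDiscovery.ReadWrite.All permission; error handling kept minimal.
import requests

CASES = "https://graph.microsoft.com/v1.0/security/cases/ediscoveryCases"
TOKEN = "<access-token>"  # placeholder: acquire via MSAL
HEADERS = {"Authorization": f"Bearer {TOKEN}", "Content-Type": "application/json"}

# Step a: create the eDiscovery case.
case = requests.post(
    CASES, headers=HEADERS, json={"displayName": "Security Copilot investigation"}
).json()
case_id = case["id"]

# Steps b-d: create a search using the same KeyQL condition as in the portal.
search = requests.post(
    f"{CASES}/{case_id}/searches",
    headers=HEADERS,
    json={
        "displayName": "Security Copilot interactions",
        "contentQuery": "(ItemClass=IPM.SkypeTeams.Message.Copilot.Security.SecurityCopilot)",
    },
).json()

# Step f: create a review set, then add the search results to it.
review_set = requests.post(
    f"{CASES}/{case_id}/reviewSets",
    headers=HEADERS,
    json={"displayName": "Copilot conversations"},
).json()
# addToReviewSet runs asynchronously; results appear in the review set (step g)
# once the operation completes.
requests.post(
    f"{CASES}/{case_id}/reviewSets/{review_set['id']}/addToReviewSet",
    headers=HEADERS,
    json={"search": {"id": search["id"]}},
)
```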
Additional Resources
- Use Microsoft Purview to manage data security & compliance for Microsoft Security Copilot | Microsoft Learn
- How to deploy Microsoft Purview DSPM for AI to secure your AI apps
- Learn how Microsoft Purview Data Security Posture Management (DSPM) for AI provides data security and compliance protections for Copilots and other generative AI apps | Microsoft Learn
- Considerations for deploying Microsoft Purview Data Security Posture Management (DSPM) for AI | Microsoft Learn
- Learn about Microsoft Purview billing models | Microsoft Learn
Modern, unified data security in the AI era: New capabilities in Microsoft Purview

AI is transforming how organizations work—but it's also changing how data moves, who can access it, and how easily it can be exposed. Sensitive data now appears in AI prompts, Copilot responses, and across a growing ecosystem of SaaS and GenAI tools. To keep up, organizations need data security that's built for how people work with AI today. Microsoft Purview brings together native classification, visibility, protection, and automated workflows across your data estate—all in one integrated platform. Today, we're highlighting some of our new capabilities that help you:
- Uncover data blind spots: discover hidden risks, improve data security posture, and find sensitive data on endpoints with on-demand classification
- Strengthen protection across data flows: enhance oversharing controls for Microsoft 365 Copilot, expand protection to more Azure data sources, and extend data security to the network layer
- Respond faster with automation: automate investigation workflows with alert agents in Data Loss Prevention (DLP) and Insider Risk Management (IRM)

Discover hidden risks and improve data security posture
Many security teams struggle with fragmented tools that silo visibility into sensitive data across apps and clouds. According to recent studies, 21% of decision-makers cite the lack of unified visibility as a top barrier to effective data security. This leads to gaps in protection and inefficient incident response—ultimately weakening the organization's overall data security posture. To help organizations address these challenges, last November at Ignite we launched Microsoft Purview Data Security Posture Management (DSPM), and we're excited to share that this capability is now available. DSPM continuously assesses your data estate, surfaces contextual insights into sensitive data and its usage, and recommends targeted controls to reduce risk and strengthen your data security program. We're also bringing new signals from email exfiltration and from user activity in the browser and network into DSPM's insights and policy recommendations, making sure organizations can improve their protections and address potential data security gaps. You can now also experience deeper investigations in DSPM with 3x more suggested prompts, outcome-based promptbooks, and a new guidance experience that helps interpret unsupported user queries and offers helpful alternatives, increasing usability without hard stops. These include new Security Copilot task-based promptbooks in Purview DSPM. Learn more about how DSPM can help your organization strengthen your data security posture.

Find sensitive data on endpoints with on-demand classification
Security teams often struggle to uncover sensitive data sitting for a long time on endpoints, one of the most overlooked and unmanaged surfaces in the data estate. Typically, data gets classified when a file is created, modified, or accessed. As a result, older data at rest that hasn't been touched in a while can remain outside the scope of classification workflows. This lack of visibility increases the risk of exposure, especially for sensitive data that is not actively used or monitored. To tackle this challenge, we are introducing on-demand classification for endpoints. Coming to public preview in July, on-demand classification for endpoints gives security teams a targeted way to scan data at rest on Windows devices, without relying on file activity, to uncover sensitive files that have never been classified or reviewed.
This means you can:
- Discover sensitive data on endpoints, including older, unclassified data that may never have been scanned, giving admins visibility into files that typically fall outside traditional classification workflows
- Support audit and compliance efforts by identifying sensitive data
- Focus scans on specific users, file types, or timelines to get the visibility that really matters
- Get the insights needed to prioritize remediation or protection strategies
Security teams can define where or what to focus on by selecting specific users, file types, or last-modified dates. This allows teams to prioritize scans for high-priority scenarios, like users handling sensitive data. Because on-demand classification scans are manually triggered and scoped without complex configuration, organizations can get targeted visibility into sensitive data on endpoints with minimal performance impact and no complex setup.

Complements just-in-time protection
On-demand classification for endpoints also works hand in hand with existing endpoint DLP capabilities like just-in-time (JIT) protection:
- JIT protection kicks in during file access, blocking or alerting based on real-time content evaluation
- On-demand classification works ahead of time, identifying sensitive data that hasn't been modified or accessed in an extended period
Used together, they form a layered endpoint protection strategy, ensuring full visibility and protection.

Choosing the right tool
On-demand classification for endpoints is purpose-built for discovering sensitive data at rest on endpoints, especially files that haven't been accessed or modified for a long time. It gives admins targeted visibility—no user action required. If you're looking to apply labels, enforce protection policies, or scan files stored on on-premises servers, the Microsoft Purview Information Protection scanner may be a better fit. It is designed for ongoing policy enforcement and label application across your hybrid environment. Learn more here.

Get started with on-demand classification
On-demand classification is easy to set up, with no agents to install or complex rules to configure. It only runs when you choose, rather than continuously running in the background. You stay in control of when and where scans happen, making it a simple and efficient way to extend visibility to endpoints. On-demand classification for endpoints enters public preview in July. Stay tuned for setup guidance and more details as we get closer to launch.

Streamlining technical issue resolution with always-on diagnostics for endpoint devices
Historically, resolving technical support tickets for Purview DLP required admins to manually collect logs and have end users reproduce the original issue at the time of the request. This could lead to delays, extended resolution times, and repeated communication cycles, especially for non-reproducible issues. Today, we're introducing a new way to capture and share endpoint diagnostics: always-on diagnostics, available in public preview. When submitting support requests for Purview endpoint DLP, customers can now share rich diagnostic data with Microsoft without needing to recreate the issue scenario at the time of submitting an investigation request such as a support ticket. This capability can be enabled through your endpoint DLP settings. Learn more about always-on diagnostics here.
Strengthening DLP for Microsoft 365 Copilot
As organizations adopt Microsoft 365 Copilot, DLP plays a critical role in minimizing the risk of sensitive data exposure through AI. New enhancements give security teams greater control, visibility, and flexibility when protecting sensitive content in Copilot scenarios.

Expanded protection to labeled emails
DLP for Microsoft 365 Copilot now supports labeled email, available today, in addition to files in SharePoint and OneDrive. This helps prevent sensitive emails from being processed by Copilot and used as grounding data. This capability applies to emails sent after 1/1/2025.

Alerts and investigations for Copilot access attempts
Security teams can now configure DLP alerts for Microsoft 365 Copilot activity, surfacing attempts to access emails or files with sensitivity labels that match DLP policies. Alert reports include key details like user identity, policy match, and file name, enabling admins to quickly assess what happened, determine if further investigation is needed, and take appropriate follow-up actions. Admins can also choose to notify users directly, reinforcing responsible data use. The rollout will start on June 30 and is expected to be completed by the end of July.

Simulation mode for Copilot DLP policies
As part of the rollout starting on June 30, simulation mode lets admins test Copilot-specific DLP policies before enforcement. By previewing matches without impacting users, security teams can fine-tune rules, reduce false positives, and deploy policies with greater confidence. Learn more about DLP for Microsoft 365 Copilot here.

Extended protection to more Azure data sources
AI development is only as secure as the data that feeds it. That's why Microsoft Purview Information Protection is expanding its auto-labeling capabilities to cover more Azure data sources. Now in public preview, security teams can automatically apply sensitivity labels to additional Azure data sources, including Azure Cosmos DB, PostgreSQL, KustoDB, MySQL, Azure Files, Azure Databricks, Azure SQL Managed Instances, and Azure Synapse. These additions build on existing coverage for Azure Blob Storage, Azure Data Lake Storage, and Azure SQL Database. These sources commonly fuel analytics pipelines and AI training workloads. With auto-labeling extended to more high-value data sources, sensitivity labels are applied to the data before it's copied, shared, or integrated into downstream systems. These labels help enforce protection policies and limit unauthorized access to ensure sensitive data is handled appropriately across apps and AI workflows. Secure your AI training data: learn how to set up auto-labeling here.

Extending data security to the network layer
With more sensitive data moving through unmanaged SaaS apps and personal AI tools, your network is now a critical security surface. Earlier this year, we announced the introduction of Purview data security controls for the network layer. With inline data discovery for the network, organizations can detect sensitive data that moves outside the trusted boundaries of the organization, such as to unmanaged SaaS apps and cloud services. This helps admins understand how sensitive data can be intentionally or inadvertently exfiltrated to personal instances of apps, unsanctioned GenAI apps, cloud storage boxes, and more. This capability is now available in public preview — learn more here.
Visibility into sensitive data sent through the network also includes insights into how users may be sharing data in risky ways. User activities such as file uploads or AI prompt submissions are captured in Insider Risk Management to build richer, more comprehensive profiles of user risk. In turn, these signals will better contextualize future data interactions and enrich policy verdicts. These user risk indicators will become available in the coming weeks.

Automate investigation workflows with Alert Triage Agents in DLP and IRM
Security teams today face a high volume of alerts, often spending hours sorting through false positives and low-priority flags to find the threats that matter. To help security teams focus on what's truly high risk, we're excited to share that the Alert Triage Agents in Microsoft Purview Data Loss Prevention (DLP) and Insider Risk Management (IRM) are now available in public preview. These autonomous, Security Copilot-powered agents prioritize the alerts that pose the greatest risk to your organization. Whether it's identifying high-impact exfiltration attempts in DLP or surfacing potential insider threats in IRM, the agents analyze both content and intent to deliver transparent, explainable findings. Built to learn and improve from user feedback, these agents not only accelerate investigations but also improve over time, empowering teams to prioritize real threats, reduce time spent on false positives, and adapt to evolving risks. Watch the new Mechanics video, or learn more about how to get started here.

A unified approach to modern data security
Disjointed security tools create gaps and increase operational overhead. Microsoft Purview offers a unified data security platform designed to keep pace with how your organization works with AI today. From endpoint visibility to automated security workflows, Purview unifies data security across your estate, giving you one platform for end-to-end data security. As your data estate grows and AI reshapes the way you work, Purview helps you stay ahead—so you can scale securely, reduce risk, and unlock the full productivity potential of AI with confidence. Ready to unify your data security into one integrated platform? Try Microsoft Purview free for 90 days.
Help! Sensitivity label applied to whole tenant mistakenly with Watermark

We created a sensitivity label to apply a watermark to the files it is assigned to, but accidentally, or due to a misconfiguration, the watermark was applied across the whole tenant and its files. We need a solution to automatically remove these watermarks from the files wherever they were applied. Please assist, TIA...

Sharing: All Built-in SIT categorised
So, Microsoft Purview gives you 313 built-in Sensitive Information Types (SITs)—yes, I counted! When I worked with a cyber risk auditor, one of their asks was categorizing all the SITs we had decided to deploy. This was a bit of a nightmare, so I took one for the team and grouped them into three neat categories: PII, Financial, and Medical. Now, I'm sharing it with you so that my struggle can save you the headache. You're welcome! Download the Excel spreadsheet here: All SIT list and their categories.xlsx
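If you want to slice the workbook programmatically once downloaded, here is a minimal sketch using pandas; the column names ("SIT Name", "Category") are assumptions about the spreadsheet layout, so adjust them to match the actual file.

```python
# Load the shared workbook and filter SITs by category. Column names are
# assumed ("SIT Name", "Category") -- adjust to the actual sheet layout.
import pandas as pd

df = pd.read_excel("All SIT list and their categories.xlsx")

# Count SITs per category, then pull just the Financial ones.
print(df["Category"].value_counts())
financial_sits = df.loc[df["Category"] == "Financial", "SIT Name"].sort_values()
print(financial_sits.to_string(index=False))
```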
Retirement notification for the Azure Information Protection mobile viewer and RMS Sharing App

Over a decade ago, we launched the Azure Information Protection (AIP) mobile app for iOS and Android and the Rights Management Service (RMS) Sharing app for Mac to fill an important niche in our non-Office file ecosystem: enabling users to securely view protected file types like (P)PDF, RPMSG, and PFILE outside of Windows. These viewing applications are integrated with sensitivity labels from Microsoft Purview and encryption from the Rights Management Service to view protected non-Office files and enforce protection rights. Today, usage of these apps is very low, especially for file types other than PDFs. Most PDF use cases have already shifted to native Office apps and modern Microsoft 365 experiences. As part of our ongoing modernization efforts, we've decided to retire these legacy apps. We are officially announcing the retirement of the AIP Mobile and RMS Sharing apps and starting the 12-month clock, after which they will reach retirement on May 30, 2026. All customers with Azure Information Protection P1 service plans will also receive a Message Center post with this announcement. In this blog post, we will cover what you need to know about the retirement, share key resources to support your transition, and explain how to get help if you have questions.

Q. How do I view protected non-Office files on iOS and Android?
Instead of one application for all non-Office file types, view these files in the apps where you'd most commonly see them. For example, use the OneDrive app or the Microsoft 365 Copilot app to open protected PDFs. Here's a summary of which applications support each file type:
1) PDF and PPDF: Open protected PDF files with Microsoft 365 Copilot, OneDrive, or Edge. These applications have native support to view labels and enforce protection rights. Legacy PPDF files must be opened with the Microsoft Information Protection File Labeler on Windows and saved as PDF before they can be viewed.
2) PFILE: These files are no longer viewable on iOS and Android. PFILE covers the file types supported for classification and protection, with extensions like PTXT, PPNG, PJPG, and PXML. To view these files, use the Microsoft Purview Information Protection Viewer on Windows.
3) RPMSG: These files are also no longer viewable on iOS and Android. To view these files, use classic Outlook on Windows.

Q. Where can I download the required apps for iOS, Android, or Windows?
These apps are available for download on the Apple App Store, Google Play Store, Microsoft Download Center, or Microsoft Store.
- Microsoft 365 Copilot: Android / iOS
- Microsoft OneDrive: Android / iOS
- Microsoft Edge: AI browser: Android / iOS
- Microsoft Purview Information Protection Client: Windows
- Classic Outlook for Windows: Windows

Q. Is there an alternative app to view non-Office files on Mac?
Before May 30, 2026, we will release the Microsoft Purview Information Protection (MPIP) File Labeler and Viewer for Mac devices. This will make the protected non-Office file experience on Mac much better, with the ability to not only view but also modify labels. Meanwhile, continue using the RMS Sharing app.

Q. Is the Microsoft Purview Information Protection Client Viewer going away too?
No. The Microsoft Purview Information Protection Client, previously known as the Azure Information Protection Client, continues to be supported on Windows and is not being retired. We are actively improving this client and plan to bring its viewing and labeling capabilities to Mac as well.

Q. What happens if I already have the RMS Sharing App or AIP Mobile on my device?
You can continue using these apps to view protected files, and download them onto new devices, until retirement on May 30, 2026. At that time, these apps will be removed from app stores and will no longer be supported. While existing versions may continue to function, they will not receive any further updates or security patches.

Q. I need more help. Who can I reach out to?
If you have additional questions, you have a few options:
- Reach out to your Microsoft account team.
- Reach out to Microsoft Support with specific questions.
- Reach out to Microsoft MVPs who specialize in Information Protection.