Secure your AI transformation with Microsoft Security

This post has been republished via RSS; it originally appeared at: New blog articles in Microsoft Community Hub.

Generative AI is reshaping business today for every individual, every team, and every industry. Organizations engage with GenAI in a variety of ways – from purchasing and using finished GenAI apps to developing, deploying, and operating custom-built GenAI apps.

 

GenAI broadens the attack surface of applications through prompts, training data, models, and more – thereby effectively changing the threat landscape with new risks such as direct or indirect prompt injection attacks, data leakage, and data oversharing.

 

In March this year, we shared how Microsoft Security helps organizations discover, protect, and govern the use of GenAI apps like Copilot for M365. Today, we’re thrilled to introduce additional capabilities for that scenario and new capabilities to secure and govern the development, deployment, and runtime of custom-built GenAI apps.

 

With these new innovations, Microsoft Security is at the forefront of AI security to support our customers on their AI journey by being the first security solution provider to offer threat protection for AI workloads and providing comprehensive security to secure and govern AI usage and applications.

 

Secure and govern GenAI you build:

  • Discover new AI attack surfaces with AI security posture management (AI-SPM) in Microsoft Defender for Cloud for AI apps using Azure OpenAI Service, Azure Machine Learning, and Amazon Bedrock
  • Protect your AI apps using Azure OpenAI in runtime with threat protection for AI workloads in Microsoft Defender for Cloud, the first cloud-native application protection platform (CNAPP) to provide runtime protection for enterprise-built AI apps using Azure OpenAI Service

Secure and govern GenAI you use:

  • Discover and mitigate data security and compliance risks with Microsoft Purview AI Hub, now offering new insights, including visibility into unlabeled data and SharePoint sites that are referenced by Copilot for M365 and non-compliant usage such as regulatory collusion, money laundering, and targeted harassment for M365 interactions
  • Govern AI use to comply with regulatory requirements with 4 new AI compliance assessments in Microsoft Purview Compliance Manager

 

Discover new AI attack surfaces

As organizations embrace GenAI, many accelerate adoption with pre-built GenAI applications while others choose to develop GenAI applications in-house, tailored to their unique use cases, security controls and compliance requirements. Organizations from all industries are racing to transform their applications with AI, with over half of Fortune 500 companies using Azure OpenAI.

 

With all the new components of AI workloads such as models, SDKs, training, and grounding data – the visibility into understanding all the configurations of these new components and the risks associated with them is more important than ever. 

 

With new AI security posture management (AI-SPM) capabilities in Microsoft Defender for Cloud, security admins can continuously discover and inventory their organization’s AI components across Azure OpenAI Service, Azure Machine Learning, and Amazon Bedrock – including models, SDKs, and data – as well as sensitive data used in grounding, training, and fine tuning LLMs. Admins can find vulnerabilities, identify exploitable attack paths, and easily remediate risks to get ahead of active threats.

 

Figure 1: Attack path analysis in Defender for Cloud identifies an indirect risk to an Azure OpenAI resource where an attacker can exploit vulnerabilities via an internet exposed VM to potentially gain access and control of the AI resource, model deployments, and data.Figure 1: Attack path analysis in Defender for Cloud identifies an indirect risk to an Azure OpenAI resource where an attacker can exploit vulnerabilities via an internet exposed VM to potentially gain access and control of the AI resource, model deployments, and data.

 

By mapping out AI workloads and synthesizing security signals such as identity, data security, and internet exposure, Defender for Cloud will continuously surface contextualized security issues, exploitable attack paths, and suggest risk-based security recommendations tailored to prioritize critical gaps across your AI workloads.


For example, many AI apps are made up of a dynamic and complex supply chain, of AI artifacts such as SDKs, plugins, models, grounding and training data. If there is an older version of LangChain with vulnerabilities within your environment, an attacker can easily exploit the vulnerability to access sensitive data. Maintaining visibility into the AI inventories and components, as well as associated risks is now more critical than ever.

 

Protect your custom-built GenAI apps against emerging cyberthreats

While having a strong security posture reduces the risk of attacks, the complex and dynamic nature of AI requires active monitoring in runtime as well. As AI expands capabilities of cloud-native applications, it also extends an application’s attack surface, making it susceptible to emerging threats such as prompt injection attacks, secrets and sensitive data leaks, and denial of service attacks. Organizations will need comprehensive security controls that enable them to secure their AI applications throughout their lifecycle – development, deployment, and runtime against threats unique to AI applications.

 

To help address the new AI attack landscape, organizations can now use Microsoft Defender for Cloud to protect their AI workloads from threats. Security teams can detect threats to AI workloads using Azure OpenAI Service, alerting SOC teams to potentially malicious activity on their AI workloads such as prompt injection attacks, credential theft, and sensitive data leakage.

 

The new threat protection capabilities leverage a native integration with Azure AI Content Safety prompt shields and Microsoft threat intelligence signals to deliver contextual and actionable alerts in Defender for Cloud that help SOC analysts understand user behavior with visibility into supporting evidence such as IP address, model deployment details, and segments of suspicious user prompts that triggered the alert.

 

Jailbreak attacks, for example, aim to alter the designed purpose of the model, making the application susceptible to data breaches and denial of service attacks. With Defender for Cloud, SOC analysts will be alerted to blocked prompt injection attempts with context and evidence to the IP and activity, with action steps to follow. The alert also includes recommendations to prevent future attacks on the affected resources and strengthen the security posture of the AI application.

 

Figure 2: Security alert on a jailbreak attempt in Defender for Cloud provides context to the potential causes, the IP used, model deployment impacted, and provides investigation steps to address the potentially malicious activity.Figure 2: Security alert on a jailbreak attempt in Defender for Cloud provides context to the potential causes, the IP used, model deployment impacted, and provides investigation steps to address the potentially malicious activity.

 

By leveraging the evidence provided, SOC teams can classify the alert, assess the impact, and take precautionary steps to fortify the application.

 

Microsoft Defender for Cloud can help organizations strengthen their GenAI security posture and defend against threats to their GenAI applications as part of its market-leading cloud-native application protection platform (CNAPP).

 

Learn more about securing GenAI applications with Defender for Cloud.

 

Discover and mitigate data security and compliance risks with Microsoft Purview AI Hub

In addition to securing GenAI applications, organizations now face one of the most significant challenges – securing and governing data in the era of AI. An alarming 80% of leaders cite the leakage of sensitive data as their primary concern. Security teams often find themselves in the dark when it comes to data security and compliance risks associated with GenAI usage. Without clear visibility into the risks, organizations struggle to safeguard their assets effectively.

 

To help organizations gain a better understanding of AI application usage and the associated risks – we are announcing the public preview of Microsoft Purview AI Hub. AI Hub provides insights like sensitive data in Copilot prompts, the number of users interacting with AI apps and their associated risk level. Admins can gain additional detailed usage insights in Activity explorer to see the files referenced by Copilot and the sensitivity of them.

 

With this public preview, we introduce new insights into unlabeled sensitive data and non-compliant usage within Copilot for Microsoft 365. As organizations adopt Copilot, data security controls become paramount to avoid potential overexposure of sensitive data or SharePoint sites. Microsoft Purview AI Hub addresses this challenge by surfacing unlabeled files and SharePoint sites referenced by Copilot, helping you prioritize your most critical data risks and prevent potential oversharing of sensitive data. Additionally, AI Hub will also provide non-compliant usage insights to discover unethical use in AI interactions that may violate code-of-conduct or regulatory requirements, such as hate or discrimination, corporate sabotage, money laundering, and more.

 

Figure 3: Gain insights into unlabeled files and SharePoint sites referenced in Copilot responses in Microsoft Purview AI HubFigure 3: Gain insights into unlabeled files and SharePoint sites referenced in Copilot responses in Microsoft Purview AI Hub

Once organizations gain insight into the risks, they can more effectively design and implement policies to mitigate them. For example, when admins identify potential sensitive data that may be overshared through Copilot, they can create labeling and DLP policies to secure it. As previously announced last November, Copilot honors sensitivity label policies, such as encryption, ensuring that its generated responses are dependent on a user's permissions and the response will automatically inherit the label policies. Lastly, the ready-to-use policies enable admins to configure data security controls in a few clicks to protect data and prevent data loss in AI prompts and responses. These natively integrated data security controls enable organizations to safeguard sensitive data throughout its lifecycle with Copilot.

 

Govern AI usage to comply with regulatory and code-of-conduct policies

Lastly, as new AI regulations and standards continue to emerge, such as the EU AI Act, NIST AI RMF, ISO/IEC 23894:2023 and ISO/IEC 42001, they are shaping the AI governance landscape to ensure AI systems are developed and used in a manner that is safe, transparent, and responsible. When adopting AI solutions, organizations need to comply with these regulations to not only avoid penalties but also to reduce their security, compliance and governance risks. Yet, 55% of leaders lack understanding of how AI is and will be regulated[1] and are seeking  guidance on how to adhere to these requirements.

 

Today, we are excited to announce 4 new Microsoft Purview Compliance Manager assessment templates to help your organization assess, implement, and strengthen its compliance against AI regulations, including EU AI Act, NIST AI RMF, ISO/IEC 23894:2023 and ISO/IEC 42001. Each assessment provides control guidance and recommended actions. For instance, to adhere to NIST AI RMF control (Govern 1.5), organizations receive step-by-step guidance on configuring audit logs for AI interactions to safeguard against misuse. Compliance Manager guidance will also be surfaced in a card within the Microsoft Purview AI Hub.

 

Figure 4: Get guided assistance to AI regulations in the Microsoft Purview AI HubFigure 4: Get guided assistance to AI regulations in the Microsoft Purview AI Hub

 

Explore more resources for securing and governing AI

With our new capabilities to support your secure AI transformation, Microsoft becomes and only security service provider to deliver broad AI security capabilities, including security posture management, threat protection, data security and compliance, app governance, access and endpoint management for AI. Below are additional resources to further your understanding and help you begin with these new capabilities:

  • Read our blog about securing AI applications with Microsoft Defender for Cloud.
  • Learn more about Microsoft Defender for Cloud innovations at RSA.
  • Get started with Microsoft Defender for Cloud.
  • Try Microsoft Purview AI Hub, which will start rolling out in public preview to customer tenants starting May 6th!
  • Licensing for AI Hub is still being determined but you can try it today by activating a free trial. Visit your Microsoft Purview compliance portal to activate a free trial. An active Microsoft 365 E3 subscription is required as a prerequisite to a free trial.
  • Read our blog - Secure and govern AI usage with Microsoft Purview.
  • Watch our Microsoft Secure event product demos for securing and governing AI usage.
  • Learn more about Microsoft Purview AI Hub.
  • Learn more about how to secure and govern Copilot with Microsoft Purview.
  • Learn more about how to secure and govern generative AI apps with Microsoft Purview.
  • Get started on zero trust by preparing your environment for AI.

 

[1] Business rewards vs. security risks, n=400, Q3 2023, ISMG

Leave a Reply

Your email address will not be published. Required fields are marked *

*

This site uses Akismet to reduce spam. Learn how your comment data is processed.