Comprehensive AI Safety and Security with Defense in Depth for Enterprises

This post has been republished via RSS; it originally appeared at Microsoft Tech Community – Latest Blogs.


Azure AI Content Safety APIs

Azure AI Content Safety is a service that helps detect hateful, violent, sexual, and self-harm content in images and text and assigns severity scores, allowing businesses to limit and prioritize what content moderators need to review. Unlike many solutions in use today, Azure AI Content Safety can handle nuance and context, reducing the number of false positives and easing the load on human content moderation teams.
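The severity scores are what make prioritization possible: a minimal sketch of gating content on per-category severities is shown below. The service returns a severity per category (0, 2, 4, 6 on the default four-level scale); the thresholds here are illustrative, not official defaults, and a real call would go through the Azure AI Content Safety SDK or REST API rather than the mocked response used here.

```python
# Illustrative per-category thresholds (NOT official defaults): flag
# self-harm at any non-zero severity, other categories at 4 or above.
BLOCK_THRESHOLDS = {"Hate": 4, "Violence": 4, "Sexual": 4, "SelfHarm": 2}

def moderate(analysis: dict) -> tuple[bool, list[str]]:
    """Return (allowed, flagged_categories) for an analyze-text response."""
    flagged = [
        c["category"]
        for c in analysis.get("categoriesAnalysis", [])
        if c["severity"] >= BLOCK_THRESHOLDS.get(c["category"], 6)
    ]
    return (not flagged, flagged)

# Mocked response in the shape the Analyze text API returns.
response = {"categoriesAnalysis": [
    {"category": "Hate", "severity": 2},
    {"category": "Violence", "severity": 4},
]}
allowed, flagged = moderate(response)
print(allowed, flagged)  # False ['Violence']
```

Tuning the thresholds per category is the point of multi-severity output: a business can auto-block only high-severity content and route mid-severity content to human review.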

 

Prompt Shields (preview)

Prompt Shields identifies and blocks direct and indirect prompt injection attacks before they reach your model, scanning text for the risk of a user input attack on a large language model. Quickstart
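In practice an application checks the Prompt Shields result before forwarding anything to the model. The sketch below shows that decision; the field names (`userPromptAnalysis`, `documentsAnalysis`, `attackDetected`) follow the preview response shape, which may change, and the result is mocked rather than fetched from the service.

```python
def should_block(shield_result: dict) -> bool:
    """Block if an attack was detected in the user prompt (direct
    injection) or in any supplied document (indirect injection)."""
    if shield_result.get("userPromptAnalysis", {}).get("attackDetected"):
        return True
    return any(
        d.get("attackDetected")
        for d in shield_result.get("documentsAnalysis", [])
    )

# Mocked preview-shaped result: the prompt is clean, but a retrieved
# document carries an indirect injection.
result = {
    "userPromptAnalysis": {"attackDetected": False},
    "documentsAnalysis": [{"attackDetected": True}],
}
print(should_block(result))  # True
```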

Groundedness detection (preview)

Groundedness detection identifies model "hallucinations": it detects whether the text responses of large language models (LLMs) are grounded in the source materials provided by the user, so you can block or highlight ungrounded responses. Quickstart
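The block-or-highlight decision can be driven directly off the detector's result. A minimal sketch, assuming the preview response fields `ungroundedDetected` and `ungroundedPercentage` (which may change) and an illustrative tolerance threshold:

```python
def needs_review(result: dict, max_ungrounded: float = 0.1) -> bool:
    """Flag an LLM answer for blocking or review when the groundedness
    detector reports more ungrounded text than we tolerate.
    `max_ungrounded` is an illustrative threshold, not a service default."""
    if not result.get("ungroundedDetected", False):
        return False
    return result.get("ungroundedPercentage", 1.0) > max_ungrounded

# 40% of the answer is ungrounded in the provided sources -> flag it.
print(needs_review({"ungroundedDetected": True, "ungroundedPercentage": 0.4}))  # True
print(needs_review({"ungroundedDetected": False}))  # False
```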

Protected material text detection (preview)

Protected material text detection blocks copyrighted or otherwise known content by scanning AI-generated text for known text (for example, song lyrics, articles, recipes, and selected web content). Quickstart

Custom categories (rapid) API (preview)

The custom categories (rapid) API lets you create and deploy your own content filters: you define emerging harmful content patterns and then scan text and images for matches. How-to guide

Analyze text API: scans text for sexual content, violence, hate, and self-harm, with multiple severity levels.
Analyze image API: scans images for sexual content, violence, hate, and self-harm, with multiple severity levels.
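Both analyze APIs are plain REST calls. The sketch below builds (but does not send) an Analyze text request; the path, `api-version=2023-10-01`, header name, and `outputType` values follow the GA REST API as I understand it, so verify them against the current documentation before use.

```python
import json
from urllib.request import Request

def build_analyze_text_request(endpoint: str, key: str, text: str) -> Request:
    """Construct an Analyze text REST request (not sent here)."""
    url = f"{endpoint}/contentsafety/text:analyze?api-version=2023-10-01"
    body = json.dumps({
        "text": text,
        "categories": ["Hate", "Sexual", "Violence", "SelfHarm"],
        "outputType": "FourSeverityLevels",  # severities 0, 2, 4, 6
    }).encode("utf-8")
    return Request(url, data=body, method="POST", headers={
        "Ocp-Apim-Subscription-Key": key,
        "Content-Type": "application/json",
    })

# Endpoint and key are placeholders for your Content Safety resource.
req = build_analyze_text_request(
    "https://<your-resource>.cognitiveservices.azure.com", "<key>", "sample text")
print(req.full_url)
```

Sending the request (e.g. with `urllib.request.urlopen`) returns the `categoriesAnalysis` array of per-category severities; the linked Python and .NET SDK samples below wrap the same call.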

Resources

https://aka.ms/genai-gateway

https://github.com/Azure-Samples/genai-gateway-apim

https://aka.ms/apim-genai-Lza

https://aka.ms/azai

https://aka.ms/SecuringAI/Build

https://aka.ms/mdc4AI/RSA2024

https://aka.ms/purviewai/developerblog

https://github.com/Azure/PyRIT

azure-sdk-for-python/sdk/contentsafety/azure-ai-contentsafety/samples at main · Azure/azure-sdk-for-python (github.com)

azure-sdk-for-net/sdk/contentsafety/Azure.AI.ContentSafety/samples at main · Azure/azure-sdk-for-net (github.com)

Microsoft Threat Modeling Tool overview - Azure | Microsoft Learn

Configure GitHub Advanced Security for Azure DevOps features - Azure Repos | Microsoft Learn

Enterprise AppSec with GitHub Advanced Security

What is Azure AI Content Safety? - Azure AI services | Microsoft Learn

 
