
azure-ai-contentsafety-py

Azure AI Content Safety SDK for Python. Use it to detect harmful content in text and images, with multi-severity classification.


Azure AI Content Safety SDK for Python

Detect harmful user-generated and AI-generated content in applications.

Installation

pip install azure-ai-contentsafety

Environment Variables

CONTENT_SAFETY_ENDPOINT=https://<resource>.cognitiveservices.azure.com
CONTENT_SAFETY_KEY=<your-api-key>

Authentication

API Key

from azure.ai.contentsafety import ContentSafetyClient
from azure.core.credentials import AzureKeyCredential
import os

client = ContentSafetyClient(
    endpoint=os.environ["CONTENT_SAFETY_ENDPOINT"],
    credential=AzureKeyCredential(os.environ["CONTENT_SAFETY_KEY"])
)

Entra ID

from azure.ai.contentsafety import ContentSafetyClient
from azure.identity import DefaultAzureCredential
import os

client = ContentSafetyClient(
    endpoint=os.environ["CONTENT_SAFETY_ENDPOINT"],
    credential=DefaultAzureCredential()
)

Analyze Text

from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions, TextCategory
from azure.core.credentials import AzureKeyCredential
import os

endpoint = os.environ["CONTENT_SAFETY_ENDPOINT"]
key = os.environ["CONTENT_SAFETY_KEY"]

client = ContentSafetyClient(endpoint, AzureKeyCredential(key))

request = AnalyzeTextOptions(text="Your text content to analyze")
response = client.analyze_text(request)

# Check each category
for category in [TextCategory.HATE, TextCategory.SELF_HARM, 
                 TextCategory.SEXUAL, TextCategory.VIOLENCE]:
    result = next((r for r in response.categories_analysis 
                   if r.category == category), None)
    if result:
        print(f"{category}: severity {result.severity}")

Analyze Image

from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeImageOptions, ImageData
from azure.core.credentials import AzureKeyCredential
import os

endpoint = os.environ["CONTENT_SAFETY_ENDPOINT"]
key = os.environ["CONTENT_SAFETY_KEY"]

client = ContentSafetyClient(endpoint, AzureKeyCredential(key))

# From file: pass the raw bytes; the SDK handles base64 encoding
with open("image.jpg", "rb") as f:
    image_bytes = f.read()

request = AnalyzeImageOptions(
    image=ImageData(content=image_bytes)
)

response = client.analyze_image(request)

for result in response.categories_analysis:
    print(f"{result.category}: severity {result.severity}")

Image from URL

from azure.ai.contentsafety.models import AnalyzeImageOptions, ImageData

# blob_url must point to an image in Azure Blob Storage
# that the Content Safety resource is authorized to read
request = AnalyzeImageOptions(
    image=ImageData(blob_url="https://<account>.blob.core.windows.net/images/image.jpg")
)

response = client.analyze_image(request)

Text Blocklist Management

Create Blocklist

from azure.ai.contentsafety import BlocklistClient
from azure.ai.contentsafety.models import TextBlocklist
from azure.core.credentials import AzureKeyCredential
import os

endpoint = os.environ["CONTENT_SAFETY_ENDPOINT"]
key = os.environ["CONTENT_SAFETY_KEY"]

blocklist_client = BlocklistClient(endpoint, AzureKeyCredential(key))

blocklist = TextBlocklist(
    blocklist_name="my-blocklist",
    description="Custom terms to block"
)

result = blocklist_client.create_or_update_text_blocklist(
    blocklist_name="my-blocklist",
    options=blocklist
)

Add Block Items

from azure.ai.contentsafety.models import AddOrUpdateTextBlocklistItemsOptions, TextBlocklistItem

items = AddOrUpdateTextBlocklistItemsOptions(
    blocklist_items=[
        TextBlocklistItem(text="blocked-term-1"),
        TextBlocklistItem(text="blocked-term-2")
    ]
)

result = blocklist_client.add_or_update_blocklist_items(
    blocklist_name="my-blocklist",
    options=items
)
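
To confirm what a blocklist contains, you can page through its items with the same blocklist_client (a minimal sketch; list_text_blocklist_items returns a pageable of TextBlocklistItem):

# List the items currently in the blocklist
for item in blocklist_client.list_text_blocklist_items(blocklist_name="my-blocklist"):
    print(f"{item.blocklist_item_id}: {item.text}")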

Analyze with Blocklist

from azure.ai.contentsafety.models import AnalyzeTextOptions

request = AnalyzeTextOptions(
    text="Text containing blocked-term-1",
    blocklist_names=["my-blocklist"],
    halt_on_blocklist_hit=True
)

response = client.analyze_text(request)

if response.blocklists_match:
    for match in response.blocklists_match:
        print(f"Blocked: {match.blocklist_item_text}")

Severity Levels

Text analysis returns 4 severity levels (0, 2, 4, 6) by default. For 8 levels (0-7):

from azure.ai.contentsafety.models import AnalyzeTextOptions, AnalyzeTextOutputType

request = AnalyzeTextOptions(
    text="Your text",
    output_type=AnalyzeTextOutputType.EIGHT_SEVERITY_LEVELS
)

Harm Categories

Category    Description
Hate        Attacks based on identity (race, religion, gender, etc.)
Sexual      Sexual content, relationships, anatomy
Violence    Physical harm, weapons, injury
SelfHarm    Self-injury, suicide, eating disorders

Severity Scale

Level   Text Range   Image Range   Meaning
0       Safe         Safe          No harmful content
2       Low          Low           Mild references
4       Medium       Medium        Moderate content
6       High         High          Severe content
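
As a quick illustration of the scale, a minimal helper for labeling results. The severity_label name and the exact-match bucketing are my own, assuming the default four-level output where severities are exactly 0, 2, 4, or 6:

def severity_label(severity: int) -> str:
    """Map a default-output severity value to the labels in the table above."""
    if severity == 0:
        return "Safe"
    if severity == 2:
        return "Low"
    if severity == 4:
        return "Medium"
    return "High"

print(severity_label(4))  # Medium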

Client Types

Client                 Purpose
ContentSafetyClient    Analyze text and images
BlocklistClient        Manage custom blocklists

Best Practices

  1. Use blocklists for domain-specific terms
  2. Set severity thresholds appropriate for your use case (see the sketch after this list)
  3. Handle multiple categories — content can be harmful in multiple ways
  4. Use halt_on_blocklist_hit for immediate rejection
  5. Log analysis results for audit and improvement
  6. Consider 8-severity mode for finer-grained control
  7. Pre-moderate AI outputs before showing to users
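
Tying several of these together, a hedged sketch of a text-moderation gate. The THRESHOLDS values and the moderate_text helper are illustrative assumptions, not SDK names, and the sketch assumes the "my-blocklist" blocklist created above:

import os
from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions
from azure.core.credentials import AzureKeyCredential

client = ContentSafetyClient(
    os.environ["CONTENT_SAFETY_ENDPOINT"],
    AzureKeyCredential(os.environ["CONTENT_SAFETY_KEY"])
)

# Hypothetical per-category thresholds; tune these for your use case
THRESHOLDS = {"Hate": 2, "SelfHarm": 2, "Sexual": 4, "Violence": 4}

def moderate_text(text: str) -> bool:
    """Return True if the text should be rejected (hypothetical policy)."""
    request = AnalyzeTextOptions(
        text=text,
        blocklist_names=["my-blocklist"],
        halt_on_blocklist_hit=True
    )
    response = client.analyze_text(request)

    # Immediate rejection on any blocklist hit
    if response.blocklists_match:
        return True

    # Reject when any category meets or exceeds its threshold
    for result in response.categories_analysis:
        threshold = THRESHOLDS.get(result.category, 4)
        if result.severity is not None and result.severity >= threshold:
            return True
    return False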

When to Use

Use this skill when the task involves detecting or moderating harmful user-generated or AI-generated content in text or images with the Azure AI Content Safety service.

Limitations

  • Use this skill only when the task clearly matches the scope described above.
  • Do not treat the output as a substitute for environment-specific validation, testing, or expert review.
  • Stop and ask for clarification if required inputs, permissions, safety boundaries, or success criteria are missing.