Risk Triage

Modified on Tue, 14 Apr at 4:28 PM

Overview 

Risk Triage enables organizations to perform a quick assessment of potential risks associated with each AI system in their inventory. 

It helps administrators identify whether further privacy or risk assessments are required based on system usage, data sensitivity, and operational impact. 

This functionality is available within each AI System record under: 

Risk Assessment → Risk Triage   

Purpose 

Risk Triage provides a streamlined method to evaluate key risk factors early in the governance process. 

By completing a short questionnaire, administrators can determine whether an AI system presents privacy, ethical, or operational risks that require additional review. 

This helps ensure that AI systems handling personal data, supporting critical operations, or operating in higher-impact domains are consistently identified for oversight.   

Key Features 

Quick Risk Assessment 

The Risk Triage form presents a concise set of indicators designed to identify high-level risk characteristics such as data sensitivity, user exposure, and business criticality. 

Users complete a short questionnaire to determine whether an AI system: 

  • processes personal or sensitive data 
  • serves external users 
  • supports functions important to business continuity 

Based on responses, the system displays contextual guidance including: 

  • Privacy Risk Assessment Necessary 
  • Additional Risk Assessment Necessary 

These prompts help identify when further detailed assessments are required.   
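The questionnaire-to-guidance logic described above can be sketched as a simple rule check. The answer keys and the mapping below are illustrative assumptions, not the product's actual field names or rules:

```python
# Hypothetical sketch of the Risk Triage logic: questionnaire answers
# map to the contextual guidance prompts listed above. The dictionary
# keys are assumed names, not the product's real schema.

def triage(answers: dict) -> list[str]:
    """Return the guidance prompts implied by questionnaire answers."""
    prompts = []
    # Personal or sensitive data handling triggers the privacy prompt.
    if answers.get("processes_personal_data"):
        prompts.append("Privacy Risk Assessment Necessary")
    # External exposure or business criticality triggers further review.
    if answers.get("serves_external_users") or answers.get("business_critical"):
        prompts.append("Additional Risk Assessment Necessary")
    return prompts

# A system handling personal data and supporting critical operations
# would surface both prompts.
print(triage({"processes_personal_data": True, "business_critical": True}))
```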

AI Use Categories 

Risk Triage includes an AI Use checklist to capture how AI is applied. 

Users can select one or more categories describing the system’s purpose across domains such as biometric analysis, eligibility decisions, critical infrastructure, healthcare, law enforcement, or generative media. 

Examples include: 

  • Emotion recognition 
  • Biometric data processing 
  • Employment decisions 
  • Customer eligibility or access 
  • Credit or insurance underwriting 
  • Justice, law enforcement, or immigration use 
  • Manipulative or deceptive UX 
  • Synthetic media without labels 
  • Critical infrastructure control 
  • Government automated decisions 

Selecting applicable categories supports classification into regulated or higher-risk domains.   
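One way to picture how selected categories support classification is a membership check against a high-risk set. The set below simply reuses the example categories above; which categories actually count as regulated or higher-risk is an assumption, not product behavior:

```python
# Illustrative only: flag a system as higher-risk when any selected
# AI Use category falls in an assumed regulated/high-risk set.

HIGH_RISK_CATEGORIES = {
    "Emotion recognition",
    "Biometric data processing",
    "Employment decisions",
    "Credit or insurance underwriting",
    "Justice, law enforcement, or immigration use",
    "Critical infrastructure control",
}

def is_higher_risk(selected: set[str]) -> bool:
    # Non-empty intersection means at least one high-risk category applies.
    return bool(selected & HIGH_RISK_CATEGORIES)

print(is_higher_risk({"Emotion recognition", "Synthetic media without labels"}))  # True
```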

Connection Management 

Each integration type includes configuration fields such as: 

  • API keys 
  • access tokens 
  • tenant IDs 

Connections can be created, deactivated, or removed as required.   
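The configuration fields above suggest a connection record shaped roughly like the following. Field names, the placeholder values, and the `active` flag are assumptions for illustration, not the product's actual data model:

```python
# Hypothetical shape of a connection record, based on the
# configuration fields listed above. All names are assumed.

connection = {
    "integration_type": "example-provider",  # hypothetical identifier
    "api_key": "<API_KEY>",
    "access_token": "<ACCESS_TOKEN>",
    "tenant_id": "<TENANT_ID>",
    "active": True,
}

# Deactivating (rather than removing) a connection preserves its
# configuration so it can be re-enabled later.
connection["active"] = False
```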

Notes 

  • Administrators can manage which AI Use categories appear in the checklist under Settings → AI Use Categories. 
  • Only enabled categories are available when categorizing AI systems. 
  • All categories are enabled by default. 
  • Once triage is complete, the system provides visual feedback indicating whether additional privacy or risk assessments are required. 
  • As a best practice, complete Risk Triage for each new or updated AI system before approval or deployment to maintain a consistent risk baseline across the AI inventory. 
