Overview
User Level Risk provides visibility into employee AI usage across the organization.
Powered by the Microsoft Defender Shadow AI connector, this view helps identify employees who may be using unregistered AI services or accessing AI tools frequently.
By comparing Microsoft Defender activity with the AI Inventory, User Level Risk highlights where AI services are being used without formal registration, review, or approval.
This allows organizations to identify potential governance risks early and take appropriate action.
This functionality is available on paid plans with Microsoft Defender integration enabled.
Purpose
User Level Risk enables organizations to:
- identify employees accessing AI services that are not registered in the AI Inventory
- detect frequent AI usage that may indicate elevated governance risk
- add employees to Clarity AI if they are not already registered
- take early action to manage AI usage before risks increase
This view helps ensure AI usage is transparent, traceable, and aligned with governance requirements.
How It Works
User Level Risk is powered by Microsoft Defender integration.
AI usage activity detected by Microsoft Defender is securely ingested into Clarity AI and automatically compared against:
- AI Inventory
- monitored AI domain list
This comparison determines whether accessed AI services are registered, approved, or unregistered.
Based on this analysis, a corresponding user risk level is assigned.
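The comparison step above can be sketched in code. This is a minimal illustrative sketch, not the actual Clarity AI implementation: the function name, the set-based inventory representation, and the example domains are all assumptions.

```python
def classify_domain(domain: str, registered: set[str], approved: set[str]) -> str:
    """Return the governance status of a detected AI domain.

    registered: domains with an exact match in the AI Inventory.
    approved: the subset of registered domains that have been approved.
    """
    if domain in approved:
        return "approved"
    if domain in registered:
        return "registered"
    return "unregistered"


# Illustrative data (hypothetical domains, not real services):
registered = {"chat.example-ai.com", "assistant.example.io"}
approved = {"assistant.example.io"}

print(classify_domain("assistant.example.io", registered, approved))  # approved
print(classify_domain("unknown-ai.app", registered, approved))        # unregistered
```

Each domain detected by Microsoft Defender is resolved to one of these three statuses, which then feeds into the user risk calculation described below.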
Main List View
The main User Level Risk page displays employees identified through Microsoft Defender AI activity signals.
For each employee, the following information is displayed:
- Risk Level – High, Medium, or Low based on AI usage behaviour
- Unregistered AI – number of AI domains accessed that are not registered in the AI Inventory
If an identified employee has not yet been added to Clarity AI, they can be added directly from this view.
Risk levels are automatically calculated based on usage frequency and registration status.
Risk Levels Explained
Risk levels are determined by how often AI services are accessed and whether those services are registered in the AI Inventory.
High Risk
- any single AI service accessed more than 6 times, or
- 5 or more unregistered AI domains accessed
Medium Risk
- 3 to 4 unregistered AI domains accessed
Low Risk
- 1 to 2 unregistered AI domains accessed
An AI domain is considered unregistered when there is no exact match in the AI Inventory.
Domains marked as “Possible Match” are treated as unregistered until formally registered.
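The threshold rules above can be expressed as a short classifier. This is a hedged sketch based only on the rules stated in this article; the function name, input shapes, and the fallback value for users with no unregistered domains or heavy usage are assumptions.

```python
def risk_level(access_counts: dict[str, int], unregistered: set[str]) -> str:
    """Assign a risk level from per-domain access counts.

    access_counts: detected AI domain -> number of accesses.
    unregistered: domains with no exact AI Inventory match
                  ("Possible Match" entries are treated as unregistered).
    """
    # High: any single AI service accessed more than 6 times
    if any(count > 6 for count in access_counts.values()):
        return "High"
    n = len(unregistered)
    if n >= 5:          # High: 5 or more unregistered domains
        return "High"
    if n >= 3:          # Medium: 3 to 4 unregistered domains
        return "Medium"
    if n >= 1:          # Low: 1 to 2 unregistered domains
        return "Low"
    return "None"       # assumed fallback: no unregistered usage detected


print(risk_level({"chat.example-ai.com": 7}, set()))                    # High
print(risk_level({"a.ai": 2, "b.ai": 1, "c.ai": 1}, {"a.ai", "b.ai", "c.ai"}))  # Medium
```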
User Detail View
Selecting an employee opens a detailed view of their AI usage activity.
From this view, users can:
- see which AI domains were accessed
- review the usage frequency of each AI service
- identify whether each domain corresponds to a registered AI system
- filter results by Approval Status to focus on approved, pending, or unregistered services
This detailed visibility helps investigate usage patterns and determine whether governance actions are required.
Notes
- User Level Risk visibility depends on an active Microsoft Defender connection
- only AI services detected through Microsoft Defender activity appear in this view
- “Possible Match” entries must be formally registered to be considered governed
- this page helps identify Shadow AI usage and strengthen governance oversight