AI agents are here, and they are reshaping how we work. These autonomous tools can handle repetitive tasks, analyse data in real time, and drive efficiency.
For many leaders, this brings new opportunities but also creates a healthy dose of concern. How do you embrace the benefits of AI agents without exposing your organisation to unnecessary risk?
A recent Microsoft report, “Administering and Governing Agents,” offers a clear framework.
It provides practical steps to build a secure and effective governance strategy. This article breaks down the key insights from the report.
Who Creates Agents?
Microsoft identifies three main groups of people who will create agents, each requiring a different level of oversight:
- End Users: Individuals who can use intuitive tools like Agent Builder in Copilot and SharePoint to create basic agents for their daily tasks. These agents operate within existing user permissions, making them a low-risk starting point.
- Makers: Tech-savvy employees who use tools like Copilot Studio to build more advanced, automated solutions. They can create agents that respond to triggers and handle complex workflows.
- Developers: Technical experts use professional development tools, such as VS Code and Azure AI Foundry, to build fully customised, powerful agents that can be integrated across the organisation.
AI agent governance isn’t a one-size-fits-all approach. It requires a layered strategy that manages the tools people use, the content agents can access, and the lifecycle of these agents.
Governance Toolkit
Microsoft’s framework applies robust controls at every level using integrated tools within the Microsoft 365 ecosystem.
1. Tool Controls: Managing What People Can Build
The first step is to control what tools are used to create agents.
The Microsoft 365 Admin Centre and Power Platform Admin Centre are your central hubs.
Here, you can define who in your organisation gets to build agents and what features they can use.
For instance, you can ensure that agents created by End Users remain simple, while giving Makers in your business added flexibility to innovate within safe boundaries using Copilot Studio.
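One way to picture this layering (the tier and capability names below are invented for illustration; the actual controls are configured through the admin centres, not in code) is as a capability map per builder tier:

```python
# Illustrative model of tiered agent-builder permissions.
# Tier and capability names are hypothetical; in practice these controls
# live in the Microsoft 365 and Power Platform Admin Centres.

TIER_CAPABILITIES = {
    "end_user": {"agent_builder", "sharepoint_agents"},
    "maker": {"agent_builder", "sharepoint_agents", "copilot_studio", "triggers"},
    "developer": {"copilot_studio", "triggers", "custom_code", "external_apis"},
}

def can_use(tier: str, capability: str) -> bool:
    """Return True if the given builder tier is allowed a capability."""
    return capability in TIER_CAPABILITIES.get(tier, set())

# An End User can build simple agents but cannot register automated triggers.
assert can_use("end_user", "agent_builder")
assert not can_use("end_user", "triggers")
```

The point of the model is that each tier's ceiling is set centrally, so flexibility for Makers never silently expands what End Users can build.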
2. Content Controls: Protecting Your Data
One of the biggest concerns with AI is data security. How do you ensure agents can’t access or share sensitive information?
For agents built using SharePoint, the principle is simple: agent permissions are the same as the user's. An agent can only access the data the individual is already authorised to see, ensuring that your established access rules are respected by default.
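In other words, the agent never holds permissions of its own. Here is a minimal sketch of that principle; the file ACLs, users, and groups below are invented for illustration:

```python
# Sketch of "agent permissions = user permissions".
# File ACLs and user names are invented for illustration.

FILE_ACL = {
    "q3-forecast.xlsx": {"alice", "finance-team"},
    "staff-handbook.pdf": {"everyone"},
}

def agent_can_read(user: str, user_groups: set[str], file: str) -> bool:
    """The agent may read a file only if the user it acts for already can."""
    allowed = FILE_ACL.get(file, set())
    return user in allowed or bool(allowed & (user_groups | {"everyone"}))

# Acting for a user outside finance, the agent is denied the forecast
# but can read the handbook, exactly mirroring the user's own access.
print(agent_can_read("bob", {"marketing"}, "q3-forecast.xlsx"))   # False
print(agent_can_read("bob", {"marketing"}, "staff-handbook.pdf")) # True
```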
For advanced data protection, Microsoft provides a suite of powerful tools. Microsoft Purview acts as your data guardian, and Data Loss Prevention (DLP) policies are a key feature within it.
DLP policies allow you to automatically identify, monitor, and protect sensitive information across your organisation. In the context of AI agents, this means you can:
- Block access to sensitive files: If a document has a “Highly Confidential” sensitivity label, a DLP policy can prevent an agent from processing it entirely (sketched in code below).
- Prevent data exfiltration: DLP can stop agents from sharing sensitive data in chats or other outputs, protecting against accidental or malicious leaks.
- Enforce compliance: These policies help ensure your organisation adheres to industry regulations and internal data handling standards.
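To make the first of these concrete, here is a minimal sketch of the behaviour a DLP policy enforces. The label name and document record are invented for illustration; real policies are configured in Microsoft Purview, not in application code:

```python
# Sketch of a DLP-style gate in front of an agent.
# Label names and the block list are illustrative; real DLP policies
# are defined in Microsoft Purview.

BLOCKED_LABELS = {"Highly Confidential"}

def dlp_gate(document: dict) -> dict:
    """Refuse to hand a labelled-sensitive document to the agent."""
    if document.get("sensitivity_label") in BLOCKED_LABELS:
        raise PermissionError(
            f"DLP policy blocked agent access to {document['name']!r}"
        )
    return document

doc = {"name": "merger-plan.docx", "sensitivity_label": "Highly Confidential"}
try:
    dlp_gate(doc)
except PermissionError as err:
    print(err)  # DLP policy blocked agent access to 'merger-plan.docx'
```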
3. Agent Management: Overseeing Agent Activity
Finally, you need to manage the agents themselves.
The Microsoft 365 Admin Centre provides a complete inventory of all agents in your organisation. From this central dashboard, you can monitor usage, control deployment, and manage costs.
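As a rough illustration of what that inventory view gives you (the agent records and fields below are invented; the actual data lives in the Admin Centre dashboard), a usage roll-up might look like this:

```python
# Illustrative agent inventory roll-up. Record fields are invented;
# the real inventory lives in the Microsoft 365 Admin Centre.
from collections import defaultdict

agents = [
    {"name": "Leave Request Helper", "creator_tier": "end_user", "monthly_runs": 120},
    {"name": "Invoice Triage", "creator_tier": "maker", "monthly_runs": 950},
    {"name": "Contract Analyser", "creator_tier": "developer", "monthly_runs": 4200},
]

# Usage by creator tier, useful for spotting where cost and risk concentrate.
runs_by_tier: dict[str, int] = defaultdict(int)
for agent in agents:
    runs_by_tier[agent["creator_tier"]] += agent["monthly_runs"]

for tier, runs in sorted(runs_by_tier.items()):
    print(f"{tier}: {runs} runs/month")
```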
Microsoft Sentinel provides a bird’s-eye view of agent activity for advanced security monitoring and threat response. As a cloud-native security information and event management (SIEM) platform, Sentinel allows your security teams to:
- Monitor Agent Interactions: Keep a close watch on agent activities in real time to detect suspicious behaviour.
- Detect and Respond to Threats: Use intelligent analytics to identify potential security threats, such as an unusual number of prompts containing sensitive data from a single user (see the sketch after this list).
- Automate Responses: Set up automated workflows to respond instantly to security incidents, helping to mitigate risks before they escalate.
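Sentinel analytics rules are normally written as KQL queries over audit logs; as a language-neutral illustration of the detection logic in the second point above (the event shape and threshold are invented for this sketch), consider:

```python
# Sketch of a Sentinel-style detection: flag any user who sends an
# unusual number of prompts containing sensitive data. Event records
# and the threshold are invented for illustration.
from collections import Counter

events = [
    {"user": "bob", "prompt_has_sensitive_data": True},
    {"user": "bob", "prompt_has_sensitive_data": True},
    {"user": "bob", "prompt_has_sensitive_data": True},
    {"user": "carol", "prompt_has_sensitive_data": True},
]

THRESHOLD = 3  # alert when one user sends this many sensitive prompts

sensitive_counts = Counter(
    e["user"] for e in events if e["prompt_has_sensitive_data"]
)
for user, count in sensitive_counts.items():
    if count >= THRESHOLD:
        print(f"ALERT: {user} sent {count} prompts containing sensitive data")
```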
Getting Started: A Simple, Three-Phase Approach
The report recommends a phased approach to get started with agent governance:
- Phase 1: Form a Team. Create a small “agent adoption champion” team within IT. Provide them with the tools to experiment with building agents, establish best practices, and understand the governance controls firsthand.
- Phase 2: Train Employees. Start training a wider group on how to build simple, effective agents. Establish a Centre of Excellence led by your champion team to provide guidance and ensure consistency.
- Phase 3: Deploy and Engage. Identify key people in various departments to become agent makers. Provide them with the necessary tools, like Copilot Studio, and set up clear guidelines for sharing and usage.
By following this structured path, you can empower people to innovate while maintaining tight control over security and compliance.
Next Steps
AI agents offer a significant opportunity to save time and reduce costs. Implementing a robust governance framework addresses the risks and builds trust, allowing you to realise the potential of AI in a secure and controlled way.
To explore the technical details and tools mentioned, download the full Microsoft guide.