Link Centre - Search Engine and Internet Directory

Helping to share the web since 1996

Why Microsoft is urging caution as AI gets more autonomous

AI is rapidly moving beyond basic chatbots and into a new phase: AI agents that don't just respond, but act. And according to Microsoft, that shift comes with serious risks.


Scott Hanselman, Microsoft’s Vice President of Developer Community, recently spoke on CBS News about what happens when AI systems are given too much control over our digital environments.

What’s changing - and why it matters:

From assistants to actors
Traditional large language models generate responses when prompted. AI agents, on the other hand, are designed to perform actions - handling tasks, making choices, and interacting directly with apps, accounts, and systems.

The danger of “assertive” AI
Hanselman cautions that as these agents gain broader permissions, they may start taking steps users never clearly authorized. This “assertive” behavior could lead to unintended actions across personal or professional platforms.

Permission is the real risk
The biggest vulnerability isn’t the AI itself - it’s how much access we give it. Granting deep permissions to sensitive data, financial tools, or work systems increases the chances of mistakes, misuse, or security failures.

Rethinking digital safety
This new era demands a different approach to security. Users and organizations must define strict limits on what AI agents can do and where human approval is non-negotiable.
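The "strict limits plus human approval" idea above can be sketched as a simple default-deny permission gate. This is purely illustrative, not Microsoft's design; the action names and the function are hypothetical, assuming an agent that requests actions through a single checkpoint.

```python
# Illustrative permission gate for an AI agent (hypothetical design):
# safe actions run autonomously, sensitive actions need human approval,
# and anything unlisted is denied by default.

SAFE_ACTIONS = {"read_calendar", "draft_email"}       # allowed autonomously
SENSITIVE_ACTIONS = {"send_email", "transfer_funds"}  # human approval required

def execute_action(action: str, approved_by_human: bool = False) -> str:
    """Run an agent action only if policy allows it."""
    if action in SAFE_ACTIONS:
        return f"executed: {action}"
    if action in SENSITIVE_ACTIONS:
        if approved_by_human:
            return f"executed with approval: {action}"
        return f"blocked pending approval: {action}"
    # Default-deny: anything not explicitly listed is refused.
    return f"denied: {action}"
```

The key design choice is default-deny: the agent can only do what is explicitly granted, which directly limits the "assertive" behavior Hanselman warns about.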

The bottom line

AI agents have the potential to massively boost productivity. But without clear guardrails and careful access control, they could evolve from powerful helpers into unpredictable liabilities.
