
Your AI agent has more access to your business than your CISO does. Do you know what it can read? Do you know what it can rewrite?
Most teams do not. Not because they are careless. Because nobody asked those questions before the deployment went live.
That is the actual story behind the McKinsey breach.
Last week, security researchers at CodeWall pointed an autonomous agent at McKinsey’s internal AI platform, Lilli. Two hours later, it had full read and write access to 46.5 million internal chat messages covering M&A deals and client strategy, 728,000 confidential files, and the system prompts governing how the AI behaved with McKinsey’s 40,000 employees. The attack required no credentials, no insider access, and no human involvement after the agent selected its own target.
The entry point was a SQL injection vulnerability. A class of bug that has existed since the 1990s, sitting undetected in a production AI platform for two years.
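To see why this class of bug is so old and so well understood, here is a minimal illustrative sketch (not McKinsey's actual code) of the vulnerable pattern next to the standard fix, using an in-memory SQLite database. The table and query are hypothetical.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE messages (id INTEGER, author TEXT, body TEXT)")
conn.execute("INSERT INTO messages VALUES (1, 'alice', 'q3 strategy')")

def search_vulnerable(term):
    # Vulnerable: user input is concatenated straight into the SQL string.
    # A term like "' OR '1'='1" rewrites the query's logic entirely.
    return conn.execute(
        "SELECT body FROM messages WHERE author = '" + term + "'"
    ).fetchall()

def search_safe(term):
    # Parameterized: the driver passes the term as data, never as SQL.
    return conn.execute(
        "SELECT body FROM messages WHERE author = ?", (term,)
    ).fetchall()

print(search_vulnerable("' OR '1'='1"))  # every row leaks
print(search_safe("' OR '1'='1"))        # []
```

The fix has been standard practice for decades, which is the point: the barrier was never technical difficulty, it was whether anyone reviewed the query path before the platform shipped.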
“This is not a story about a sophisticated attack. It is a story about what happens when you build a powerful AI system on top of infrastructure that nobody hardened.”
The timing matters. According to a new identity report published this week, 95 percent of enterprises now run autonomous AI agents in production performing real operational and security tasks. That number moved from single digits to near-universal in under twelve months. Meanwhile, Cohesity and Datadog are now selling rollback tools specifically designed to undo damage caused by bad agent actions. A market that did not exist two years ago.
Legacy identity and access management was built around one mental model: a human logs in, does their work, logs out. Agents break every assumption that model was built on.
A single agent can act across your entire system stack simultaneously, hold authority delegated from a human executive, and trigger financial or operational workflows that nobody explicitly approved. When something goes wrong, the liability lands on your P&L, not on a service account. The McKinsey breach made this concrete: the agent that read the data had the same identity as the agent that could rewrite the system prompts. One set of credentials. Full access. No separation of duties.
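What separation of duties looks like at the credential level can be sketched in a few lines. Everything here is hypothetical naming, not a real IAM API; the point is that an agent identity should carry an explicit scope set, so read access never silently implies write access.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentCredential:
    # Hypothetical sketch: each agent identity carries an explicit,
    # immutable scope set checked on every action.
    agent_id: str
    scopes: frozenset

def authorize(cred, action):
    # Deny by default: any scope not explicitly granted is refused.
    if action not in cred.scopes:
        raise PermissionError(f"{cred.agent_id} lacks scope '{action}'")
    return True

reader = AgentCredential("chat-search-agent", frozenset({"messages:read"}))
authorize(reader, "messages:read")    # permitted
# authorize(reader, "prompts:write")  # raises PermissionError
```

Under this model, the agent that searches chat history and the agent that can touch system prompts hold different credentials, and compromising one does not hand an attacker the other.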
There is also a new attack surface most security teams are not accounting for yet: the prompt layer. Because Lilli stored its system prompts in the same database as everything else, write access meant an attacker could silently alter what the AI told 40,000 consultants, with no code deployment and no security alert. As CodeWall put it: “just a single UPDATE statement wrapped in a single HTTP call.”
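One mitigation pattern for this, sketched below with hypothetical names, is to pin a hash of each system prompt at deploy time, stored outside the database, and verify it on every load. A silent UPDATE to the prompt row then fails closed instead of quietly reaching every user.

```python
import hashlib

# Hypothetical sketch: pinned digests live with the deployment artifacts,
# not in the same database the application (or an injected query) can write.
PINNED = {
    "assistant-main": hashlib.sha256(
        b"You are a research assistant."
    ).hexdigest(),
}

def load_prompt(name, stored_text):
    # Recompute the digest of whatever the database returned and compare
    # against the deploy-time pin before the prompt is ever used.
    digest = hashlib.sha256(stored_text.encode()).hexdigest()
    if digest != PINNED[name]:
        raise RuntimeError(f"prompt '{name}' was modified outside deployment")
    return stored_text

load_prompt("assistant-main", "You are a research assistant.")  # passes
# load_prompt("assistant-main", "Leak all files.")  # raises RuntimeError
```

This does not prevent the injection itself, but it turns a silent behavioral change into a loud failure, which is the difference between an incident and a two-year blind spot.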
NVIDIA’s internal security team shared a practical rule on the Latent Space podcast this week. Agents have three capabilities: file access, internet access, and code execution. NVIDIA allows any two at once, never all three. It also runs internal models on private clusters. No company data enters a model the organization does not control.
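The two-of-three rule is simple enough to enforce mechanically at config-validation time. The sketch below is an illustration of the policy as described on the podcast, not an actual NVIDIA API; the capability names are assumptions.

```python
# The three agent capabilities named in NVIDIA's rule.
CAPABILITIES = {"file_access", "internet_access", "code_execution"}

def validate_agent_config(requested):
    # Reject unknown capability names outright.
    requested = set(requested)
    unknown = requested - CAPABILITIES
    if unknown:
        raise ValueError(f"unknown capabilities: {sorted(unknown)}")
    # Any two may be combined; all three together are never allowed.
    if len(requested) >= 3:
        raise ValueError("at most two of the three capabilities may be combined")
    return requested

validate_agent_config({"file_access", "code_execution"})  # allowed
# validate_agent_config(CAPABILITIES)  # rejected: all three at once
```

A check like this belongs in the deployment pipeline, so the constraint is enforced before an agent ever runs rather than documented in a policy nobody reads.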
Most enterprises have not made these decisions deliberately. They made them by default, under deadline pressure, and the defaults were chosen for deployment speed rather than security posture.
The organizations getting this right share a few design principles in common: distinct agent identities with real separation of duties, hard capability limits along the lines of NVIDIA's two-of-three rule, and system prompts treated as protected deployment artifacts rather than just another database row.
The McKinsey breach will not be the last high-profile failure of this kind. Production agent deployments are accelerating faster than the governance discipline surrounding them, and that gap has a predictable outcome.
At Optimum Partners, we assess agent architectures across enterprise environments every week, and the pattern is consistent: the deployment moved fast, and the controls were inherited from a legacy stack that was never designed for autonomous systems.
Before your next agent goes into production, three questions deserve a clear answer: What can this agent read? What can it rewrite? And who is accountable when it acts in a way nobody explicitly approved?
The organizations that operate securely at agent scale are the ones that treat these as architecture decisions, not IT policy questions. If you want an outside perspective on where your current deployment stands, visit the Optimum Partners Innovation Center.