
Two versions of your business. Your AI got the wrong one.


Your AI pilot is stalled. It worked in the demo. It breaks in production. Everyone in the room knows the AI is not the problem.

There are two versions of your business. The one in your documents, your policies, your training materials, your knowledge base. And the one your senior people actually run, every day, with the exceptions, workarounds, judgment calls, and unwritten rules that decide what actually happens.

Your AI got the first one. It needed the second.

This gap is the variable that decides whether the investment produced anything. Not the vendor. Not the platform. Not the compute. Whether the AI was deployed against an account of your business that included the half nobody wrote down. Most are not. The work of capturing that other half is the prerequisite nobody quoted, nobody scoped, and nobody named when the contract was signed.

The version your AI cannot see.

Most executives think of their business as documented. There are policies, procedures, runbooks, training materials, dashboards. That is the surface. AI can act on this directly. Underneath sit three layers your AI cannot read off any document.

  1. Decision logic. The conditions under which the policy applies, gets overridden, or gets an exception carved out. The “if X happens, do Y” the policy implies but never states. Most decision logic lives as scattered comments inside workflow tools, as informal guidance shared verbally, and as the cumulative judgment of whoever has been running the workflow longest.
  2. Exception logic. The things the senior person catches because they have been here twelve years. The covenant that never got refiled in 2019, because the original deal team verbally agreed it would be. The procedure code that maps to a different reimbursement tier under one HMO contract type and not another. The compliance flag the reviewer adds on Thursdays because the audit cycle runs Friday. None of it in the document. All of it load-bearing.
  3. Unwritten constraints. The things the company will never do, the lines nobody crosses, the cultural rules that operate as hard limits without ever being articulated. The institutional taste that decides which exceptions are tolerated and which are not.
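
What “writing down” a layer like exception logic actually means is turning it into something a system can evaluate. A minimal sketch, in Python, of one elicited exception rule once it is structured; the rule ID, field names, and loan record here are hypothetical, and a real deployment would hold far more rules and richer conditions:

```python
from dataclasses import dataclass, field

@dataclass
class LoanFile:
    """Hypothetical loan record as an agent would see it."""
    borrower: str
    covenant_on_file: bool
    notes: list[str] = field(default_factory=list)

# One elicited exception rule, written down as data instead of
# living only in a senior credit officer's head.
EXCEPTION_RULES = [
    {
        "id": "COV-2019-REFILE",  # hypothetical rule ID
        "applies": lambda f: (not f.covenant_on_file
                              and any("verbally agreed" in n for n in f.notes)),
        "action": "treat_as_compliant",
        "rationale": "Deal team verbally agreed the covenant package "
                     "would be refiled in 2019; it never was.",
    },
]

def review(loan: LoanFile) -> str:
    """Apply the documented policy first, then the captured exceptions."""
    if loan.covenant_on_file:
        return "compliant"            # the documented policy path
    for rule in EXCEPTION_RULES:
        if rule["applies"](loan):
            return rule["action"]     # the formerly unwritten path
    return "flag_for_review"          # what the pilot falls back to

loan = LoanFile("Acme Corp", covenant_on_file=False,
                notes=["Original deal team verbally agreed it would be refiled."])
print(review(loan))  # -> treat_as_compliant
```

The point of the sketch is not the engine, which is trivial, but the elicitation: each entry in that list is an interview with a senior person, written down and validated against real cases.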

A system trained on the policy can act on the policy. It cannot act on the other three layers, because the other three layers were never written down for it. When the AI meets a real workflow, it acts on the document and breaks against the reality. The pilot stalls. The diagnosis says the system needs more data, or a different vendor, or better prompts. The actual problem is that the company never made the other three layers readable to anything except the people who already know them.

The decision your AI cannot see.

A credit officer at a regional bank opens a borrower’s file and adds a note. Covenant package never got refiled in 2019. Original deal team verbally agreed it would. Treat as compliant. That note is the reason the loan got booked. It is not in any underwriting policy. It is not in any system the bank’s loan agent was given access to. The credit officer has done versions of this work hundreds of times in the years she has been at the bank. None of those decisions are in any document.

The AI agent the bank deployed for loan origination performs cleanly on the documented portion of the policy and stalls on this file. It flags the missing covenant as risk. It cannot see the note. It does not know the deal team’s verbal agreement. The credit officer reviews the flag, sighs, and overrides it. She has now done that twenty times this week.

The bank’s pilot dashboard shows the AI processing applications faster. The credit officer’s calendar shows her time has not gone down. Both numbers are correct. They describe two different versions of her job.

This is not a model problem.

The MIT NANDA initiative published the number every CFO has now read. Across roughly 300 enterprise AI deployments, despite an estimated $30 to $40 billion in spend, 95% produced no measurable P&L impact. The diagnosis was not capability. Not infrastructure. Not talent. The systems failed to adapt to the workflows they were dropped into.

Independent analysis of fourth-quarter earnings reached the same place from a different angle. No meaningful relationship between AI adoption and productivity at the economy-wide level. A 30% median gain in the specific, localized use cases at companies that bothered to quantify them. Only 1% of S&P 500 management teams quantified impact at the earnings level.

The shape across the data is the same. The AI is not the variable. The variable is whether the version of the business the AI was deployed against actually ran the business.

The work no vendor quoted you.

The economics of enterprise AI sales reward the destination. The pitch sells the agent, the platform, the dashboard. The work that has to happen first lives outside the vendor’s incentive structure.

That work is unglamorous and operational. Three things in particular.

  1. The exception logic gets elicited. Senior people sit in structured sessions; the conditions under which policy gets overridden are extracted, written down, and validated against actual cases.
  2. The semantic structure gets built. Customer in the CRM, customer in billing, customer in the regulated reporting feed: same word, three different definitions, no mapping. Until the mapping exists, no agent can reason coherently across the systems.
  3. The historical decisions get captured. Not as anecdotes. As structured rules an AI can act on.

This is interview work, audit work, anthropology work. It is hard to scope cleanly. It is hard to bill against a clean SOW. It does not demo well. So the proposal lands without it. The pilot launches without it. The senior people who hold the exception logic in their heads are not on the project plan. The AI meets the documented half of the business, performs as expected, and stalls when the workflow forks into territory the document did not cover.

The 95% is not a model capability problem. It is the predictable result of skipping the layer of work that nobody told the buyer they were skipping.

What it looks like when the prerequisite is the project.

The proof shows up most clearly in companies that built the prerequisite years before the agents arrived. Stripe runs roughly 1,300 autonomous pull requests through its internal coding agents every week. The reason is not the AI they chose. It is the documentation, the codebase rules, the internal tool servers, and the years of developer infrastructure that made the codebase legible to anything other than the engineers who already knew it. The agents inherited a structured environment.

Most enterprise codebases, and most of the businesses they sit inside, did not.

When the prerequisite gets treated as the project, three things change. The AI is acting on a representation of the business that includes the exceptions, not just the policies. The semantic structure means the AI can reason coherently across systems instead of guessing. The environment is sovereign, which means the institutional knowledge captured in this work stays inside the company’s own boundary, never feeds a public model’s training signal, and never leaves the building.

That last point matters more in 2026 than it did a year ago. The institutional knowledge captured in this work is the company’s strategic asset. Once it is structured for AI consumption it becomes the most portable, the most valuable, and the most dangerous data the company owns. 

Where it lives decides who can use it, who can leave with it, and who can compete with it.

What this means for the next investment on your desk.

The AI you bought is not what determines whether your investment produced anything. That is decided by the work that comes before the AI. That work has been there since long before the agents showed up. Skipping it has been the default because no vendor was selling it.

Building AI on top of a business that was never made readable to itself is the most expensive way to discover the limits of your own documentation. Building the readability first is what changes the answer.

That is what Mustang was built for. If the question on your desk this quarter is why the last AI investment did not move a single business metric, the next conversation starts there.
