
Enterprise AI Governance: Part 1

Posted by Alistair Fulton
April 14, 2026 | 7 min read
AI is changing how industrial software is built

 

The Hidden Liability in Your Software Stack

The Moment We Are Already In 

It rarely starts as a strategic decision. More often, it starts as a small improvement. A developer pastes a prompt. Code appears. It works. It gets committed. 

Across enterprise and industrial environments, this pattern is now routine. 

AI is no longer a tool used occasionally at the edges of software development. For many teams, it has become the default starting point. Code is suggested, accepted, modified, and shipped as part of everyday workflows. 

And because it works, it spreads. 

Releases accelerate. Teams stay lean. Ideas move from concept to production faster than most organizations ever planned for. 

The gains are real. So is the trade-off. 

We are now deploying software at scale that cannot be fully explained, traced, or defended once it is in production. 

For industrial CTOs, CISOs, and operations leaders, this is not an abstract concern. It is a governance and operational risk that emerges precisely where tolerance for failure is lowest. 

 

This Is Not Just More Code

At first, the situation looks familiar. 

More code enters the system. Security signals increase. The surface area expands. That has always been the case. 

What has changed is the assumption underneath it. 

Enterprise software has long relied on the idea that authorship is knowable and intent is traceable. When a human writes code, there is context. Decisions can be questioned. Trade-offs can be explained. Responsibility is visible. 

 

AI Alters That Relationship

When code is generated, adapted, and merged at speed, that chain can break. 

Development velocity has increased dramatically, but the structures that preserve accountability have not kept pace. 

That gap is no longer theoretical. It is showing up inside real systems.

 

Where This Becomes Real

In consumer software, these gaps can sometimes be absorbed. 

In industrial environments, they cannot. 

  • In energy systems, software decisions affect grid stability and outage response.

  • In manufacturing, they influence production lines, quality control, and worker safety.

  • In robotics, they determine how machines move and interact with humans.

  • In supply chains, they increasingly drive autonomous routing and inventory decisions. 

When something fails in these environments, the consequences are immediate. 

  • Physical harm.

  • Downtime.

  • Financial loss.

  • Regulatory scrutiny.

We have seen systems where AI-generated code passed functional testing, behaved as expected under normal conditions, and still failed under edge scenarios that no one could clearly explain afterward. 

At that point, the question changes. 

It is no longer just: 
“Does the code work?” 

It becomes: 
“Can you prove it was built responsibly?” 

Today, most organizations struggle to answer that question with confidence. 

 

A Pattern That Keeps Repeating

The pattern is familiar. Teams move faster. AI-assisted development is adopted. Software ships. 

Everything works, until someone asks a few simple questions. 

  • Where did this come from?

  • Why was it built this way?

  • What happens under failure conditions?

  • Are we exposed from a licensing or regulatory perspective? 

The uncertainty that follows is rarely the result of carelessness. 

It is structural. 

The systems that teams now rely on no longer preserve that chain of understanding. The software works. But it cannot be reconstructed or defended with confidence. 

In industrial environments, that is where risk truly begins. 



When AI becomes the source of decisions, the questions that once anchored accountability are no longer reliably connected to human intent.
 
Closing Thought 

For years, the primary challenge in software was building systems that worked. 

That is no longer the constraint. 

AI has made software faster to create. It has also made it easier to lose control over how critical systems are built. That trade-off is already in production. And it is changing the rules. 

In industrial environments, the question is no longer whether systems work. 

It is whether they can be trusted, explained, and defended when something goes wrong. Most cannot. 

That is where the next layer of control begins to emerge. 
And it will define what can be deployed going forward. 

 

Part 2: 'The New Control Layer for Industrial AI', to follow.



Momenta is the leading Industrial Impact® venture capital firm, accelerating innovators across energy, manufacturing, smart spaces, and the supply chain. Our team of deep industry operators has helped scale industry leaders and innovators to improve critical industries, the environment, and people's quality of life for over a decade. PitchBook named Momenta among the world's top ten digital industry venture funds for both 2023 and 2024 in its Global Manager Performance Score League Tables, one of just two European-headquartered VCs to achieve a Top 10 ranking. For more information, please visit: momenta.vc