
Enterprise AI Governance: Part 2

Posted by Alistair Fulton
April 16, 2026 | 6 min read

 

The New Control Layer for Industrial AI 

The Reality We Are Moving Into 

In Part 1, we outlined a shift that is already visible in real environments: software is being deployed faster than it can be fully explained, traced, or defended. 

There is no realistic path back. 

Slowing down AI adoption is not an option. The benefits are clear, and the pressure to move faster is real. 

Across the organizations we work with, AI-assisted development is already embedded in daily workflows. Often, it is happening faster than the governance structures meant to oversee it can adapt. 

This tension is not hypothetical. 

AI is being adopted fastest in the very systems where failure is least tolerable. 

What Begins To Change

The response to this shift is not to slow down AI adoption.
It is to make its use visible, accountable, and defensible. 

What we see emerging is not a single tool or policy, but a control layer around AI-assisted development: one that restores the questions that matter without sacrificing the speed teams now depend on.

This layer does not replace AI.
It constrains it.

 

It makes clear:

  • who initiated a decision
  • what was generated or modified
  • when it entered a system
  • how it was validated
  • and why it was accepted

Without this layer, those answers are fragmented or lost.
With it, they become observable again.
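To make the idea concrete, the five questions above can be pictured as fields in a provenance record attached to each AI-assisted change. The sketch below is purely illustrative: the names and schema are our assumptions for this example, not a reference to any specific product or standard.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class ProvenanceRecord:
    """One auditable entry per AI-assisted change (illustrative schema)."""
    who: str            # who initiated the decision
    what: str           # what was generated or modified
    when: datetime      # when it entered the system
    how_validated: str  # how it was validated
    why_accepted: str   # why it was accepted

# Example entry a control layer might capture at merge time.
record = ProvenanceRecord(
    who="developer@example.com",
    what="AI-generated retry logic in the ingest pipeline",
    when=datetime.now(timezone.utc),
    how_validated="unit tests + senior engineer review",
    why_accepted="passes CI; behavior matches the approved design",
)
```

The point is not this particular schema, but that each answer becomes a recorded field rather than something reconstructed from memory after an incident.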


When AI becomes the source of decisions, the questions that once anchored accountability must be reconnected through control, not intent.

 
What Starts to Surface

As AI-generated code becomes embedded in real systems, a pattern of risk begins to emerge. 

Not as isolated incidents, but as compounding exposure. 

Intellectual property risk appears. Code that looks original may inherit restrictive licenses that only become visible during audits or transactions. 

Data leakage becomes normalized. Internal data is shared with AI tools without it feeling like a breach, until boundaries begin to erode. 

Security risk shifts. The code functions, but system-level interactions become harder to predict and reason about. 

Liability expands. When failures occur, responsibility does not stop with the model. It extends across developers, operators, and enterprises. 

Governance lags behind. Regulation is moving quickly, while many organizations still lack visibility into how AI is used inside their development pipelines. 

Each of these risks can be managed. 

Together, they fundamentally change the nature of the problem. 

 

What We See Emerging

From our vantage point, the response is not to reduce the use of AI. It is to build the layer that makes its use observable and controllable. 

What is beginning to take shape is a new control layer around AI-assisted development. 

Not a single tool, but a set of capabilities that restore visibility and accountability: 

  • Systems that track where generated code comes from  

  • Security embedded directly in development workflows  

  • Data protection that understands context, not just static rules  

  • AI bills of materials for models and generated artifacts

  • Audit trails that reconstruct decisions after the fact  

Individually, these address specific risks. 

Together, they make AI-assisted development observable, measurable, and defensible. 
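As one illustration of the list above, an AI bill of materials can be thought of as a manifest per generated artifact, recording the model, the inputs, and the results of license and validation checks. This is a hypothetical sketch under our own field names; it does not follow any particular BOM standard.

```python
import json

# Hypothetical AI bill of materials for a single generated artifact.
# All field names and values are illustrative assumptions.
aibom = {
    "artifact": "src/ingest/retry.py",
    "generator": {"model": "example-code-model", "version": "2026-01"},
    "inputs": {"prompt_hash": "sha256:abc123", "context_files": 2},
    "license_scan": {"status": "clean", "matches": []},
    "validation": ["unit tests", "human review"],
}

# Serialized, this travels with the artifact and supports later audits.
manifest = json.dumps(aibom, indent=2)
```

A manifest like this is what lets an audit trail reconstruct, long after the fact, where a given piece of generated code came from and how it was checked.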

 
What This Means in Practice

For industrial companies, this shift is already underway. 

AI adoption and operational risk management are now inseparable. 

Moving fast without control is no longer a competitive advantage. 
It becomes an accumulating liability. 

Software assurance becomes operational assurance. 
Code integrity directly affects uptime, safety, and regulatory exposure. 

Traceability becomes a capability. 
The ability to explain how systems were built becomes central to trust. 

Over time, the market will separate. 

Organizations that build these controls early will scale with confidence. 
Those that do not will slow down, whether by choice or by force. 



Closing Thought 

There is a subtle but decisive shift underway. 

For years, success meant building systems that worked and delivered value. 
 

Now, success will be defined by systems that can be understood, audited, and defended, even when parts of them were not written by humans. 

AI has changed how software is created. The next phase will be defined by who can control it. In industrial systems, that is not optional.  

It becomes a condition for deployment. 



Momenta is the leading Industrial Impact® venture capital firm, accelerating innovators across energy, manufacturing, smart spaces, and the supply chain. Our team of deep industry operators has helped scale industry leaders and innovators to improve critical industries, the environment, and people's quality of life for over a decade. PitchBook named Momenta among the world's top ten digital industry venture funds for both 2023 and 2024 in its Global Manager Performance Score League Tables, one of just two European-headquartered VCs to achieve a Top 10 ranking. For more information, please visit: momenta.vc