Announcement · 7 min read

Introducing Rai: responsible AI infrastructure, from edge to enterprise

Rai is governance-first AI infrastructure built to close the gap between AI adoption and meaningful oversight.

Artificial intelligence is being deployed faster than organisations can govern it. Models are integrated into production systems, agents are given access to sensitive data, and inference runs across jurisdictions with different regulatory requirements. The tooling to manage all of this has not kept pace. Most governance today is retrofitted: compliance checklists layered on top of systems that were never designed to be governed. That gap between capability and control is where real risk lives.

Rai exists to close that gap. Built by Responsible Engineering Ab in Finland, with open-source stewardship from the Finland-based Omnifi Foundation, Rai is infrastructure for teams who need to deploy AI responsibly without sacrificing performance or operational control.

The governance gap

The challenge is not that organisations lack policies. Most regulated industries have extensive frameworks covering data handling, model risk, and algorithmic accountability. The challenge is that these policies have no reliable enforcement point in the AI stack itself.

Consider a typical enterprise AI deployment. Models from multiple providers are accessed through APIs. Agents make decisions based on data from several internal systems. Inference happens at the edge, in the cloud, or both. Policies about which data can reach which model, which agents can take which actions, and which regions can process which requests must be enforced consistently across all of these environments.

Today, that enforcement is largely manual or handled by bespoke middleware that drifts out of alignment with policy as systems evolve. Governance becomes a bottleneck not because it is inherently slow, but because it was never given a proper place in the architecture.

A different starting point

Rai takes the position that governance is infrastructure. It is not a reporting layer, an audit trail bolted on after the fact, or a dashboard that shows what already happened. Governance decisions -- which requests to allow, how to route data, what constraints to apply to agent behaviour -- need to happen inline, at the speed of the systems they govern.

This led to three architectural commitments that shape everything Rai builds:

Governance is a first-class runtime concern. Policy evaluation happens in the request path, not in a sidecar or post-hoc analysis pipeline. If a request violates a governance rule, it is handled before it reaches a model or agent, not flagged after the response is already generated.

Performance and governance are not in tension. The overhead of governance enforcement should be negligible. Teams should not face a choice between deploying governance and meeting their latency requirements. This is an engineering problem, and it has engineering solutions.

Open source is non-negotiable for trust. Governance infrastructure that cannot be inspected is a contradiction. If the system enforcing your AI policies is itself a black box, you have not solved the governance problem -- you have moved it.
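The first commitment, inline policy evaluation, can be sketched in a few lines. This is an illustrative example only -- the types, rules, and names below are hypothetical, not Shield's actual API:

```rust
// Illustrative sketch: a governance decision made inline, before a
// request is forwarded to any model. All types and rules are hypothetical.

#[derive(Debug, PartialEq)]
enum Decision {
    Allow,
    Deny(&'static str),
}

struct Request {
    region: String,
    contains_pii: bool,
    target_model: String,
}

/// Evaluate governance policy in the request path. A violating request
/// is rejected here -- before it ever reaches a model or agent.
fn evaluate(req: &Request) -> Decision {
    // Example rule: PII may only be processed by the in-region model.
    if req.contains_pii && req.target_model != "local-model" {
        return Decision::Deny("PII must stay on in-region models");
    }
    // Example rule: EU-originated requests require EU-resident models.
    if req.region == "eu"
        && !req.target_model.starts_with("eu-")
        && req.target_model != "local-model"
    {
        return Decision::Deny("EU requests require EU-resident models");
    }
    Decision::Allow
}

fn main() {
    let req = Request {
        region: "eu".to_string(),
        contains_pii: true,
        target_model: "us-east-gpt".to_string(),
    };
    match evaluate(&req) {
        Decision::Allow => println!("forwarding request"),
        Decision::Deny(reason) => println!("blocked: {reason}"),
    }
}
```

The point of the sketch is the control flow: the deny happens before any forwarding, rather than being logged after a response has already been generated.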

The product suite

Rai's infrastructure is composed of three products at different stages of maturity, each addressing a distinct part of the AI governance challenge.

Rai Shield

Rai Shield is a high-performance AI governance layer with an integrated AI gateway. It is currently in early access.

Shield sits in the request path between the teams and systems that consume AI capabilities and the models, agents, and data sources that provide them. It routes, governs, and protects every interaction. Governance policies are evaluated inline, with decisions enforced before requests reach their destination.

The technical profile reflects the performance commitment. Shield is built with Rust and WebAssembly, delivering over 300,000 requests per second with sub-millisecond latency overhead in a binary under 10 megabytes. It deploys anywhere: edge locations, Kubernetes clusters, bare metal servers, or serverless environments.

That deployment flexibility matters because AI workloads are not confined to a single environment. A governance layer that only works in one deployment model forces teams to choose between running where they need to and governing what runs there. Shield does not impose that trade-off.

The choice of Rust and WebAssembly is deliberate. Rust provides memory safety without garbage collection pauses, which is critical for a system that sits in the hot path of every AI request. WebAssembly enables portable policy execution, allowing governance rules to be compiled once and enforced consistently across heterogeneous infrastructure.
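The "compile once, enforce everywhere" idea rests on writing policies as pure functions with no runtime dependencies. A hedged sketch of the pattern, not Shield's code -- the rule and the integer codes are invented for illustration:

```rust
// Illustrative sketch: a policy expressed as a pure function over flat
// inputs has no runtime dependencies, so the same source compiles
// natively for testing and to the wasm32-unknown-unknown target for
// portable enforcement. In a real Wasm module the function would also
// be exported with #[no_mangle]. Rule and codes are hypothetical.

/// Returns 1 to allow, 0 to deny.
pub extern "C" fn allow_request(region_code: u32, data_class: u32) -> u32 {
    const REGION_EU: u32 = 1;
    const CLASS_SENSITIVE: u32 = 2;
    // Example rule: sensitive data may not be processed outside the EU.
    if data_class == CLASS_SENSITIVE && region_code != REGION_EU {
        0
    } else {
        1
    }
}

fn main() {
    println!("{}", allow_request(1, 2)); // sensitive data, EU region: allowed
    println!("{}", allow_request(0, 2)); // sensitive data, non-EU region: denied
}
```

Because the function takes and returns plain integers across a C-compatible boundary, any Wasm host can call it without language-specific glue, which is what makes consistent enforcement across heterogeneous infrastructure feasible.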

Shield is licensed under LGPL-3.0 and MPL-2.0, ensuring that the governance layer itself remains open and auditable while allowing integration with proprietary systems.

Arai

Arai is an intelligent agent orchestration platform, currently in the research phase.

As AI agents grow more capable, the orchestration of multi-agent workflows becomes a governance surface in its own right. Which agents can communicate with each other, what data they can share, what actions they can take autonomously, and when human oversight is required are all governance questions that need systematic answers.

Arai is being designed to provide those answers as part of the orchestration itself, rather than as external constraints that agents might circumvent or that operators must enforce manually.
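One way the governance questions above can become enforceable data rather than external convention is to attach an explicit policy to each agent. This is a speculative sketch of the general pattern, not Arai's design -- every name and rule here is hypothetical:

```rust
use std::collections::{HashMap, HashSet};

// Illustrative sketch: who may talk to whom, what runs autonomously,
// and when a human must approve, expressed as data the orchestrator
// checks on every action. Not Arai's actual design.

#[derive(Debug, PartialEq)]
enum Verdict {
    Allow,
    RequireHuman,
    Deny,
}

struct AgentPolicy {
    /// Agents this agent may exchange messages with.
    may_message: HashSet<String>,
    /// Actions the agent may take without human sign-off.
    autonomous: HashSet<String>,
    /// Actions permitted only with human approval.
    supervised: HashSet<String>,
}

fn check_action(policies: &HashMap<String, AgentPolicy>, agent: &str, action: &str) -> Verdict {
    match policies.get(agent) {
        None => Verdict::Deny, // unknown agents are denied by default
        Some(p) if p.autonomous.contains(action) => Verdict::Allow,
        Some(p) if p.supervised.contains(action) => Verdict::RequireHuman,
        Some(_) => Verdict::Deny,
    }
}

fn demo_policies() -> HashMap<String, AgentPolicy> {
    let mut policies = HashMap::new();
    policies.insert(
        "triage-agent".to_string(),
        AgentPolicy {
            may_message: ["summary-agent".to_string()].into_iter().collect(),
            autonomous: ["read_ticket".to_string()].into_iter().collect(),
            supervised: ["issue_refund".to_string()].into_iter().collect(),
        },
    );
    policies
}

fn main() {
    let policies = demo_policies();
    println!("{:?}", check_action(&policies, "triage-agent", "read_ticket"));
    println!("{:?}", check_action(&policies, "triage-agent", "issue_refund"));
    println!("{:?}", check_action(&policies, "rogue-agent", "read_ticket"));
    println!("{}", policies["triage-agent"].may_message.contains("summary-agent"));
}
```

The deny-by-default branch is the important design choice: an agent the orchestrator does not know about gets no capabilities, rather than inheriting whatever the environment happens to allow.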

Prism

Prism is a privacy-first federated learning platform, also in the research phase.

Federated learning allows models to be trained and inference to be performed without centralising sensitive data. This is particularly relevant for organisations operating under strict data sovereignty requirements, where data cannot leave specific jurisdictions or environments.

Prism is exploring how to make federated approaches practical in constrained environments -- limited compute, limited bandwidth, strict residency requirements -- where traditional centralised training and inference pipelines are not viable.
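The core mechanism that keeps data local is worth making concrete. In the classic federated averaging step, each site trains on its own data and shares only model weights and a sample count; the aggregator combines them weighted by data size. This sketches the general technique, not Prism's implementation:

```rust
// Illustrative sketch of federated averaging (FedAvg): combine locally
// trained model weights, weighted by each site's sample count, so raw
// data never leaves the site. General technique, not Prism's code.

/// Aggregate per-site weight vectors into a global model.
/// `updates` pairs each site's weights with its local sample count.
fn fed_avg(updates: &[(Vec<f64>, usize)]) -> Vec<f64> {
    let total: usize = updates.iter().map(|(_, n)| n).sum();
    let dim = updates[0].0.len();
    let mut global = vec![0.0; dim];
    for (weights, n) in updates {
        let share = *n as f64 / total as f64;
        for (g, w) in global.iter_mut().zip(weights) {
            *g += share * w;
        }
    }
    global
}

fn main() {
    // Two sites train locally; only weights and counts are shared.
    let site_a = (vec![1.0, 2.0], 100); // 100 local samples
    let site_b = (vec![3.0, 4.0], 300); // 300 local samples
    let global = fed_avg(&[site_a, site_b]);
    println!("{:?}", global); // weighted toward the larger site: [2.5, 3.5]
}
```

The constrained-environment questions Prism is researching start where this sketch ends: how to run the local training and the aggregation round under tight compute, bandwidth, and residency limits.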

The trust paradox

There is a fundamental tension in AI governance that most commercial offerings ignore: the tools governing AI systems require at least as much trust as the AI systems themselves.

If an organisation deploys a proprietary governance layer, it is trusting that the governance tool faithfully enforces the policies it claims to enforce. It is trusting that the tool does not exfiltrate the data it inspects. It is trusting that policy evaluation behaves as documented, even in edge cases. And it has no way to verify any of this.

This is the trust paradox. You cannot meaningfully govern AI with tools you cannot inspect.

Open source resolves this directly. When the governance layer is open, organisations can audit the enforcement logic. Security teams can verify that data handling meets their requirements. Compliance teams can confirm that policy evaluation matches regulatory expectations. Engineers can extend and adapt the system to their specific needs without waiting for a vendor roadmap.

Rai's open-source commitment is not a distribution strategy. It is a direct consequence of taking governance seriously. The source is available at git.rai.onl, and contributions are stewarded by Omnifi Foundation to ensure long-term community governance of the project itself.

Data sovereignty as a design constraint

For many organisations, particularly those in regulated sectors such as healthcare, finance, and the public sector, data sovereignty is not a feature request. It is a hard constraint.

AI governance infrastructure must respect these constraints at every level. Policy evaluation must happen where the data is, not in a remote SaaS environment. Governance decisions must be enforceable without sending request data to third parties. The entire system must be deployable on infrastructure the organisation controls.

Rai is built for self-hosted deployment from the ground up. There is no cloud dependency, no telemetry that cannot be disabled, and no component that requires connectivity to an external service to function. Organisations retain full control over their governance infrastructure and the data it processes.

What comes next

Rai Shield is available in early access today. Teams working with AI in environments where governance, performance, and data sovereignty matter are encouraged to explore the project and provide feedback.

The research work on Arai and Prism continues, informed by real-world requirements from organisations navigating the practical challenges of responsible AI deployment.

The broader ambition is straightforward: AI infrastructure should be governable by default, not by exception. The organisations deploying AI should have the tools to enforce their own policies, inspect how enforcement works, and maintain sovereignty over their data and decisions.

The code is at git.rai.onl. The work is open. The conversation starts now.