Prism

Train models and run inference across distributed, constrained, and sensitive environments. Prism provides sandboxed execution, differential privacy, and data locality enforcement without centralising raw data.

Performance targets

Research-stage performance targets for federated learning workloads.

Federated nodes: TBD
Aggregation latency: TBD
Privacy budget efficiency: TBD
Constrained-device footprint: TBD

Key features

Federated training

Train models across distributed datasets without centralising raw data. Gradient aggregation and model updates happen at the coordination layer while data remains local.
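
The coordination flow can be pictured with a small federated-averaging sketch. Everything here is illustrative: local_update, federated_round, and the simulated nodes are hypothetical stand-ins, not Prism's published API.

```python
# Minimal federated-averaging sketch (hypothetical API, not Prism's).
import numpy as np

def local_update(weights, data, labels, lr=0.1):
    """One local gradient step on a linear model.

    Raw data stays with the caller; only updated weights are returned
    to the coordination layer.
    """
    preds = data @ weights
    grad = data.T @ (preds - labels) / len(labels)
    return weights - lr * grad

def federated_round(global_weights, nodes):
    """Aggregate locally trained weights, weighted by dataset size."""
    updates, sizes = [], []
    for data, labels in nodes:
        updates.append(local_update(global_weights.copy(), data, labels))
        sizes.append(len(labels))
    return np.average(updates, axis=0, weights=sizes)

# Two simulated participants, each holding its own local shard.
rng = np.random.default_rng(0)
nodes = [(rng.normal(size=(32, 4)), rng.normal(size=32)) for _ in range(2)]
weights = np.zeros(4)
for _ in range(10):
    weights = federated_round(weights, nodes)
```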

Differential privacy

Formal privacy guarantees through calibrated noise injection and privacy budget management. Configurable epsilon and delta parameters per training round and per participant.
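
To make the epsilon and delta parameters concrete, here is a sketch of the classical Gaussian mechanism applied to a clipped update. The formula assumes epsilon < 1, and the function names are hypothetical; Prism's actual accountant and noise calibration are not yet specified.

```python
# Gaussian-mechanism sketch for a per-round (epsilon, delta) budget.
import math
import numpy as np

def gaussian_sigma(sensitivity, epsilon, delta):
    """Classical Gaussian-mechanism noise scale (valid for epsilon < 1)."""
    return sensitivity * math.sqrt(2 * math.log(1.25 / delta)) / epsilon

def privatise_update(update, clip_norm, epsilon, delta, rng):
    """Clip a participant's update, then add calibrated Gaussian noise."""
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / max(norm, 1e-12))
    sigma = gaussian_sigma(clip_norm, epsilon, delta)
    return clipped + rng.normal(0.0, sigma, size=update.shape)

rng = np.random.default_rng(0)
noisy = privatise_update(np.ones(4), clip_norm=1.0,
                         epsilon=0.5, delta=1e-5, rng=rng)
```

Clipping bounds each participant's sensitivity, which is what makes the noise scale calibratable in the first place.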

Sandboxed execution

All computation runs in isolated sandboxes with strict resource limits. Prevents data exfiltration, side-channel attacks, and resource exhaustion across participant boundaries.
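
One piece of this, resource limits, can be sketched with OS-level primitives. This POSIX-only example is a simplification: real sandboxing also needs isolation mechanisms such as namespaces and seccomp, which a few rlimit calls do not provide.

```python
# Resource-limit sketch (POSIX only); real isolation needs far more.
import resource
import subprocess
import sys

def apply_limits():
    """Runs in the child process before exec: cap CPU time and memory."""
    resource.setrlimit(resource.RLIMIT_CPU, (5, 5))              # 5 s CPU
    resource.setrlimit(resource.RLIMIT_AS, (512 * 2**20,) * 2)   # 512 MiB

# Run an untrusted participant step in its own limited process.
proc = subprocess.run(
    [sys.executable, "-c", "print('participant step ok')"],
    preexec_fn=apply_limits, capture_output=True, text=True, timeout=10,
)
print(proc.stdout.strip())
```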

Constrained-device inference

Optimised runtimes for resource-limited devices. Model quantisation, pruning, and adaptive computation enable inference on edge hardware with limited memory and compute.
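
As a toy illustration of the memory savings, the sketch below performs 8-bit symmetric post-training quantisation of a weight tensor, roughly a 4x reduction versus float32. Prism's runtime internals are not published; the function names are illustrative.

```python
# Toy int8 symmetric quantisation: ~4x smaller than float32 weights.
import numpy as np

def quantise_int8(weights):
    """Map float weights to int8 plus a per-tensor scale factor."""
    scale = max(float(np.abs(weights).max()), 1e-12) / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantise(q, scale):
    return q.astype(np.float32) * scale

w = np.random.default_rng(0).normal(size=(64, 64)).astype(np.float32)
q, scale = quantise_int8(w)
error = float(np.abs(w - dequantise(q, scale)).max())
print(f"int8: {q.nbytes} B, float32: {w.nbytes} B, max error {error:.4f}")
```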

Data locality enforcement

Data never leaves its origin. Computation travels to the data, not the reverse. Cryptographic verification ensures compliance with locality policies throughout the training lifecycle.
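
One way to picture the verification step is a signed locality attestation, sketched below with an HMAC for brevity. The scheme and every name in it are hypothetical; a production system would use asymmetric signatures or hardware-backed remote attestation rather than a shared secret.

```python
# Hypothetical locality-attestation sketch (HMAC stands in for a real
# signature scheme).
import hashlib
import hmac

def attest(node_key, node_id, region, dataset_digest):
    """Participant signs where its data lives and what it looks like."""
    message = f"{node_id}|{region}|{dataset_digest}".encode()
    return hmac.new(node_key, message, hashlib.sha256).hexdigest()

def verify(node_key, node_id, allowed_regions, region, dataset_digest, tag):
    """Coordinator checks the tag and the locality policy before accepting."""
    expected = attest(node_key, node_id, region, dataset_digest)
    return region in allowed_regions and hmac.compare_digest(expected, tag)

key = b"per-node shared secret (illustration only)"
digest = hashlib.sha256(b"local shard bytes").hexdigest()
tag = attest(key, "node-7", "eu-west", digest)
assert verify(key, "node-7", {"eu-west"}, "eu-west", digest, tag)
```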

Cross-organisation collaboration

Multiple organisations contribute to shared models without exposing proprietary datasets. Secure aggregation protocols prevent reconstruction of individual contributions.
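
The idea behind secure aggregation can be shown with a toy pairwise-masking sketch: each pair of participants adds and subtracts a shared random mask, so individual updates are hidden while their sum is preserved. Real protocols (for example Bonawitz et al., 2017) derive masks through pairwise key agreement and tolerate dropouts; the shared seed below is purely for illustration.

```python
# Toy pairwise masking: individual updates are hidden, the sum is exact.
import numpy as np

def masked_updates(updates, seed=0):
    rng = np.random.default_rng(seed)  # toy stand-in for pairwise keys
    masked = [u.astype(float).copy() for u in updates]
    for i in range(len(updates)):
        for j in range(i + 1, len(updates)):
            mask = rng.normal(size=updates[0].shape)
            masked[i] += mask   # participant i adds the pairwise mask
            masked[j] -= mask   # participant j subtracts it
    return masked

updates = [np.ones(3), 2 * np.ones(3), 3 * np.ones(3)]
masked = masked_updates(updates)
# The aggregator sees only masked values, yet the aggregate is exact.
assert np.allclose(sum(masked), sum(updates))
```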

Prism is in development

Prism is in early research and development. Register your interest to follow progress and contribute to the direction of privacy-first federated learning.

Development updates will be shared with registered participants.