Today's cloud security comes down to one thing: trust the provider. You can audit policies, review compliance reports, and check certifications, but you cannot independently verify what's actually running on the other end of a TLS connection.
Super Swarm removes that dependency.
When an application is published through Super Swarm, its TLS identity and hostname become cryptographically bound to a specific runtime configuration inside hardware-attested Confidential VMs. That binding is externally verifiable. Any client can check the proof before sending a single byte of data.
Instead of granting access based on pre-shared secrets, network location, or institutional trust, services make access decisions based on verified runtime evidence — what code is running, in what environment, with what configuration.
And this isn't one confidential machine. Super Swarm is a distributed control plane that spans multiple nodes and infrastructure providers, creating a single hardware-attested trust domain. The foundation is confidential computing — Trusted Execution Environments (TEEs) that keep data encrypted in memory and inaccessible to the infrastructure operator. Networking, orchestration, identity, workload execution — all of it runs inside TEE-protected memory. Operators and infrastructure providers never see the data.
Confidentiality is hardware-enforced and cryptographically proven. Trust starts at attested execution and propagates through a closed internal PKI — the root key is generated inside the TEE and never leaves it.
The end result: heterogeneous infrastructure — on-premise servers, public clouds, or any combination of both — behaves as one cryptographically unified environment. An organization can start on its own hardware and seamlessly scale into the public cloud when it needs to, with the same security guarantees across the entire fabric. And every service running inside it can prove exactly how it's running.
Everything inside the Super Swarm boundary runs inside TEE-protected memory, distributed across a cluster of Confidential VMs, so every component operates within hardware isolation.
The system is made up of the following components:
HTTPS Attested API Gateway (Secure Ingress) — The public entry point. Applications are exposed over HTTPS, and connection-level trust is anchored through attested TLS certificates.
Secure Data Gateway — A controlled data access layer between workloads and protected external resources. It mediates connections and enforces access policies.
Confidential Kubernetes Cluster — The orchestration layer. It schedules and manages workloads across Confidential VM nodes.
Confidential AI Agents — Containerized workloads (inference, training, automation, analytics) executed within the confidential runtime with full TEE protection and policy enforcement.
Confidential Storage — Encrypted storage for models, datasets, checkpoints, and intermediate artifacts. Key management is confined to the TEE. Includes sealed storage and data classification controls.
Certification Authority (Internal CA) — The internal trust anchor. It establishes node and service identity and forms the closed PKI of the Swarm.
Confidential VM Runtime Layer (×N) — The cluster of hardware-protected VMs providing attested execution.
Outside the trust boundary, the Identity, Access & Governance layer integrates with corporate identity providers (Okta, Azure AD), SIEM platforms (Splunk, Sentinel), and compliance dashboards covering SOC2, HIPAA, and GDPR. It provides operational visibility but does not participate in confidential workload execution — it never sees the data.

The foundation is Confidential VMs powered by AMD SEV-SNP or Intel TDX. The architecture is infrastructure-agnostic. It can run on GCP (including NVIDIA Confidential Computing GPUs for AI workloads), AWS (EC2), Azure (ACC), OVHcloud, or any other cloud or enterprise-owned servers with compatible processors.
That means enterprise clients can deploy the platform inside their own infrastructure to meet data residency requirements — or run a hybrid model, splitting workloads between their own hardware and cloud providers, all within a single shared trust space.
At startup, every VM goes through remote attestation. It presents a hardware report to the built-in Certification Authority, which verifies the integrity of the environment and issues an attested certificate. Together, these attested nodes form a unified execution fabric — every node cryptographically verifies the others before participating in cluster operations.
The chain of certificates creates a distributed, hardware-anchored trust domain. That's the Super Swarm Confidential Verifiable Cloud: all nodes confirm confidentiality to each other, regardless of where they physically sit.
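The boot-time handshake can be sketched as follows. This is a minimal illustration, assuming a CA that compares reports against a set of known-good launch measurements; real SEV-SNP and TDX reports are signed hardware structures verified against the vendor's certificate chain, not a simple set lookup, and all names here are hypothetical.

```python
# Sketch: node attestation at startup. The CA admits a node only if its
# hardware report matches a known-good state, then issues an attested
# certificate. Measurement values and identifiers are illustrative.
from dataclasses import dataclass
from typing import Optional

# Hypothetical launch measurements of approved VM images.
KNOWN_GOOD_MEASUREMENTS = {
    "sha384:3f7a11",
}

@dataclass
class AttestationReport:
    node_id: str
    measurement: str  # launch measurement of firmware + VM image

def issue_attested_certificate(report: AttestationReport) -> Optional[str]:
    """Verify the environment's integrity before admitting the node."""
    if report.measurement not in KNOWN_GOOD_MEASUREMENTS:
        return None  # node is refused and never joins the fabric
    # The real CA signs an X.509 certificate binding node identity to the
    # attested measurement; a string stands in here.
    return f"attested-cert/{report.node_id}/{report.measurement}"
```

A node that cannot present a known-good measurement gets no certificate, and without a certificate it cannot participate in any cluster operation.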
Inside the attested VMs, Swarm deploys infrastructure components — Kubernetes/RKE2, object storage, container registry, and supporting distributed services. From the user's perspective, the experience is familiar: you get a ClusterSpace (an isolated namespace with allocated resources) and deploy applications using standard Kubernetes tools.
Applications run as containers within the Confidential Kubernetes Cluster. All computation happens inside Confidential VMs. Nodes are managed exclusively through automated orchestration mechanisms inside the distributed control plane.
Applications have two states: unpublished and published.
Unpublished means the application is isolated. No inbound access, restricted outbound connections. Developers can freely modify configuration — it's a workspace.
When an application is published, the execution environment locks down: configuration is frozen, ingress routes and an attested TLS certificate are created, and the resulting runtime state is captured in signed Deploy Evidence.
From that point, any external client can retrieve the Deploy Evidence by hostname, verify the Swarm signature, and confirm that the application runs inside a confidential environment with a specific, known configuration. For day-to-day use, adding the TLS certificate to trusted authorities enables standard HTTPS communication — no special client needed.
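The client-side check described above can be sketched like this. The evidence shape, claim names, and the HMAC stand-in for the Swarm signature are assumptions for illustration; the real scheme uses asymmetric signatures rooted in the Swarm's internal CA.

```python
# Sketch: verify Deploy Evidence before sending a single byte.
import hashlib
import hmac
import json

SWARM_KEY = b"demo-verification-key"  # stand-in for the Swarm root of trust

def sign_evidence(claims: dict) -> dict:
    """Produce a signed Deploy Evidence record (demo stand-in for the Swarm)."""
    payload = json.dumps(claims, sort_keys=True).encode()
    sig = hmac.new(SWARM_KEY, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "signature": sig}

def verify_before_connecting(evidence: dict, expected_digest: str) -> bool:
    """Refuse to connect unless the runtime evidence checks out."""
    payload = json.dumps(evidence["claims"], sort_keys=True).encode()
    expected = hmac.new(SWARM_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, evidence["signature"]):
        return False  # forged or tampered evidence
    claims = evidence["claims"]
    return (claims.get("tee") in {"SEV-SNP", "TDX"}
            and claims.get("image_digest") == expected_digest)
```

Only if both the signature and the expected configuration match does the client proceed with the HTTPS connection.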
If the application is returned to the unpublished state, routes are removed, certificates are revoked, and configuration changes become available again.
When an application needs access to sensitive data — private storage, model registries, private APIs — the Secure Data Gateway sits in front of the resource.
The flow is straightforward: the application connects via mTLS, presenting its client certificate. The gateway retrieves the Deploy Evidence using the certificate fingerprint, verifies the signature, and checks policy compliance. Only then does it proxy the request.
Data owners control access based on trust in the execution environment — not by distributing secrets, not by integrating with specialized APIs, not by hoping the operator follows the rules. The proof is cryptographic.
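The gateway's authorization decision reduces to a lookup and a claims check, sketched below. The policy format, claim names, and fingerprint-keyed evidence store are illustrative assumptions; signature verification is assumed to have happened when the evidence was registered.

```python
# Sketch: Secure Data Gateway decision for an incoming mTLS connection.
def gateway_authorize(fingerprint: str,
                      evidence_store: dict,
                      policy: dict) -> bool:
    """Allow the request only if attested evidence exists and satisfies policy."""
    evidence = evidence_store.get(fingerprint)
    if evidence is None:
        return False  # unknown client: no attested workload behind this cert
    claims = evidence["claims"]
    # Every policy requirement must be met by the verified runtime claims.
    return all(claims.get(key) == value for key, value in policy.items())
```

Note that the data owner expresses requirements purely in terms of verified runtime claims — no secret is ever handed to the workload.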
In practice, the resources behind the gateway fall into two broad categories.
Internal data sources — on-prem databases, cloud object storage (S3, Azure Blob, GCS), ERP/CRM systems (SAP, Salesforce), and data lakes. These are the assets an organization already has but needs to expose to workloads running inside the confidential environment without giving the infrastructure operator access.
External partner data sources — data, models, or services owned by third parties. Model registries (HuggingFace, private registries), training pipelines, fine-tuning endpoints, or partner datasets shared under strict access conditions. The gateway lets external owners verify that their assets are being consumed inside a TEE-protected environment with a known configuration before granting access.
AI agents deploy into ClusterSpace as standard containers. The platform works with orchestration frameworks like n8n, OpenClaw, and LangChain/LangGraph without architectural modification. Confidentiality is enforced at the infrastructure layer, not the application layer — agents don't need to be rewritten to be secure.
What agents get out of the box: TEE-protected inference runtime, model encryption at rest and in transit, secure model loading, and RBAC/policy enforcement.
Access to sensitive data and models uses the same gateway mechanism described above. Resource owners define policies based on Deploy Evidence. This means a single agent can securely interact with data from multiple organizations — as long as its execution environment satisfies each owner's requirements.
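The multi-organization case can be sketched as one set of evidence claims evaluated against several owners' policies at once. Policy structure and claim names are illustrative assumptions.

```python
# Sketch: which data owners grant access to a given agent, based on its
# Deploy Evidence claims and each owner's independent policy.
def owners_granting_access(claims: dict, owner_policies: dict) -> list:
    """List every owner whose policy the agent's evidence satisfies."""
    return sorted(
        owner for owner, policy in owner_policies.items()
        if all(claims.get(key) == value for key, value in policy.items())
    )
```

Each owner's decision is independent: the agent sees only the resources whose policies its execution environment satisfies.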
The interaction model follows minimal disclosure. Confidential data is processed inside the TEE. Only aggregated outputs, analytical reports, or decisions leave the environment. Agents can also reach out to public APIs to enrich context, and receiving parties can verify agent identity via Deploy Evidence.
The security foundation is hardware isolation combined with a closed internal PKI.
All computation runs inside Confidential VMs, with nodes managed exclusively through automated orchestration inside the distributed control plane.
Trust is anchored in hardware and propagated through the internal certificate chain across the entire execution fabric. The PKI is entirely self-contained — rooted inside the TEE, with no external dependencies.
This removes reliance on personnel or external systems for confidentiality guarantees. No one needs to be trusted — the hardware enforces isolation, and the cryptographic trust chain operates entirely within the confidential domain. Integrity and authenticity are structural properties of the system, not operational commitments.
Cloud computing has always required a leap of faith. You hand over your data, your models, your logic — and trust that the provider's policies, personnel, and perimeter defenses will keep them safe.
Super Swarm replaces that leap of faith with a verifiable claim. Every service can prove its execution state. Every data owner can check that proof before granting access. Every node in the fabric confirms its confidentiality to every other node, regardless of who owns the hardware underneath.
This isn't a marginal improvement to cloud security. It's a different foundation — one where trust is earned cryptographically, not assumed contractually. And it's built to run across clouds, across on-premise infrastructure, across organizations, and across borders, without asking anyone to take anyone else's word for it.