Confidential computing has matured. Cloud providers offer confidential VMs and container-level isolation. Enterprise platforms make it straightforward to deploy applications inside TEEs across clouds. Attestation has moved well beyond just proving the hardware is genuine — products now attest workload images, runtime policies, and configuration state.
But the basic model hasn't changed: confidential computing is still something you add to existing infrastructure. Most offerings are designed around the boundaries of that infrastructure, and they work well when the problem fits inside those boundaries.
It gets harder when you need a single trust domain that spans multiple machines, infrastructure providers, and organizations — and you need external parties to be able to verify what's running without adopting your provider's tooling. That's the problem Super Swarm was built around.
Three capabilities define what's architecturally specific to Super Swarm.
If you deploy confidential workloads on AWS today, you get a trust boundary inside AWS. If you also run on Azure, you get a separate one there. Multi-cloud platforms let you deploy the same software to both, but each deployment lives inside its own provider's environment.
Super Swarm works differently. Nodes joining the cluster go through mutual hardware attestation with the nodes already present. The process is automatic, with no manual configuration and no external service coordinating who's allowed in. Once verified, a node on bare metal in Frankfurt and a node on GCP in Virginia operate under the same trust model. AWS, Azure, OVHcloud, on-premises servers — the attestation mechanism is the same regardless of where the hardware sits.
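The shape of that join handshake can be sketched as follows. This is a simplified simulation, not Super Swarm's actual protocol: real quotes come from the TEE hardware and its vendor signing chain, and every name here (`make_quote`, `verify_quote`, `mutual_attest`) is illustrative.

```python
import hashlib, hmac, os

# Stand-in for a hardware root of trust shared by genuine TEEs.
# Real attestation uses vendor-signed quotes; HMAC is a simulation.
HW_KEY = b"simulated-hardware-signing-key"

def make_quote(node_id: str, nonce: bytes) -> bytes:
    """Simulated attestation quote binding a node identity to a fresh nonce."""
    return hmac.new(HW_KEY, node_id.encode() + nonce, hashlib.sha256).digest()

def verify_quote(node_id: str, nonce: bytes, quote: bytes) -> bool:
    return hmac.compare_digest(make_quote(node_id, nonce), quote)

def mutual_attest(a: str, b: str) -> bool:
    """Both sides challenge each other; admission requires both checks to pass."""
    nonce_a, nonce_b = os.urandom(16), os.urandom(16)
    quote_b = make_quote(b, nonce_a)   # b answers a's challenge
    quote_a = make_quote(a, nonce_b)   # a answers b's challenge
    return verify_quote(b, nonce_a, quote_b) and verify_quote(a, nonce_b, quote_a)

print(mutual_attest("node-frankfurt", "node-virginia"))  # True for genuine nodes
```

The point of the symmetry is that neither side is privileged: the joining node verifies the cluster just as the cluster verifies the joiner, with no external coordinator in the loop.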
That means an organization running on its own hardware today doesn't need a separate security architecture when it adds cloud capacity tomorrow. The fabric grows, the trust model doesn't change.
Attestation across the market has evolved. Products now attest container images, runtime policies, platform binaries, and in some cases configuration state. How that evidence is structured, how much it covers, and how external parties can access it varies considerably.
Super Swarm's approach is Deploy Evidence. When you publish an application, the system produces a signed proof describing what's deployed: code, configuration, image digests, resource allocations — everything that shapes the runtime. The proof is formatted as a JWS document and accessible via API using the application's hostname.
Three things make this specific to Super Swarm. The evidence covers the deployed state that influences execution, not just individual components. It's tied directly to the service's TLS identity and hostname. And it's generated as part of a publish/unpublish lifecycle — the execution environment locks at the moment of publication. If you need to change anything, you unpublish, modify, and republish with fresh evidence.
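A minimal sketch of what such a signed proof looks like on the wire, using JWS compact serialization. The payload fields are assumptions for illustration, and a real deployment would sign with an asymmetric key held inside the TEE; the stdlib HMAC below just keeps the example self-contained.

```python
import base64, hashlib, hmac, json

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

SIGNING_KEY = b"key-held-inside-the-TEE"  # simulation; never exported in practice

# Hypothetical Deploy Evidence payload: the full deployed state, not one component.
evidence = {
    "hostname": "orders.example.internal",
    "image_digests": {"api": "sha256:deadbeef"},
    "config_hash": "sha256:cafef00d",
    "resources": {"cpu": "2", "memory": "4Gi"},
}

header = {"alg": "HS256", "typ": "JWS"}
signing_input = f"{b64url(json.dumps(header).encode())}.{b64url(json.dumps(evidence).encode())}"
signature = hmac.new(SIGNING_KEY, signing_input.encode(), hashlib.sha256).digest()
jws = f"{signing_input}.{b64url(signature)}"
print(jws.count("."))  # 2 — compact JWS: header.payload.signature
```

Because the whole payload is signed as one unit, any post-publication change to code, configuration, or resources invalidates the proof, which is what makes the publish/unpublish lifecycle enforceable.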
Verification in confidential computing takes several forms today. The verifying party might need to use a cloud provider's attestation service, check exportable evidence with open-source tools, or rely on RA-TLS to verify the TEE during the TLS handshake.
Super Swarm takes a simpler path. Deploy Evidence is retrievable by hostname over a standard HTTPS connection — you fetch it, verify the signature, and see the runtime configuration. There's nothing to install on the verifying side. For day-to-day use, adding the Swarm's TLS certificate to your trusted authorities enables normal HTTPS communication with the service.
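The verifier's side of that exchange reduces to a signature check over a compact JWS. The sketch below is an assumption-laden illustration, not Super Swarm's API: HMAC stands in for the real asymmetric scheme, and the network fetch is replaced by a locally constructed document so the verification logic stands alone.

```python
import base64, hashlib, hmac, json

def b64url_decode(s: str) -> bytes:
    return base64.urlsafe_b64decode(s + "=" * (-len(s) % 4))

def verify_deploy_evidence(jws: str, key: bytes) -> dict:
    """Split compact JWS, recompute the signature, return the payload if valid."""
    header_b64, payload_b64, sig_b64 = jws.split(".")
    expected = hmac.new(key, f"{header_b64}.{payload_b64}".encode(),
                        hashlib.sha256).digest()
    if not hmac.compare_digest(expected, b64url_decode(sig_b64)):
        raise ValueError("Deploy Evidence signature check failed")
    return json.loads(b64url_decode(payload_b64))

# In practice the document would be fetched over HTTPS by the service's
# hostname; here we build one locally so the example is self-contained.
key = b"demo-key"
enc = lambda b: base64.urlsafe_b64encode(b).rstrip(b"=").decode()
header = enc(b'{"alg":"HS256"}')
payload = enc(json.dumps({"hostname": "orders.example.internal"}).encode())
sig = enc(hmac.new(key, f"{header}.{payload}".encode(), hashlib.sha256).digest())

evidence = verify_deploy_evidence(f"{header}.{payload}.{sig}", key)
print(evidence["hostname"])  # orders.example.internal
```

Nothing in the verifier depends on the platform that produced the proof, which is the property the hostname-based retrieval is meant to preserve.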
This matters most in multi-party scenarios. When the verifying parties don't share infrastructure, tooling, or a vendor relationship with whoever is running the service, the friction of verification becomes the bottleneck. Super Swarm keeps that friction close to zero for everyone.
These capabilities exist elsewhere in various forms. Super Swarm's implementations are architecturally different.
Most confidential computing verification chains involve an external trust anchor somewhere, whether that's a cloud attestation service, an external certificate authority, or a vendor's key management system. These are real dependencies. The verifying party has to evaluate and trust each one.
Super Swarm's PKI is rooted inside the TEE. The root key originates within the confidential environment at startup and stays there — no external party ever holds a copy. Everything from node and service identity to certificate issuance happens inside the trust boundary. The internal identity and certificate chain closes within the confidential domain, without depending on an external service to anchor it.
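The key property can be simulated in a few lines: the root key comes into existence inside the trust boundary and never crosses it, and issuance and verification both happen against that internal root. This is a conceptual sketch with HMAC standing in for real asymmetric certificate signing; the class and method names are illustrative.

```python
import hashlib, hmac, os

class EnclaveCA:
    """Simulated in-TEE certificate authority. The root key is generated at
    startup inside the trust boundary and never leaves this object."""

    def __init__(self):
        self._root_key = os.urandom(32)  # never exported

    def issue(self, identity: str) -> bytes:
        """Issue a credential for a node or service identity."""
        return hmac.new(self._root_key, identity.encode(), hashlib.sha256).digest()

    def verify(self, identity: str, cert: bytes) -> bool:
        # The chain closes within the confidential domain:
        # no external anchor is consulted.
        return hmac.compare_digest(self.issue(identity), cert)

ca = EnclaveCA()
node_cert = ca.issue("node-frankfurt")
print(ca.verify("node-frankfurt", node_cert))  # True
print(ca.verify("node-imposter", node_cert))   # False
```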
When an application inside Super Swarm needs access to a protected resource, the Secure Data Gateway looks up the application's Deploy Evidence using its certificate fingerprint, confirms the signature is valid, and evaluates the data owner's access policy against it. Access is proxied only after the check passes.
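That decision path is three steps: look up evidence by certificate fingerprint, check its signature, evaluate the owner's policy. The sketch below simulates the flow with in-memory stand-ins; the store layout, fingerprint format, and policy shape are all assumptions, not Super Swarm's actual interfaces.

```python
import hashlib, hmac

EVIDENCE_KEY = b"swarm-evidence-key"  # simulated evidence signing key

def sign(evidence: dict) -> bytes:
    """Deterministic signature over the evidence contents (simulation)."""
    return hmac.new(EVIDENCE_KEY, repr(sorted(evidence.items())).encode(),
                    hashlib.sha256).digest()

# Evidence store keyed by TLS certificate fingerprint (hypothetical layout).
EVIDENCE_STORE = {
    "fp:aa11": {"evidence": {"hostname": "analytics.example.internal",
                             "image": "sha256:deadbeef"}},
}
EVIDENCE_STORE["fp:aa11"]["signature"] = sign(EVIDENCE_STORE["fp:aa11"]["evidence"])

def owner_policy(evidence: dict) -> bool:
    """Data owner's rule: only this exact image may touch the data."""
    return evidence.get("image") == "sha256:deadbeef"

def gateway_allows(cert_fingerprint: str) -> bool:
    entry = EVIDENCE_STORE.get(cert_fingerprint)
    if entry is None:
        return False                       # no evidence, no access
    if not hmac.compare_digest(sign(entry["evidence"]), entry["signature"]):
        return False                       # evidence tampered or unsigned
    return owner_policy(entry["evidence"]) # proxy access only if policy passes

print(gateway_allows("fp:aa11"))    # True
print(gateway_allows("fp:unknown")) # False
```

Each data owner supplies their own `owner_policy` equivalent, which is what lets multiple owners gate the same workload independently.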
Attestation-gated access exists across the market — cloud providers and enterprise platforms offer policy-based key release tied to attestation claims. The difference here is that the policy check runs against the same Deploy Evidence that any third party can retrieve and verify independently. The data owner doesn't rely on a platform's internal attestation flow. They check the execution environment using the same proof that's available to everyone, and each data owner checks on their own.
Super Swarm runs on standard Kubernetes and deploys as containers. In practical terms, it's a confidential execution layer you install on your own infrastructure, extend into public cloud, or run as a hybrid. The security model stays the same regardless of where the hardware sits.
This also opens a different kind of use case. Infrastructure providers — a regional cloud, an enterprise IT organization, a managed services company — can deploy Super Swarm on their own hardware and offer verifiable confidential computing to their customers, complete with Deploy Evidence their tenants' partners and customers can verify independently.