The problem
A company (the AI consumer) wants to use a third-party model — for example, a bank using an external fraud-detection AI, or a retailer using a marketing analytics model.
But neither side can fully trust the other:
- The AI owner doesn’t want to expose proprietary model weights or code (their core IP).
- The consumer doesn’t want to share sensitive customer data or transaction details with the vendor.
Traditional cloud deployment forces one side to give something up — either the model leaves its safe environment, or the client’s data does.
How it works with Super Protocol
Super provides a neutral confidential execution zone, based on hardware-protected enclaves (trusted execution environments, TEEs), where the AI model and client data can interact securely, with cryptographic proof of privacy.
- The AI owner uploads the encrypted model to a verified TEE environment.
- The consumer sends their input data into that same enclave.
- The model runs inside the enclave — data stays sealed, model weights remain invisible.
- The enclave outputs only the results (predictions, scores, analytics) — never raw data or model internals.
- Both parties receive attestation proofs confirming what code ran, where, and that confidentiality was enforced end-to-end.
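The information-flow property behind these steps can be sketched as a toy simulation. This is not the Super Protocol SDK: `enclave_run`, the linear scorer standing in for a real model, and the SHA-256 digest standing in for a hardware attestation quote are all illustrative assumptions.

```python
import hashlib

def enclave_run(model_weights, client_data, enclave_code):
    """Toy stand-in for a TEE session: model weights and client data
    go in, but only the result and an attestation digest come out."""
    # Attestation stand-in: a digest binding the exact code that executed.
    # (Real TEEs produce a hardware-signed quote, not a bare hash.)
    attestation = hashlib.sha256(enclave_code.encode()).hexdigest()
    # Model evaluation: a toy linear score in place of the real model.
    score = sum(w * x for w, x in zip(model_weights, client_data))
    # The returned dict contains neither the weights nor the raw data.
    return {"score": score, "attestation": attestation}

# The vendor's weights and the consumer's data meet only inside the call;
# each party sees just the score and the digest of the code that ran.
result = enclave_run([0.5, -1.0], [2.0, 1.0], "fraud-model-enclave-v1")
```

The point of the sketch is the boundary: everything sensitive stays local to `enclave_run`, and the output is limited to the result plus evidence of what executed.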
The outcome
- The AI owner can safely license or monetize their model without risk of IP theft.
- The consumer can use AI on private data while staying compliant and protected.
- Both sides can audit the process through cryptographic verification.