Advanced use cases for attestation – a technical introduction

This article explores advanced use cases for attestation in Confidential Computing, demonstrating how the technology can be applied to complex scenarios including AI, multi-stakeholder systems, and multi-stage processes.

Mike Bursell

Introduction

This article provides a technical introduction to some advanced use cases for attestation. Super Protocol is a provider of services that use Confidential Computing and a strong proponent of the importance of attestation: all of Super Protocol's services employ attestation to provide users and other stakeholders with strong cryptographic assurances around the security of various aspects of the services. This article, the second in a series of three, builds on the first, Attestation and Confidential Computing – a Technical Introduction, which introduces attestation from a technical point of view and is worth reading before this one. Here, we examine some advanced use cases for the techniques explained in the first article; the third article details how Super Protocol's solution uses various techniques to allow simple and efficient security for all its stakeholders. You can find a quick recap of the material from the first article below.

Brief attestation recap

Attestation, in the context of Confidential Computing, provides cryptographic assurances about applications and data running in hardware-based Trusted Execution Environments. The attestation process revolves around two steps:

  • The creation of an attestation measurement by the silicon providing the TEE;
  • Verification of the attestation measurement by a verifying party, creating an attestation verification.

In the first article, we looked at how the attestation verification can be used as a cryptographic certificate, with the verifying party acting as a Certificate Authority (CA), and the importance of being able to trust the CA in this context. We also looked at how certificates based on attestation verification can be used not just for securing transport communications (e.g. TLS or IPsec), but also for other purposes such as integrity assurances and provenance of data and workloads. Let's move on to more advanced uses of these special certificates.
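
To make this concrete, here is a minimal sketch in Python of what a verifying party might do: check a received measurement against trusted reference values and, if it matches, sign a simple verification structure. The names (TRUSTED_MEASUREMENTS, VERIFIER_KEY, verify_and_issue) and the use of an HMAC in place of a real asymmetric key and X.509 certificate are illustrative assumptions, not any particular vendor's API.

import hashlib
import hmac
import json

# Hypothetical reference values the verifier trusts (stand-ins for real
# launch measurements published by a workload owner).
TRUSTED_MEASUREMENTS = {
    "sha384:2f0c...example": {"workload": "demo-app", "tee": "example-TEE"},
}

# Stand-in for the verifier's (CA's) signing key; a real deployment would
# use an asymmetric key pair and issue an X.509 certificate instead.
VERIFIER_KEY = b"verifier-signing-key-placeholder"

def verify_and_issue(measurement: str) -> dict | None:
    """Check an attestation measurement against trusted reference values
    and, on success, return a signed 'attestation verification'."""
    metadata = TRUSTED_MEASUREMENTS.get(measurement)
    if metadata is None:
        return None  # unknown measurement: refuse to issue a certificate
    body = {"measurement": measurement, **metadata}
    payload = json.dumps(body, sort_keys=True).encode()
    signature = hmac.new(VERIFIER_KEY, payload, hashlib.sha256).hexdigest()
    return {"body": body, "signature": signature}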

Taking certificates to the next level

We tend to think of certificates in the standard "client-server" world, with a webserver providing a certificate to a browser, which (hopefully!) checks it and then decides whether to continue with creating a secured connection to the webserver. While multiple clients may separately consume the same certificate, each consumption of a certificate is still a one-to-one interaction between the client and the server. It's now time to challenge that assumption and consider what happens when we look past this model.

Sharing attestation verification

First, we need to go back to thinking about how certificates based on attestation verification are typically used and consumed within Confidential Computing. The "owner" of the application - the one deploying it - is the first and most obvious recipient to benefit from the assurances that attestation provides. They can be confident that the application that has been loaded into the TEE, and any other data that has been measured as part of the attestation process, is what they expect. But why should we stop here? Any stakeholder with the ability to verify the attestation measurement can share these assurances. This means that the certificate can be shared with anyone who wants to know that the output of a workload (in this case, the application and data deployed at the time that the attestation measurement was created) originated from that workload, that it was protected from changes during execution (integrity protection), and that the owner of the system executing it could not see the workload during execution, where relevant (confidentiality protection).

These assurances around output are extremely useful building blocks, but there's actually a stage before full execution of the workload where we can also make use of these assurances. Let's say that we create a TEE and load an application into it. This application might not be the entire workload that we want to execute, but an application loader. We then arrange for an attestation measurement to be created, and then verify it, creating a certificate as we've already described. In this case, we've not actually started executing our workload, but we have a certificate: what can we use it for?

The answer is that we can share such a certificate with stakeholders who are considering loading their application(s) and/or data into the TEE. This provides assurances that any sensitive applications or data being loaded for execution are only going into an environment that is appropriately trusted. We could do this in several steps - with attestation verifications in between, if we wish - checking that each preceding stage has been correctly loaded and verified. And, of course, this doesn't just apply to the owner, but to anyone supplying inputs to the workload, and anyone receiving outputs from it as well.
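
A minimal sketch of that decision point follows, continuing the illustrative certificate format from the recap sketch: a data owner checks the loader's certificate before releasing a sensitive data set into the TEE. The expected measurement value and function names are assumptions made for the example, not a description of any real deployment.

import hashlib
import hmac
import json

# Shared with the verifier sketch above; purely illustrative values.
VERIFIER_KEY = b"verifier-signing-key-placeholder"
EXPECTED_LOADER_MEASUREMENT = "sha384:2f0c...example"

def certificate_is_valid(cert: dict) -> bool:
    """Check the verifier's signature over the certificate body."""
    payload = json.dumps(cert["body"], sort_keys=True).encode()
    expected = hmac.new(VERIFIER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, cert["signature"])

def release_data_if_trusted(cert: dict, data_set: bytes) -> bytes | None:
    """Only hand the data set over if the loader's certificate checks out."""
    if not certificate_is_valid(cert):
        return None
    if cert["body"]["measurement"] != EXPECTED_LOADER_MEASUREMENT:
        return None
    # In practice the data would be encrypted to a key bound to the TEE;
    # returning it directly here keeps the sketch short.
    return data_set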

Managing Complexity

The major issue with multi-step verification is the increasing complexity: managing many attestation measurements and complex verification decisions (e.g., tracking acceptable combinations of applications and workloads) becomes challenging. The good news is that Super Protocol's platform addresses these issues, offering a simpler approach that provides strong cryptographic assurances with manageable deployment.

Let's look at some of the use cases that are enabled by these techniques.

AI

Securing AI with Attestation

I'm not sure if you've noticed (you really have), but you can't move for AI in the news these days. While there's a lot of hype, there's also a lot to be excited about in the field, but one of the questions that comes up a lot when you're talking to or otherwise querying an AI agent or engine is: "how can I be sure what I'm talking to?"

When we're talking to an actual person, we might want to do some checks on them before having a conversation and expecting expert answers, but the same is remarkably difficult to do with AI. This is because the application - the agent - with which you're interacting is largely the product of its previous history (very much like a human in this regard).

AI Provenance Example: Shakespeare Bot

When we interact with an AI agent, we're talking to an inference engine. This is the product of:

  • An initial application: a training model.
  • Being trained on one or more data sets.

Imagine an AI designed to identify Shakespearean text. We'd set up a training model and feed it Shakespeare's plays as data sets. The resulting inference model should correctly identify a Shakespeare sonnet.

But what if:

  • The training data was Marlowe's plays instead?
  • The training model was designed for aircraft manuals, not literature?

In either case, the answer ("no") would be misleading if we believed the training to be correct. We lack provenance information for the inference engine.

The importance of provenance is more extreme if you consider some of the other uses to which AI is being put. In the example of aircraft technical manuals, what if the data sets were for an Airbus A321neo, rather than a Boeing 737 Max 9? Or if a bank is using AI to decide whether to invest in a company and somebody substitutes an alternative risk model into the training engine, leading to results that would benefit a malicious third party, rather than the bank? While the Shakespeare example may not be too concerning for most people (though who wouldn't like to discover a new Shakespeare sonnet?), there are many others where provenance really matters.

Attestation as a Solution

All of these AI provenance problems can be addressed by adding verifiable provenance data at various stages of the process. Certificates based on attestation verification provide a perfect technology to achieve this.
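
As a hedged illustration of what such provenance data might look like, the Python sketch below binds digests of the training model and data sets to the attestation certificate covering the TEE in which training ran. The record format and function names are invented for this example; a real system would define its own schema.

import hashlib

def digest(data: bytes) -> str:
    """Content digest used to identify a model or data set."""
    return "sha256:" + hashlib.sha256(data).hexdigest()

def provenance_record(training_model: bytes, data_sets: list[bytes],
                      attestation_cert: dict) -> dict:
    """Bind the identities of the training model and data sets to the
    certificate covering the TEE in which training ran."""
    return {
        "training_model": digest(training_model),
        "data_sets": [digest(d) for d in data_sets],
        "attestation_certificate": attestation_cert,  # from the verifier
    }

# A relying party can later recompute the digests of the inputs it was told
# were used (e.g. Shakespeare's plays, not Marlowe's) and compare them with
# this record before trusting the inference engine's answers.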

More use cases - adding time and stakeholders

AI, though top of mind for many, is far from the only use case for certificates based on attestation verification. We can think of two types of context where these certificates can be particularly useful:

  • Systems with multiple process stages, where trust needs to be re-established or built on over time.
  • Systems involving multiple stakeholders with weak or non-existent trust relationships.

Many use cases will combine these contexts.

Multiple process stages

We can think of the AI use case we've already looked at as an example of the first of these contexts. There are multiple process stages - loading of the training model; addition of data sets; creation of inference engine; querying of inference engine and corresponding responses. In order to build trust through the process, we may choose to "layer" attestations by adding certificates derived from previous attestation verifications as part of a certificate chain with more recent attestation verifications. This allows them to be linked together, building a chain of trust through the various parts of the process. In other use cases, it may be sufficient to create an entirely new certificate, excluding previous certificates to meet the requirements of the trust models within the system.
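
The sketch below illustrates one way such layering could work, with each stage's certificate embedding a digest of the previous stage's certificate so that the stages can be walked back as a chain of trust. The stage names and structure are assumptions for the example, not a prescribed format.

import hashlib
import json

def cert_digest(cert: dict) -> str:
    """Stable digest over a certificate's contents."""
    payload = json.dumps(cert, sort_keys=True).encode()
    return "sha256:" + hashlib.sha256(payload).hexdigest()

def layer_certificate(new_verification: dict, previous_cert: dict | None) -> dict:
    """Link a fresh attestation verification to the previous stage's
    certificate, building the chain of trust stage by stage."""
    cert = dict(new_verification)
    cert["previous"] = cert_digest(previous_cert) if previous_cert else None
    return cert

# Example: chaining loader -> training -> inference stages.
loader_cert = layer_certificate({"stage": "loader", "measurement": "sha384:aa..."}, None)
training_cert = layer_certificate({"stage": "training", "measurement": "sha384:bb..."}, loader_cert)
inference_cert = layer_certificate({"stage": "inference", "measurement": "sha384:cc..."}, training_cert)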

Multiple stakeholders

A good example of the second type of context is Web3 applications (dapps). When deploying dapps, there are typically multiple stakeholders with weak or non-existent trust relationships.

Typical Web3 Stakeholders

  • The owner/developer of the dapp code
  • The owner(s) of any data sets to be added to the dapp
  • The owner of the hardware on which the dapp will execute
  • The owner of any storage to be used by the dapp
  • The owner of any network resources to be used by the dapp
  • The owner of any oracles to be consulted during the execution of the dapp
  • The person or entity paying for the dapp to be run

Note: Two or more of these roles may be combined, but pre-existing trust relationships are not guaranteed.

The Super Protocol platform provides mechanisms to allow these parties to come together using certificates based on attestation verification to create sufficient and appropriate levels of trust to allow the execution of the dapp, delivery of results and payment.
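
One way to picture this, purely as an illustration rather than a description of Super Protocol's mechanism, is each stakeholder declaring which components must be attestation-verified before it will participate, with execution proceeding only when every requirement is met. The stakeholder names and policy structure below are invented for the sketch.

# Each stakeholder declares which components it needs to see verified
# before it will participate; the names below are illustrative only.
STAKEHOLDER_REQUIREMENTS = {
    "dapp_developer": {"dapp_code"},
    "data_owner":     {"dapp_code", "loader"},
    "hardware_owner": {"loader"},
}

def all_stakeholders_satisfied(verified_components: set[str]) -> bool:
    """Execution proceeds only when every stakeholder's required
    components appear among the attestation-verified components."""
    return all(required <= verified_components
               for required in STAKEHOLDER_REQUIREMENTS.values())

# e.g. all_stakeholders_satisfied({"dapp_code", "loader"}) -> True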

Multiple stages, multiple stakeholders

There will always also be use cases where there are multiple process stages and multiple stakeholders. One very good example of this is software or hardware supply chains. Supply chains are, by their very nature, spread across time: multiple components (software, hardware or a combination) are added through a process which outputs a finished product at the end. These components are likely to be supplied by a variety of entities, and while some of them will have fairly close trust relationships - a steel mill and a girder manufacturer, for instance - some of them may not, and may even be competitors. Using Trusted Execution Environments to process information at each stage - to create SBOMs (software bills of materials) or, for physical components, BOMs (bills of materials) - allows attestation verification to be used at each stage. This in turn allows the various parties to have strong assurances that their information is not being misused, and for the steps in the process to be chained together, providing a chain of trust both across time and across the various parties.
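
Complementing the chain-building sketch earlier, the following sketch shows how a relying party might walk such a chain of per-stage certificates (oldest first) and confirm that each link references the stage before it. The certificate format is the same illustrative one used above and stands in for whatever format a real supply-chain deployment would adopt.

import hashlib
import json

def cert_digest(cert: dict) -> str:
    """Stable digest over a certificate's contents."""
    payload = json.dumps(cert, sort_keys=True).encode()
    return "sha256:" + hashlib.sha256(payload).hexdigest()

def verify_supply_chain(chain: list[dict]) -> bool:
    """Walk per-stage certificates and confirm each one correctly
    references the certificate of the stage before it."""
    previous = None
    for cert in chain:
        expected = cert_digest(previous) if previous else None
        if cert.get("previous") != expected:
            return False  # broken link: a stage was altered or skipped
        previous = cert
    return True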

Summary

Super Protocol's services revolve around Confidential Computing, and make the most of attestation, a process which, as we have seen in this article, can be applied in some very complex use cases, providing great benefits to a wide variety of stakeholders.

Next Steps

In the next article in this series, How Super Protocol Uses Attestation, we will examine how Super Protocol's approach leverages the capabilities associated with TEEs and attestation to simplify and maximise these benefits for real-world applications.