Confidential Computing, Part 2: The Technical Bits

Part 1 of this series on Confidential Computing introduced the basic concepts and benefits of this emerging architecture for cloud computing. In this segment, we’ll dive deeper into the inner workings of this architecture and take a peek at some of the implementation challenges.  

Confidential Computing aims to fundamentally change how data security in cloud computing is done. When correctly deployed in a private cloud setting, Confidential Computing can prevent accidental data leaks and protect critical key material in novel ways. It safeguards against malware unintentionally introduced by third-party applications as well as malicious acts, such as flawed software deliberately introduced by compromised insiders. As a result, even in dedicated facilities, Confidential Computing practices offer strong protection for key managers and identity management systems. It also provides secure container management, with hands-free protection of individual container keys and data.

In a public or multi-cloud setting, these same benefits prevail and extend to third-party environments. With Confidential Computing, multi-cloud security assurance is technically grounded and no longer depends on untainted software, goodwill and flawless execution by cloud facility staff.

Technical foundations  

A robust Confidential Computing environment requires two things: a platform that provides a trusted execution environment, and programs specifically designed to run protected within it. The trusted execution environment must provide isolation, program identity, secure key management and a critical trust mechanism called attestation, which enables remote verification of security properties.

A well-written program can leverage Confidential Computing primitives to:  

  • Protect secrets 
  • Restrict sensitive communications to other verified Confidential Computing programs 
  • Encrypt data in transmission, in use and in storage

The underlying Confidential Computing platform hardware provides principled mechanisms that enable a protected program to safeguard its secrets, its processing and its data.

Together, the platform and program ensure that private data can be tightly controlled and that it is never exposed in an unencrypted form – even when data is in use – except to programs that have been expressly authorized to access that data.  

Platform capability requirements 

A robust Confidential Computing platform provides four essential capabilities:  

  1. Isolation: The ability of a platform to load a designated program (application, enclave or virtual machine) into memory and prevent any other software on that computer from modifying or reading the program's code or data, including registers and buses exposed to other bus masters on the computer.
  2. Measurement: The ability of a platform, once a program has been isolated, to measure the entire program image (including initialized data). The system takes a cryptographic hash of the program code and data along with any boot parameters that may affect program behavior. This measurement is the same on any machine and is unforgeable. Changing a single bit of code or data changes the measurement in a way that is computationally infeasible to spoof. The measurement serves as a universal identifier for the program.  
  3. Secret storage: Once a program is isolated, the platform can, at the request of the program, accept secrets (typically cryptographic keys) and store them in a way that allows them to be retrieved only by a program with the same measurement, on the same machine, when it is isolated. This capability, called sealing, uses hardware encryption keys to encrypt and integrity-protect the measurement of the requesting program together with the secret offered for protection, returning the resulting encrypted blob. To recover the secret (unsealing), the program hands the blob back to the platform for decryption and verification. If the measurement in the blob matches the measurement of the running program, the platform returns the encapsulated secret(s) to the program. (A short sketch of measurement and sealing follows this list.)
  4. Attestation: This mechanism allows a program to establish a trust relationship with another program over an insecure communications channel. An attestation-capable platform accepts a statement from the program, called “what the program says,” and signs it using a private key known only to the platform. The signed statement (the attestation), together with the measurement, platform details and “what was said,” is what establishes the trust relationship. Any party can rely on this signed statement: it is a guarantee that the isolated program with the indicated measurement, on the indicated platform, supplied the “what was said.” A program typically uses this to name a public key (whose corresponding private key is known only to the isolated, measured program), which can then be used to authenticate the identified program. This key can be used, for example, to open a mutually authenticated, encrypted, integrity-protected channel between two certified programs.
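
To make measurement and sealing more concrete, here is a minimal, platform-agnostic sketch in Python. It is not the interface of any real TEE: the program image, boot parameters and hardware sealing key are stand-ins for what actual hardware would supply, and the example assumes the third-party cryptography package for AES-GCM.

```python
# Conceptual sketch of measurement and sealing; not the API of any real TEE.
# The "hardware key" stands in for a per-machine key fused into the platform
# and never visible to software. Assumes: pip install cryptography
import hashlib
import os

from cryptography.exceptions import InvalidTag
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

HARDWARE_SEALING_KEY = os.urandom(32)  # stand-in for the platform's sealing key


def measure(program_image: bytes, boot_params: bytes) -> bytes:
    """Measurement: a cryptographic hash over code, initialized data and boot
    parameters. Identical inputs give the same measurement on any machine."""
    return hashlib.sha256(program_image + boot_params).digest()


def seal(measurement: bytes, secret: bytes) -> bytes:
    """Sealing: encrypt and integrity-protect the secret, binding it to the
    requesting program's measurement (used here as authenticated data)."""
    nonce = os.urandom(12)
    return nonce + AESGCM(HARDWARE_SEALING_KEY).encrypt(nonce, secret, measurement)


def unseal(measurement: bytes, blob: bytes) -> bytes:
    """Unsealing succeeds only on the same machine (same hardware key) and only
    for a program with the same measurement; otherwise decryption fails."""
    nonce, ciphertext = blob[:12], blob[12:]
    return AESGCM(HARDWARE_SEALING_KEY).decrypt(nonce, ciphertext, measurement)


# A program seals a key; a modified program (different measurement) cannot recover it.
m = measure(b"program code + initialized data", b"boot parameters")
blob = seal(m, b"application master key")
assert unseal(m, blob) == b"application master key"

m_tampered = measure(b"program code + initialized data (modified)", b"boot parameters")
try:
    unseal(m_tampered, blob)
except InvalidTag:
    print("unsealing rejected: measurement does not match")
```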

For Confidential Computing to function, the program must employ Confidential Computing practices and have access to cryptographic-quality random numbers, I/O mechanisms (to transmit and receive data across the isolation boundary) and customary threading and thread-synchronization primitives.
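
As a rough sketch of those facilities (not tied to any TEE SDK, and with the isolation boundary only simulated), the Python fragment below draws cryptographic-quality randomness, encrypts data before it crosses the notional boundary, and guards shared state with an ordinary lock; it assumes the cryptography package.

```python
# Illustrative only: cryptographic-quality randomness, encrypted I/O across a
# notional isolation boundary, and customary thread synchronization.
# Assumes: pip install cryptography
import secrets
import threading

from cryptography.hazmat.primitives.ciphers.aead import AESGCM

channel_key = secrets.token_bytes(32)   # cryptographic-quality random key
results_lock = threading.Lock()         # customary synchronization primitive
outbound = []

def send_outside(plaintext: bytes) -> bytes:
    """Data leaving the isolation boundary is encrypted first."""
    nonce = secrets.token_bytes(12)
    return nonce + AESGCM(channel_key).encrypt(nonce, plaintext, None)

def record(result: bytes) -> None:
    with results_lock:                  # protect shared state between threads
        outbound.append(send_outside(result))

threads = [threading.Thread(target=record, args=(b"partial result",)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(f"{len(outbound)} encrypted records ready to leave the isolated program")
```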

Most people understand how isolation and secrets contribute to secure computing. Measurement and attestation are less well understood. In concert, measurement and attestation solve the problem of how one can establish trust in both a remote hardware platform and the software running on that platform. The notion of trust here does not refer to the intentions of software authors; rather, trust refers to the identity of the software that is running on the system and the associated guarantees that the software is isolated, has not been tampered with, and has the verified ability to protect the data it processes in the face of the strong threat model mentioned above (i.e., protection from malware and insider attacks).

In Confidential Computing, trust negotiation establishes whether the components of a larger system conform to the desired security requirements. Trust negotiation begins with a set of claims. Each claim is signed by a key and hence can be verified. Confidential Computing adds the attestation claim mentioned above. Upon receipt of a set of signed claims, a verification procedure examines the submission and compares it against policy to determine whether the submitting entity should be trusted. The policy, created by the deploying party, defines which measurements and which hardware are trusted and specifies the permissions earned by verified programs. (A minimal sketch of this verification flow follows the list below.) Once this procedure is completed and the claims are verified, the recipient knows that:

  • Any statement that verifies under the public key (that is, one signed with the corresponding private key) can only come from the indicated program.
  • The program has not been modified and no other software on the platform can read or write in its address space.  
  • The program is isolated.  
  • The program is trusted under the security policy.  
  • Secure communications protected using protocols (like TLS) employing the indicated public key are confidential and integrity-protected.
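
To tie the attestation mechanism and these trust-negotiation steps together, here is a minimal sketch in Python. The platform key, statement format and policy check are illustrative assumptions, not the interface of any particular platform or of the Certifier Framework; it uses the cryptography package for Ed25519 signatures.

```python
# Conceptual sketch of attestation and policy-based verification.
# The platform attestation key is a stand-in for a key provisioned by the
# hardware vendor; the statement format and policy are illustrative only.
# Assumes: pip install cryptography
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Platform side: an attestation key known only to the platform.
platform_key = Ed25519PrivateKey.generate()
platform_pub = platform_key.public_key()

# Isolated program: generates a keypair and asks the platform to attest to
# its public key -- the "what was said" part of the statement.
program_key = Ed25519PrivateKey.generate()
what_was_said = program_key.public_key().public_bytes(
    serialization.Encoding.Raw, serialization.PublicFormat.Raw).hex()
measurement = hashlib.sha256(b"program code + initialized data").hexdigest()

statement = f"platform-model-X|{measurement}|{what_was_said}".encode()
attestation = platform_key.sign(statement)  # signed with the platform key

# Relying party: the policy, set by the deployer, lists trusted measurements.
TRUSTED_MEASUREMENTS = {measurement}

def verify(stmt: bytes, signature: bytes) -> str:
    """Verify the attestation and apply policy; return the program's attested key."""
    platform_pub.verify(signature, stmt)       # raises InvalidSignature if forged
    _platform_id, m, said = stmt.decode().split("|")
    if m not in TRUSTED_MEASUREMENTS:
        raise ValueError("measurement is not trusted by policy")
    return said

try:
    attested_key = verify(statement, attestation)
    print("trusted: statements signed by", attested_key[:16], "... come from that program")
except (InvalidSignature, ValueError) as err:
    print("rejected:", err)
```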

In our discussion, the notion of a program is deliberately left loose because it depends on the platform. The program could be an application enclave (as in SGX), which consists of isolated ring 3 code; an entire encrypted virtual machine; or an application within an encrypted virtual machine that has access to the Confidential Computing primitives.

Enabling new workloads and use cases 

Confidential Computing supports a new class of privacy-preserving data economy workloads. These workloads require principled security when a program runs on a computer that is not under the physical control of the data provider, who must rely on the capabilities of Confidential Computing for both security and granular control over the purposes for which their data can be used. The data economy refers to the practice of deriving value and insight from datasets combined from multiple sources, ideally without exposing the private details of those datasets. In data economy workloads, the ability to measure and attest programs means that sensitive data from many applications can be processed under rules established by each data owner. Each data owner can inspect the attested program to be assured that their privacy requirements will be strictly enforced.
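
As a rough illustration of how this might look, the sketch below has each (hypothetical) data owner gate the release of its data on the attested measurement of the combining program; the owner names and policy shape are invented for the example, and no real framework or API is implied.

```python
# Illustrative only: each data owner releases data solely to a program whose
# attested measurement appears in that owner's own policy. All names are
# hypothetical.
import hashlib

analytics_image = b"audited joint-analytics program image"
measurement = hashlib.sha256(analytics_image).hexdigest()

# Each owner independently decides which program measurements may see its data.
owner_policies = {
    "hospital_a": {measurement},
    "hospital_b": {measurement},
    "insurer_c": set(),              # this owner has not approved the program
}

def contribute(owner: str, attested_measurement: str, data: bytes):
    """Release data only if the attested program satisfies the owner's policy."""
    if attested_measurement in owner_policies[owner]:
        return data                  # released only into the attested program
    return None

released = {owner: contribute(owner, measurement, f"{owner} records".encode())
            for owner in owner_policies}
print({owner: "released" if data else "withheld" for owner, data in released.items()})
```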

Sovereign clouds anywhere and everywhere

And, of course, Confidential Computing allows an organization to elastically provide secure distributed services (caching, key management, auditing) across a vast network of machines owned and operated by many parties – a multi-cloud architecture. Confidential Computing can also be employed to meet geographic and governmental data privacy mandates by building technically grounded sovereign cloud environments rather than relying on geographically constrained cloud environments.

Case closed: Confidential Computing provides next-level data security 

The value and potential of Confidential Computing are clear. But having a technology is not the same as having frameworks and tools that let you use it easily and safely. In the next installment, we’ll describe the nuts and bolts of these important technologies and how the newly released open source Certifier Framework helps you write (or convert) applications quickly and safely, as well as manage scalable deployment of these applications.

Stay tuned to the Open Source Blog and follow us on Twitter for more deep dives into the world of open source contributing.
