Trusted Execution Environment

Full Title or Meme

For any digital Entity to make trustworthy statements, it must have Hardware Protection of the cryptographic means used to sign any statement that it issues and to evaluate any command that it receives.

Context

Early History

The origin of computers built with any sort of trusted execution is an IBM 7094 with two separate memory banks and a mode switch, designed and built at MIT to support CTSS, the Compatible Time-Sharing System. One bank ran kernel code and the other user code, although those were not the terms used at the time. The following quote describes the changes made for MIT.[1]

  • The hardware RPQs that made CTSS run on MIT's 7094s added an interval timer (B/M 570220) and memory boundary and relocation registers (RPQ E007291) to the 7094 processor. In addition, the MIT machines had two 32K core [36 bit word] memory banks instead of one (RPQ E02120). Core bank A held the CTSS supervisor, and B Core was used for user programs. (RPQ stood for Request Price Quotation. IBM, in those days, would engineer and sell features not part of the standard product, for a price.) More importantly, the RPQ made the 7094 a two-mode machine. When the machine was running a program in B-core, many instructions were forbidden: executing them caused the machine to trap and take its next instruction from A-core. In particular, I/O instructions were forbidden, as well as attempts to switch core banks.

The following quote describes the method used to switch between user (slave) and kernel (master) modes in the GE 645,[2] which was designed by MIT and GE specifically to take over the work of CTSS. The GE 645 was later replaced by the Honeywell 6180, which had modest success running the Multics operating system.

  • Because it was felt desirable to make it possible to branch easily between various programs including between slave and master programs, a certain degree of insurance has to be built into the hardware to guarantee that spurious branches would not take place into the middle of master mode programs from slave programs. As a consequence, a master mode procedure when viewed from a slave mode procedure appears to be a segment which can neither be written nor read. Further, the only method of addressing this segment that is permitted is a branch to the 0th location. Any attempt to get at other locations by branch, execute, return or any other instructions will result in an improper procedure fault causing an appropriate interrupt.

IBM did not want to be left out of the time-sharing market and so built the 360/67 to support it. The machine did not fit into IBM's normal marketing program and was not successful in the way that the rest of the 360 line of computers was.[3]

A1 classification

The Orange Book[4] defines Class (A1): Verified Design as meeting the security requirements of the highest of the other classes, with the distinguishing feature being "the analysis derived from formal design specification and verification techniques and the resulting high degree of assurance that the TCB is correctly implemented."

Co-processors

Co-processors, or management processors, are not a new idea. Even the Data General Eclipse minicomputer, released in 1974, came with a management co-processor on the motherboard. What is most interesting is that the Eclipse itself served as the Maintenance and Control Unit (MCU) for the Cray-1 supercomputer. Layers upon layers.

Security Boundaries

  • The original security boundary was the one between the core operating system (here called the kernel of the operating system) and the application programs. This was effectuated in the CTSS version of the IBM 7094 and in the GE 645 by creating boundaries between different memory spaces that could only be crossed through special traps from the application area into the kernel area (a minimal sketch of such a trap appears after this list). This was an absolute boundary in the beginning, but various performance additions to the hardware and its operating systems (like direct access to memory from optional hardware components) have made the boundary less secure. Attempts to re-secure this boundary have consistently been found to be inadequate.
  • Virtual Machine boundaries have been available since virtualization was introduced on the IBM 360. These virtual images have good security boundaries between each other, but very little separation from the host operating system. Still, good security features have been built on them in Windows and Unix-based machines. Some attacks have been created directly against the underlying computers, but those are limited to highly motivated actors.
  • The co-processor boundary is the best available within a single machine and is recommended whenever that option is available. General-purpose secure areas are available with ARM TrustZone and Intel Software Guard Extensions (SGX), but their security, particularly against timing attacks, is not well known.[5] The TPM 2.0 design resides in one of these areas and has had good threat analyses developed; in particular, timing attacks and side-channel attacks are mitigated with the TPM. So the TPM 2.0 approach is the best-known security boundary available today for commonly available processors.[6]
  • Protected memory needs to be available for the secure storage of secrets. That requires some level of hardware support, which may include (1) a completely separate secure memory, (2) hardware protection of some main-memory partitions, which can still be broken, as evidenced by the recent exploits of Intel memory mapping,[7] and (3) inline encryption of secure data in main memory, as implemented in the Intel Data Security Operation chip in 1995.
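
The kernel/application boundary described in the first bullet is still the model on modern hardware: user code enters the kernel only through a trap that the hardware routes to a fixed entry point. A minimal sketch on Linux, using the generic syscall(2) wrapper to make that trap explicit (the write to file descriptor 1 is just an illustrative request):

    #ifndef _GNU_SOURCE
    #define _GNU_SOURCE
    #endif
    #include <unistd.h>
    #include <sys/syscall.h>
    #include <cstring>

    int main()
    {
        const char msg[] = "hello from user mode\n";

        // User code cannot branch into kernel code; it can only raise a trap
        // (here the syscall instruction behind the syscall(2) wrapper) and let
        // the kernel validate and service the request at its own entry point.
        long written = syscall(SYS_write, 1, msg, std::strlen(msg));

        return written == static_cast<long>(std::strlen(msg)) ? 0 : 1;
    }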

Solutions Today

The space of Trusted Execution Environments, Hardware Security Modules (HSMs) and TPMs is merging into the broader category of Hardware-Enabled Security, as reported by the Center for Internet Security.[8] This is already very widespread in servers and is increasingly common in personal mobile devices.

Portable Tokens

The most common TEE of all is the chip or smart card, which has a serial interface via the 6-8 contacts visible on the face of the card. These are an excellent and portable way for a user to carry a TEE in their wallet. Many Enterprises use them today as authentication factors for employee access. Unfortunately the interface to the user's other devices, like Smart Phones, is difficult, and the user credential problem was never solved in a way that ordinary users could accept.

FIDO U2F (and now Web Authentication) devices are simple key-chain fobs that connect to computers via USB. With the advent of Smart Phones with USB-C connectors, there is now an easy way to connect these to Smart Phones as well. It may be that the time for user tokens has passed, but time will tell whether these gain any traction.

Common TEE Platform APIs

The GlobalPlatform standard for a Trusted Execution Environment (TEE) is designed to reside alongside the normal smartphone or other Mobile Device Rich Execution Environment (REE), where normal applications execute, and to provide a safe area of the Mobile Device to protect assets and execute trusted code. At the highest level, a TEE is an environment where the following are true: (a) any code executing inside the TEE is trusted in authenticity and integrity; (b) the other assets are also protected in confidentiality; (c) the TEE resists, by design, all known remote and software attacks, as well as a set of external hardware attacks; and (d) both assets and code are protected from unauthorized tracing and control through debug and test features.
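
From the Rich Execution Environment side, the GlobalPlatform TEE Client API is the standard way to reach a trusted application running inside such a TEE. A minimal sketch in C++, assuming an implementation such as OP-TEE that ships the tee_client_api.h header; the trusted-application UUID and command ID below are hypothetical placeholders:

    #include <tee_client_api.h>   // GlobalPlatform TEE Client API (e.g. from OP-TEE)
    #include <cstdint>
    #include <cstdio>

    int main()
    {
        TEEC_Context ctx;
        TEEC_Session session;
        TEEC_Operation op{};
        uint32_t origin = 0;

        // Hypothetical UUID of a trusted application installed in the TEE.
        TEEC_UUID ta_uuid = { 0x12345678, 0x1234, 0x1234,
                              { 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08 } };

        // Connect to the TEE on this device.
        if (TEEC_InitializeContext(nullptr, &ctx) != TEEC_SUCCESS)
            return 1;

        // Open a session with the trusted application.
        if (TEEC_OpenSession(&ctx, &session, &ta_uuid, TEEC_LOGIN_PUBLIC,
                             nullptr, nullptr, &origin) != TEEC_SUCCESS) {
            TEEC_FinalizeContext(&ctx);
            return 1;
        }

        // Pass one value in and get one back; the work runs inside the TEE,
        // isolated from the Rich Execution Environment.
        op.paramTypes = TEEC_PARAM_TYPES(TEEC_VALUE_INOUT, TEEC_NONE,
                                         TEEC_NONE, TEEC_NONE);
        op.params[0].value.a = 42;

        const uint32_t CMD_EXAMPLE = 0;   // hypothetical command ID
        if (TEEC_InvokeCommand(&session, CMD_EXAMPLE, &op, &origin) == TEEC_SUCCESS)
            std::printf("Trusted application returned %u\n",
                        static_cast<unsigned>(op.params[0].value.a));

        TEEC_CloseSession(&session);
        TEEC_FinalizeContext(&ctx);
        return 0;
    }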

Access APIs

  • Before any of the functionality of the Trusted Execution Environment or Trusted Computing can have a measurable effect on user security, that functionality must be exposed to the programmers who code the applications that users want. There are a few APIs, like OpenSSL, which are accessible to programmers, but support for protecting the keys used by application programs is still fragmented (a sketch of one approach appears after this list). The following are the most important APIs addressing that issue.
  • With the completion of the Secure Android effort, hosted by the NSA until 2018, it has been possible to use ARM hardware to enable a Trusted Execution Environment. There have been various forces arrayed against putting so much security in the hands of users.[9] Recently Apple and others decided that it was in their interest to expose these capabilities to users in spite of growing pressure from the US government against providing strong encryption to consumers.[10] The challenge here is that reducing Security Risk for consumers will create Security Risk for government security services.
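
As an illustration of that fragmentation, OpenSSL's generic EVP signing interface can only reach a hardware-protected key through an engine (or, in OpenSSL 3.x, a provider) supplied separately. A rough C++ sketch, assuming the third-party tpm2-tss engine (id "tpm2tss") is installed and that 0x81000001 names a persistent key already created in the TPM:

    #include <openssl/engine.h>
    #include <openssl/evp.h>

    int main()
    {
        ENGINE_load_builtin_engines();

        // "tpm2tss" and the persistent key handle are assumptions; any other
        // hardware-backed engine would be wired up the same way.
        ENGINE *e = ENGINE_by_id("tpm2tss");
        if (!e || !ENGINE_init(e))
            return 1;

        // The private key stays in the TPM; OpenSSL only holds a reference.
        EVP_PKEY *pkey = ENGINE_load_private_key(e, "0x81000001", nullptr, nullptr);
        if (!pkey)
            return 1;

        const unsigned char msg[] = "statement to be signed";
        unsigned char sig[512];
        size_t siglen = sizeof(sig);

        // Sign through the generic EVP interface; the hardware does the work.
        EVP_MD_CTX *ctx = EVP_MD_CTX_new();
        int ok = EVP_DigestSignInit(ctx, nullptr, EVP_sha256(), nullptr, pkey) == 1 &&
                 EVP_DigestSign(ctx, sig, &siglen, msg, sizeof(msg) - 1) == 1;

        EVP_MD_CTX_free(ctx);
        EVP_PKEY_free(pkey);
        ENGINE_finish(e);
        ENGINE_free(e);
        return ok ? 0 : 1;
    }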

Windows

  • Fairly early in the adoption of Public Key Cryptography, Microsoft licensed the RSA API library and exposed it to programmers as part of the Crypto API (CAPI). This had the advantage of abstracting all of the underlying key protection methods and so was quite successful. Unfortunately it was strongly oriented towards the Public Key Infrastructure (PKI) built around the X.509 Certificate and its binding between the user and the private key. This clumsy design based on bindings has frustrated users and programmers for decades. It was partially corrected by the Cryptography Next Generation (CNG) API, which was more closely bound to the important part, the public key itself (a sketch using CNG's TPM-backed Platform Crypto Provider appears after this list).
  • While the Windows NT 4 operating system had most of the security features that programmers are familiar with today, a major hole was opened when the old Windows graphics (GDI) and user modules were imported into the kernel with very few security enhancements and with large chunks of memory shared between user and kernel mode. Attacks against that clumsy arrangement continue today, including one that was released into the wild before it was patched.[11] This vulnerability in the most popular operating system for computers today should be an indication that strong security boundaries are important and should not be left up to the computer OS vendors.
  • When the user sets up Windows Hello on their machine, it generates a new public–private key pair on the device. The Trusted Platform Module (TPM) generates and protects this private key; if the device does not have a TPM chip, the private key is encrypted and protected by software instead. In addition, TPM-enabled devices generate a block of data that can be used to attest that a key is bound to the TPM. This attestation information can be used in your solution, for example to decide whether the user is granted a different authorization level. To enable Windows Hello on a device, the user must have either their Azure Active Directory account or Microsoft Account connected in Windows settings.
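
Both the CNG and the Windows Hello bullets come down to the same capability: asking the platform for a key that is created inside the TPM and never exported. A minimal C++ sketch using CNG's Platform Crypto Provider, which requires a machine with a TPM; the key name and the all-zero digest are hypothetical placeholders:

    #include <windows.h>
    #include <ncrypt.h>
    #pragma comment(lib, "ncrypt.lib")

    int main()
    {
        NCRYPT_PROV_HANDLE prov = 0;
        NCRYPT_KEY_HANDLE key = 0;

        // Open the CNG key storage provider that keeps its keys in the TPM.
        if (NCryptOpenStorageProvider(&prov, MS_PLATFORM_CRYPTO_PROVIDER, 0) != ERROR_SUCCESS)
            return 1;

        // Create and finalize a persisted ECDSA P-256 key; the private half
        // is generated by, and stays inside, the TPM.
        if (NCryptCreatePersistedKey(prov, &key, NCRYPT_ECDSA_P256_ALGORITHM,
                                     L"ExampleTpmKey", 0, 0) == ERROR_SUCCESS &&
            NCryptFinalizeKey(key, 0) == ERROR_SUCCESS)
        {
            BYTE hash[32] = {};          // placeholder for a real SHA-256 digest
            BYTE sig[256];
            DWORD sigLen = 0;

            // The signature is computed by the TPM-resident key.
            NCryptSignHash(key, nullptr, hash, sizeof(hash),
                           sig, sizeof(sig), &sigLen, 0);
        }

        if (key)  NCryptFreeObject(key);
        if (prov) NCryptFreeObject(prov);
        return 0;
    }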

This table shows the security features available in recent versions of Windows UWP.

Build      | Version | Security Features                | Recent News
10.0.18362 | 1903    | twist                            |
10.0.17763 | 1809    |                                  | problems with roll-out deleting user files
10.0.17134 | 1803    | t400                             |
10.0.16299 | 1709    | First support for NETSTANDARD2.0 | Fall Creators Update 2017-10-17
10.0.10240 | 1507    | KeyCredentialManager Class       |
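
The KeyCredentialManager class listed for build 1507 is the UWP surface for the Windows Hello key pairs described above. A minimal C++/WinRT sketch, assuming Windows Hello is already set up on the device; the key name is a hypothetical example:

    #include <winrt/Windows.Foundation.h>
    #include <winrt/Windows.Security.Credentials.h>
    #include <winrt/Windows.Storage.Streams.h>
    #pragma comment(lib, "windowsapp")

    using namespace winrt;
    using namespace Windows::Security::Credentials;

    int main()
    {
        init_apartment();

        // Windows Hello (and therefore its key protection) must be configured.
        if (!KeyCredentialManager::IsSupportedAsync().get())
            return 1;

        // Ask the platform to create a new key pair; the private key is held
        // by the TPM (or a software fallback) and never handed to the app.
        KeyCredentialRetrievalResult result =
            KeyCredentialManager::RequestCreateAsync(
                L"ExampleAppKey", KeyCredentialCreationOption::ReplaceExisting).get();

        if (result.Status() != KeyCredentialStatus::Success)
            return 1;

        KeyCredential credential = result.Credential();

        // The public key and the attestation blob can be sent to a server,
        // which can use them to decide what authorization level to grant.
        auto publicKey = credential.RetrievePublicKey();
        KeyCredentialAttestationResult attestation =
            credential.GetAttestationAsync().get();

        return 0;
    }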

Android

There used to be a secure version called SE-Android, supported in the open-source community by the NSA. That work has since been rolled into the current releases, which integrates the full security of the TEE into Android, but the APIs available to the Native App have been slow to catch up.

This table shows the security features available in recent versions of Android. It was only with the advent of API level 23 that hardware protection was explicitly exposed in the API; implementations existed before that, but were not surfaced to developers.

Ver | API | Security Features                                                                                    | Recent News
10  | 29  | Better support for biometric authentication in apps.                                                 | Q (ran out of desserts)
9   | 28  | StrongBox KeyStore APIs will generate and store private keys in the Titan M hardware security module | P; KeyMaster 4 allows key import for sharing
8.1 | 27  |                                                                                                       |
8.0 | 26  | Apps background execution and location limits.                                                       | O
7.1 | 25  | Fingerprint sensor gesture to open/close notification shade.                                         |
7.0 | 24  | Added an Emergency information part.                                                                 | N
6.0 | 23  | KeyInfo.isInsideSecureHardware(); native fingerprint reader.                                         | Keystore redesign in Android M
4.3 | 18  | AndroidKeyStore and extraction prevention                                                            | still software only

Apple iOS

References

  1. Tom Van Vleck, The IBM 7094 and CTSS http://multicians.org/thvv/7094.html
  2. E. L. Glaser +2, System Design of a Computer for Time Sharing Applications, 1965 Fall Joint Computer Conference
  3. "IBM 360, Model 67, Computing Report for the Scientist and Engineer," 1, 1 (May 1965) p. 8, Data Processing Division, I.B.M. Corporation.
  4. Department of Defense, Trusted Computer System Evaluation Criteria, DOD 5200.28-STD (1985-12)
  5. MITRE Modify Trusted Execution Environment https://attack.mitre.org/techniques/T1399/
  6. Onur Zengin, Mobile Platform Security: OS Hardening and Trusted Execution Environment https://www.owasp.org/images/8/88/Onur_Zengin_-_TEE_chapter_meeting_presentation.pdf
  7. Meltdown and Spectre. https://meltdownattack.com/
  8. Kathleen M. Moriarty, Built-in Security at Scale through Hardware Support (2023-12) https://www.cisecurity.org/insights/white-papers/built-in-security-at-scale-through-hardware-support
  9. Amit Vasudevan +2, Trustworthy Execution on Mobile Devices, Springer (2014) https://www.springer.com/us/book/9781461481898
  10. Tami Abdollah, US attorney general says encryption creates security risk AP (2019-07-23) https://www.apnews.com/7423e1ef65a144e6a47e4da63683b3c1
  11. Davey Winder, Warning: Google Researcher Drops Windows 10 Zero-Day Security Bomb, Forbes (2019-06-12) https://www.forbes.com/sites/daveywinder/2019/06/12/warning-windows-10-crypto-vulnerability-outed-by-google-researcher-before-microsoft-can-fix-it

External Material

  1. OWASP TrustZone and Mobile Security (2015-10-15) https://www.owasp.org/images/c/c8/OWASP_Security_Tapas_-_TrustZone%2C_TEE_and_Mobile_Security_final.pdf
  2. Android Trusty is a set of software components supporting a Trusted Execution Environment on mobile devices. https://source.android.com/security/trusty/