Capability
Full Title or Meme
Capability refers to the set of valuable functions that an Entity can effectively bring to bear on a problem. A person's capability represents the effective freedom of that Entity to choose between different combinations of functionings – between different kinds of life – that they have reason to evaluate and choose.
- A [[Capability]] is an **unforgeable token of authority**.
- It simultaneously **designates a resource** (what you want to access) and **authorizes the rights** (what you can do with it).
- Possession of the capability itself is sufficient proof of access; no further lookup in a central permissions table is required.
 
Think of it like a cryptographic key: if you hold it, you can open the lock.
- [https://gluufederation.medium.com/entitlements-to-capabilities-744117a710c9 Entitlements to Capabilities] Mike Schwartz 2025-10-03
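To make the idea above concrete, here is a minimal sketch in Python, assuming an HMAC-based token design; the names `mint_capability` and `exercise` are illustrative and not taken from any particular library:

```python
# Minimal sketch of a capability as an unforgeable token.  The token both
# designates the resource and encodes the rights over it; the HMAC key is
# held only by the issuer, so holders cannot forge or widen tokens.
import hashlib
import hmac
import json

ISSUER_SECRET = b"issuer-only-signing-key"      # never leaves the issuer

def mint_capability(resource: str, rights: list) -> dict:
    """Issue a token naming a resource and the rights granted on it."""
    body = json.dumps({"resource": resource, "rights": sorted(rights)})
    tag = hmac.new(ISSUER_SECRET, body.encode(), hashlib.sha256).hexdigest()
    return {"body": body, "tag": tag}

def exercise(capability: dict, resource: str, right: str) -> bool:
    """Possession of a valid token is the whole access check."""
    expected = hmac.new(ISSUER_SECRET, capability["body"].encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, capability["tag"]):
        return False                            # forged or altered token
    claims = json.loads(capability["body"])
    return claims["resource"] == resource and right in claims["rights"]

cap = mint_capability("/reports/q3.pdf", ["read"])
print(exercise(cap, "/reports/q3.pdf", "read"))    # True
print(exercise(cap, "/reports/q3.pdf", "write"))   # False: right not granted
```

The issuer's secret is what makes the token unforgeable; whoever holds a valid token can exercise exactly the rights it names, no more.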
 
Problems
Steve Lipner on capabilities. Since the basic assumption of a capability system is that access to data is granted by access to the capability, and that capabilities can be "passed" from user to user, process to process, they are a nightmare from the perspective of covert channels. You really have to perform heroic unnatural acts to defeat covert channels in a capability system. The use of a capability model seems to simplify some formal verification, so a lot of systems tried to implement or simulate them, but it always seemed to me like adding extra complexity and mechanism if you were worried about covert channels.
Tom Jones  12:14 PM
thanks - i asked because the capability genie is trying to get out of the bottle again.  This helps my attempt to keep the cork in the bottle.
Steve Lipner sent the following messages at 1:12 PM
There's a sort of capability-like system called Cheri that was funded by DARPA and developed at Cambridge. It's gotten some uptake in the research community and what I guess I'd call "cautiously favorable reaction" from the vendor community. You have to access memory through capabilities, but their primary function is to control and limit access to prevent memory safety errors. I think of it as more a protection mechanism to support segmented memory (as in Multics) than a security model. To my knowledge, there hasn't been any attempt to build a sharing or (user level) access control mechanism based on the capabilities.
Local Access Capability
The capability model is one of the most important alternatives to traditional access control lists (ACLs) in computer security and governance.
How It Works
- **Traditional ACL model**:
  - Each resource has a list of who can access it and how.
  - The system checks your identity against that list every time.
- **Capability model** (sketched in the code below):
  - Each user or process holds a set of capabilities (tokens).
  - To access a resource, you present the capability.
  - The system doesn't need to check a central list; the capability itself encodes the authority.
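As a rough illustration, the difference comes down to where the authority lives: in a central table keyed by identity, or in the object the caller presents. The Python below is only a sketch with illustrative names, not a real access-control implementation:

```python
# Where does the authority live?  ACL: in a central table keyed by identity.
# Capability: in the thing the caller presents.

# --- ACL model ------------------------------------------------------------
ACL = {"doc-42": {"alice": {"read", "write"}, "bob": {"read"}}}

def acl_access(user: str, resource: str, action: str) -> bool:
    # Every request re-consults the central table for this identity.
    return action in ACL.get(resource, {}).get(user, set())

# --- Capability model -----------------------------------------------------
def make_capability(resource: str, actions: set):
    # In a real system this would be an unforgeable reference or signed
    # token; a closure over the granted actions stands in for that here.
    def use(action: str) -> bool:
        return action in actions
    use.resource = resource
    return use

doc_cap = make_capability("doc-42", {"read"})

print(acl_access("bob", "doc-42", "read"))    # True, after an ACL lookup
print(doc_cap("read"))                        # True, no central lookup at all
print(doc_cap("write"))                       # False: that right was never granted
```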
---
Advantages
- **Least privilege by design**: You only hold the capabilities you've been explicitly given.
- **Delegation**: You can pass a capability to another process or user, granting them the same rights.
- **No ambient authority**: Unlike ACLs, you can't accidentally use privileges you weren't explicitly handed.
- **Decentralization**: Security is "pushed to the edge"; there is no single central table to compromise.
Problems It Solves
- **Confused deputy problem**: A program can't be tricked into misusing its authority, because it only acts with the capabilities it holds (see the sketch below).
- **Ambient authority trap**: Eliminates the risk of programs having more power than they need just because of who launched them.
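A minimal sketch of the confused deputy, and of how passing a capability (here, simply an already-open file object) instead of a name removes it. File and function names are illustrative:

```python
# Confused-deputy sketch.  A logging helper that opens whatever path the
# caller names runs with the helper's own broad authority, so a caller can
# trick it into writing somewhere the caller could never touch directly.
import io

def deputy_by_name(path: str, line: str) -> None:
    # Risky: the caller supplies the designation, the deputy supplies the
    # authority, and the two are combined implicitly.
    with open(path, "a") as f:
        f.write(line + "\n")

def deputy_by_capability(log_file: io.TextIOBase, line: str) -> None:
    # Capability style: the caller hands over an already-open, writable file
    # object, so designation and authorization arrive together and the
    # deputy cannot write anywhere the caller itself could not.
    log_file.write(line + "\n")

with open("caller_owned.log", "a") as allowed:
    deputy_by_capability(allowed, "written strictly with the caller's authority")
```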
Real-World Analogies
- **Physical world**: A hotel keycard is a capability. It doesn't matter who you are; if you hold the card, you can open the room.
- **Digital world**:
  - **Bitcoin private keys**: "Your keys are your money."
  - **Cloud storage links**: A shareable link with embedded rights is a capability.
Takeaway
An **access capability model** replaces identity-based permission checks with **token-based authority**. It's elegant because it collapses *designation* (what resource) and *authorization* (what rights) into a single, unforgeable object.
- References: Wikipedia on capability-based security; Storj developer docs on capability-based access.
 
CHERI (Capability Hardware Enhanced RISC Instructions)
What is CHERI?
 
- **CHERI** is a joint research project between the **University of Cambridge** and **SRI International**, funded initially by **DARPA** and later supported by UKRI, EPSRC, ERC, Google, Microsoft, and Arm.
- It extends conventional processor architectures (like RISC-V and Arm) with **capability-based security features**.
- The goal: **fine-grained memory protection** and **software compartmentalization** to prevent common vulnerabilities in C/C++ programs.
---
Key Features
 
- **Capabilities instead of raw pointers**: Memory references carry bounds and permissions, enforced by hardware (simulated in the sketch below).
- **Memory safety**: Prevents buffer overflows, use-after-free, and other memory corruption exploits.
- **Compartmentalization**: Enables breaking software into isolated components, limiting the blast radius of vulnerabilities.
- **Hybrid model**: Works alongside traditional MMU-based virtual memory, so it can be incrementally adopted in existing ecosystems.
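The following is a plain-Python simulation of the check CHERI performs in hardware on every memory access. It is a conceptual sketch only and does not reflect CHERI's actual ISA, C/C++ annotations, or toolchain:

```python
# Plain-Python simulation of the check CHERI hardware performs on every
# memory access: the "pointer" carries base, length, and permissions, so an
# out-of-bounds or unpermitted access traps instead of corrupting memory.
from dataclasses import dataclass

MEMORY = bytearray(1024)                 # stand-in for a flat address space

@dataclass(frozen=True)
class Capability:
    base: int
    length: int
    perms: frozenset                     # e.g. frozenset({"load", "store"})

    def store(self, offset: int, value: int) -> None:
        if "store" not in self.perms:
            raise PermissionError("capability lacks store permission")
        if not 0 <= offset < self.length:
            raise IndexError("out-of-bounds store trapped")
        MEMORY[self.base + offset] = value

buf = Capability(base=64, length=16, perms=frozenset({"load", "store"}))
buf.store(0, 0xFF)                       # in bounds: fine
try:
    buf.store(16, 0xFF)                  # one past the end: trapped, not a silent overflow
except IndexError as err:
    print("trapped:", err)
```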
---
Cambridge's Role
 
- Led by **Robert N. M. Watson**, **Simon Moore**, and **Peter Sewell** at Cambridge's Department of Computer Science and Technology.
- Cambridge has produced formal models, hardware prototypes, and software toolchains to demonstrate CHERI's practicality.
- In 2022, **Arm released the Morello prototype board**, a CHERI-enabled processor for experimentation.
---
CHERI Alliance
 
- The **CHERI Alliance**, based in Cambridge, UK, is a non-profit consortium promoting global adoption of CHERI technology.
- It brings together academia, industry, and government to standardize CHERI and push it into commercial products.
- Mission: to tackle the **global memory safety crisis** by embedding CHERI principles into mainstream hardware.
---
Why It Matters
 
- Memory safety bugs account for a huge fraction of critical vulnerabilities in modern software.
- CHERI offers a **hardware-rooted solution** that could dramatically reduce these risks.
- It's seen as a **successor to traditional capability systems**, but with practical deployment paths into today's architectures.
---
You can explore more on Cambridge’s [official CHERI research page](https://www.cl.cam.ac.uk/research/security/ctsrd/cheri/) or the [CHERI Alliance site](https://cheri-alliance.org/who-we-are/).
Network Access Capability
- Eleanor Hayes Meritt, Technology Executive | Identity & Access Management
 
Why Most AI Security Discussions Miss the Point
When it comes to securing AI agents, the discussion usually turns to OAuth, JWTs, and identity providers. The real challenge, however, is controlling what an AI agent that holds access privileges can actually do. How can you ensure that an AI agent operates on your behalf without exposing sensitive credentials or overstepping its boundaries?
Enter capability models, offering a modern, lightweight, scalable solution to this dilemma.
Three Approaches to Capability Security:
1. OCap (Object-Capability Model)
- Power is held through references.
- Delegation occurs by passing references.
- Ideal for single systems and programming languages; less prevalent in distributed APIs (a minimal sketch follows).
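A minimal object-capability sketch in Python, with illustrative class and variable names; the only authority a component has is the references it was handed:

```python
# Object-capability sketch: authority *is* the reference you hold, and
# delegation is just passing that reference (or a narrower wrapper) along.
class MailSender:
    def send(self, to: str, body: str) -> None:
        print(f"sending to {to}: {body}")

class SupportBot:
    def __init__(self, notify):
        # The bot receives only the capability it needs; it has no ambient
        # way to reach any other address or service.
        self._notify = notify

    def escalate(self, issue: str) -> None:
        self._notify(f"escalation: {issue}")

sender = MailSender()
# Delegate an attenuated capability: only support@example.com is reachable.
notify_support = lambda body: sender.send("support@example.com", body)
bot = SupportBot(notify_support)
bot.escalate("customer cannot log in")
```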
2. Macaroons
- Cryptographic tokens with embedded constraints (caveats).
- Authority can be limited to a time window or to specific services.
- Delegation involves creating a more restricted macaroon (see the HMAC-chaining sketch below).
- Drawback: challenging to revoke once issued.
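The core mechanism can be sketched in a few lines of Python: the signature is an HMAC chain, so a holder can add a caveat (restricting the token further) but can never remove one. This shows only the idea, not the real libmacaroons wire format:

```python
# Core macaroon mechanism, stripped down: each added caveat is folded into
# the signature with HMAC, so attenuation is possible offline but removing
# a caveat would require the issuer's root key.
import hashlib
import hmac

def _chain(key: bytes, msg: str) -> bytes:
    return hmac.new(key, msg.encode(), hashlib.sha256).digest()

def mint(root_key: bytes, identifier: str) -> dict:
    return {"id": identifier, "caveats": [], "sig": _chain(root_key, identifier)}

def attenuate(macaroon: dict, caveat: str) -> dict:
    # New signature = HMAC(old signature, caveat); the old one is discarded.
    return {"id": macaroon["id"],
            "caveats": macaroon["caveats"] + [caveat],
            "sig": _chain(macaroon["sig"], caveat)}

def verify(root_key: bytes, macaroon: dict, context: dict) -> bool:
    sig = _chain(root_key, macaroon["id"])
    for caveat in macaroon["caveats"]:
        key, _, want = caveat.partition("=")
        if str(context.get(key)) != want:      # caveat must hold for this request
            return False
        sig = _chain(sig, caveat)
    return hmac.compare_digest(sig, macaroon["sig"])

ROOT = b"issuer-root-key"
m = attenuate(mint(ROOT, "agent-session-1"), "service=calendar")
print(verify(ROOT, m, {"service": "calendar"}))   # True
print(verify(ROOT, m, {"service": "email"}))      # False: caveat not satisfied
```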
3. Biscuits
- A more advanced take on macaroons.
- Utilizes public-key cryptography and logic-based policies.
- Transferable across federated systems.
- Delegation is achieved through appended logic blocks (sketched below).
- More intricate, and correspondingly heavier-weight.
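A heavily simplified Python sketch of the block idea, illustrative only: real Biscuits sign every block with public-key cryptography and express checks in a Datalog-like policy language, but the essential property is that appended blocks can only add checks, never remove them:

```python
# Biscuit-style delegation through blocks, reduced to the block/check logic.
# Each hand-off appends a block of extra checks, so every delegation can
# only narrow the authority the issuer originally granted.
AUTHORITY = {("calendar", "read"), ("calendar", "write")}   # issuer's grant

def new_token() -> list:
    return []                                  # no extra restrictions yet

def delegate(token: list, *checks) -> list:
    # Appending a block adds checks; existing blocks can never be removed.
    return token + [list(checks)]

def authorize(token: list, request: dict) -> bool:
    if (request["resource"], request["operation"]) not in AUTHORITY:
        return False
    return all(check(request) for block in token for check in block)

# A downstream AI agent receives a narrowed token: read-only, calendar only.
agent_token = delegate(new_token(),
                       lambda r: r["operation"] == "read",
                       lambda r: r["resource"] == "calendar")

print(authorize(agent_token, {"resource": "calendar", "operation": "read"}))   # True
print(authorize(agent_token, {"resource": "calendar", "operation": "write"}))  # False
```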
Why It Matters for AI Agents:
To keep AI assistants acting responsibly, reliance on full-scope OAuth tokens must diminish. OCap embodies the principle of least authority by design. Macaroons and biscuits are the practical middle ground: portable, verifiable tokens that allow delegation and constraint. However, both lack an efficient revocation method, so additional infrastructure is needed.
In the realm of securing AI agent delegation, would you lean towards macaroons for their simplicity and lightweight nature, or do biscuits appeal to you for their flexibility and compatibility across federated systems? Would love to know what you are thinking.