Your AI agent handles secrets. Does it handle them well?
Every AI agent that's worth anything eventually needs to handle sensitive data. API keys. Database credentials. Customer PII. Payment tokens. Encryption keys.
Most of them handle it terribly.
The problem with plaintext
Here's what typically happens when an AI agent needs to store a secret: it writes the secret to a file, a database, or a context window. In plaintext. Maybe with a comment that says // TODO: encrypt this.
The secret then lives in conversation logs, tool outputs, checkpoint files, and potentially the model's training data. It's copied, cached, and replicated across infrastructure you don't control.
Environment variables help at the OS level. Vault services help at the infrastructure level. But at the agent level, where the AI is making decisions about what to store and how to store it, there's been no standard answer.
Why standard encryption isn't enough
You could teach your agent to encrypt secrets before storing them. That's better than plaintext. But it still leaves you with an encrypted blob that screams "something valuable is here."
In an audit, a breach, or a legal discovery process, encrypted data is a red flag. It invites the question: what are you hiding?
Deniable encryption changes the answer. The encrypted blob decrypts to different content depending on which control file you use. Under scrutiny, it produces something plausible and boring. The real data stays hidden behind a different control file that nobody knows exists.
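The core property is easiest to see with a toy construction. This sketch is not the deny.sh scheme, just an illustration using one-time-pad XOR: a single ciphertext can be opened with two different keys, and the key you reveal decides what appears.

```typescript
import { randomBytes } from "node:crypto";

// XOR two equal-length buffers byte by byte
const xor = (a: Buffer, b: Buffer): Buffer =>
  Buffer.from(a.map((byte, i) => byte ^ b[i]));

// Zero-pad a string to a fixed length so both plaintexts match the key size
const pad = (s: string, len: number): Buffer =>
  Buffer.concat([Buffer.from(s, "utf8")], len);

const real = pad("db_password=hunter2", 32);   // the secret you protect
const decoy = pad("db_password=changeme", 32); // the boring cover story

const realKey = randomBytes(32);           // kept secret, known only to you
const ciphertext = xor(real, realKey);     // the single stored blob
const decoyKey = xor(ciphertext, decoy);   // a key that "decrypts" to the decoy

// Same blob, two openings: which key you hand over decides the plaintext
console.log(xor(ciphertext, realKey).toString());  // real secret
console.log(xor(ciphertext, decoyKey).toString()); // plausible decoy
```

Under scrutiny you reveal only the decoy key; nothing about the ciphertext indicates a second opening exists. Production deniable encryption uses authenticated, structured control files rather than raw pads, but this is the property being bought.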
MCP: the agent integration standard
Model Context Protocol (MCP) is becoming the standard way AI agents connect to external tools. Claude Desktop, OpenClaw, Cursor, and a growing number of agent frameworks support it natively.
deny.sh ships as an MCP server. That means any MCP-compatible agent can encrypt, decrypt, store, and split secrets without writing any integration code. Add the config block, and the tools appear:
{
  "mcpServers": {
    "deny-sh": {
      "command": "npx",
      "args": ["deny-sh-mcp"]
    }
  }
}
That's it. The agent now has access to 11 tools:
- 4 local tools (encrypt, decrypt, deny, shamir) that run entirely on the agent's machine with zero network calls. True zero-knowledge.
- 7 API tools (vault store/retrieve, usage check, etc.) for server-backed features like the encrypted vault and dead man's switch.
What this looks like in practice
Credential vaults. An agent that manages deployments can store API keys in the deny.sh vault, encrypted client-side before upload. If the vault is compromised, the attacker gets encrypted blobs that decrypt to decoy credentials.
Sensitive tool outputs. An agent that queries a database might receive PII in the response. Instead of logging it in plaintext, it encrypts the output immediately. The conversation log shows encrypted hex, not customer data.
Multi-agent secret sharing. A team of agents working on the same project can use Shamir secret splitting to distribute access. Any 3 of 5 agents can reconstruct a secret, but no single agent (or single compromised node) can access it alone.
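The 3-of-5 threshold scheme above is standard Shamir secret sharing. This minimal sketch shows the underlying math over a prime field (the deny-sh shamir tool's actual share encoding will differ): a secret becomes the constant term of a random degree-2 polynomial, and any 3 points recover it by Lagrange interpolation.

```typescript
import { randomBytes } from "node:crypto";

const P = 2n ** 127n - 1n; // Mersenne prime field modulus
const mod = (n: bigint): bigint => ((n % P) + P) % P;

// Modular inverse via Fermat's little theorem: a^(P-2) mod P
const inv = (a: bigint): bigint => {
  let result = 1n, base = mod(a), e = P - 2n;
  while (e > 0n) {
    if (e & 1n) result = mod(result * base);
    base = mod(base * base);
    e >>= 1n;
  }
  return result;
};

// Split `secret` into n shares; any k reconstruct it, k-1 reveal nothing
function split(secret: bigint, k: number, n: number): Array<[bigint, bigint]> {
  const coeffs: bigint[] = [mod(secret)]; // constant term = secret
  for (let i = 1; i < k; i++) {
    // Random coefficient (mod-reduced; slight bias is fine for a sketch)
    coeffs.push(mod(BigInt("0x" + randomBytes(16).toString("hex"))));
  }
  return Array.from({ length: n }, (_, idx) => {
    const x = BigInt(idx + 1);
    let y = 0n; // evaluate the polynomial at x via Horner's rule
    for (let j = coeffs.length - 1; j >= 0; j--) y = mod(y * x + coeffs[j]);
    return [x, y];
  });
}

// Lagrange interpolation at x = 0 recovers the constant term
function combine(shares: Array<[bigint, bigint]>): bigint {
  let secret = 0n;
  for (const [xi, yi] of shares) {
    let num = 1n, den = 1n;
    for (const [xj] of shares) {
      if (xj === xi) continue;
      num = mod(num * -xj);        // (0 - xj)
      den = mod(den * (xi - xj));
    }
    secret = mod(secret + yi * num * inv(den));
  }
  return secret;
}

const shares = split(1234567890n, 3, 5);                  // 5 shares, threshold 3
console.log(combine(shares.slice(0, 3)) === 1234567890n); // true: any 3 suffice
```

Because any two shares are consistent with every possible secret, a single compromised agent (or even two colluding ones) learns nothing at all.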
Audit-safe logging. Every encrypt/decrypt operation is logged with timestamps and usage counts, but the plaintext is never stored server-side. You can prove to an auditor that secrets were handled, without revealing what they were.
The zero-knowledge guarantee
The local tools are the key differentiator. When an agent calls deny_encrypt, the encryption happens on the agent's machine using the deny-sh npm package. No API call. No network request. The plaintext never leaves the process.
This matters for compliance (GDPR, SOC 2, HIPAA) and it matters for trust. If your agent handles customer data, you need to be able to say: "the data was encrypted before it left the machine." With deny.sh's local MCP tools, that's provably true.
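The client-side pattern the local tools rely on can be sketched with Node's built-in crypto (this is an illustration of the principle, not the deny-sh implementation): derive a key and encrypt inside the process, so the only thing that can ever reach a log, checkpoint, or network socket is an opaque blob.

```typescript
import { createCipheriv, createDecipheriv, randomBytes, scryptSync } from "node:crypto";

function encryptLocal(plaintext: string, passphrase: string) {
  const salt = randomBytes(16);
  const key = scryptSync(passphrase, salt, 32); // 256-bit key derived locally
  const iv = randomBytes(12);
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  // Only this blob is safe to log, upload, or checkpoint; no plaintext leaves
  return { salt, iv, tag: cipher.getAuthTag(), ciphertext };
}

function decryptLocal(blob: ReturnType<typeof encryptLocal>, passphrase: string): string {
  const key = scryptSync(passphrase, blob.salt, 32);
  const decipher = createDecipheriv("aes-256-gcm", key, blob.iv);
  decipher.setAuthTag(blob.tag); // GCM rejects tampered ciphertext
  return Buffer.concat([decipher.update(blob.ciphertext), decipher.final()]).toString("utf8");
}
```

Everything here runs in-process with no network I/O, which is exactly the property that makes "the data was encrypted before it left the machine" a checkable claim rather than a promise.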
Add deniable encryption to your agent in 30 seconds.
MCP server included. No API key needed for local tools.
See the agent docs