
Why deniability needs infrastructure, not just a library.

12 May 2026 · 8 min read

A weekend hacker can write a deniable encryption library. The construction is not exotic. Pick a KDF, pick a stream cipher, XOR a control file against the ciphertext, and derive two control files, one per password, so that two different plaintexts decrypt from the same ciphertext bytes. Three hundred lines of code. The unit tests fit on a screen.
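To make the shape concrete, here is a toy sketch of that construction in Python. It is illustrative only, not the deny.sh implementation: the function names are invented, scrypt stands in for "pick a KDF", and SHAKE-256 output stands in for "pick a stream cipher". One random ciphertext body is shared; each password unlocks a control file that XORs the body back into one of the two plaintexts.

```python
import hashlib
import secrets


def keystream(password: bytes, salt: bytes, n: int) -> bytes:
    # KDF (scrypt) stretches the password into a key; SHAKE-256 expands
    # the key into an n-byte keystream, standing in for a stream cipher.
    key = hashlib.scrypt(password, salt=salt, n=2**14, r=8, p=1, dklen=32)
    return hashlib.shake_256(key).digest(n)


def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))


def encrypt(real: bytes, decoy: bytes, pw_primary: bytes, pw_alt: bytes):
    n = max(len(real), len(decoy))
    real, decoy = real.ljust(n, b"\0"), decoy.ljust(n, b"\0")
    salt = secrets.token_bytes(16)
    body = secrets.token_bytes(n)  # the one shared ciphertext body
    # Each control file maps the shared body to one plaintext, and is
    # itself masked under that password's keystream.
    ctl_real = xor(xor(real, body), keystream(pw_primary, salt, n))
    ctl_decoy = xor(xor(decoy, body), keystream(pw_alt, salt, n))
    return salt, body, ctl_real, ctl_decoy


def decrypt(salt: bytes, body: bytes, ctl: bytes, password: bytes) -> bytes:
    n = len(body)
    return xor(xor(ctl, keystream(password, salt, n)), body)
```

With the primary password and the first control file you recover the real plaintext; with the alternate password and the second control file you recover the decoy. The body itself is uniform random bytes either way, which is the whole point.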

If you only need to protect your own files on your own laptop, that is the whole product. Open the source, run the tests, encrypt the file, walk away.

Almost nobody is in that situation.

The people who actually need deniable encryption are running it on behalf of others. They are AI agent platforms holding ten thousand customers' OAuth tokens. They are custody products with exchange credentials sitting in a hot wallet. They are security teams whose breach playbook has to keep working after the playbook itself has been read by the attacker. For them, the algorithm is the smallest part of the problem. The big part is everything around it.

That is what we mean when we call deny.sh "deniability infrastructure", and not just an encryption library. The algorithm is the wedge. The infrastructure is the product.

The library question

Imagine we shipped a library and stopped there. A single SDK, four languages, Apache 2.0. deny.encrypt(real, decoy, primary, alt). Beautiful API. Known-answer test vectors in every language. We open a GitHub repo, we tweet a launch post, we move on.

A platform engineer at an AI agent company reads it. She likes the construction. She installs the SDK. Now she has to answer the following questions before she can ship it to production.

How are ten thousand tenants' keys isolated from each other inside one process? Who scopes the audit log so one tenant never sees another's entries? What does she hand the compliance team when they ask for a threat model and an audit trail? Where do vulnerability reports go, and who acknowledges them? What happens to the decrypt path when the vendor is unreachable?

None of those are library questions. They are all operational. They are all required.

If we shipped a library and walked away, the platform engineer would have to build all of that herself. Most don't. They look at the workload, decide the operational lift is bigger than the cryptographic upside, and ship envelope encryption instead. Which is fine, until the master key leaks.

What we mean by infrastructure

"Infrastructure" is a word that gets used to dress up an ordinary SaaS product. We don't want to do that. So here is the working definition.

Infrastructure is the operational layer that makes a primitive usable at scale, on behalf of others, with someone other than the primitive's author accountable for the outcome.

Concretely, for deny.sh, that means three pillars.

01 / Encrypt

The primitive

The cryptographic library. Four languages, Apache 2.0, zero copyleft. Free for any use. Audit it yourself. Embed it anywhere. This is the wedge.

02 / Operate

The runtime

The hosted layer: per-tenant key isolation, structured audit log, named SLA, MCP server, regional deployment. This is what platforms pay for. You can rebuild it. Most don't.

03 / Verify

The trust posture

The published threat model, the formal construction write-up, the disclosure pipeline with a real GPG fingerprint, the reproducible build recipe. Things that take a year to fake. Things competitors can't shortcut.

The pillars are interlocking. The library without the runtime is a hobby tool. The runtime without the published trust posture is just another SaaS your security team will reject in vendor review. The trust posture without a public library is opaque. Each pillar is necessary. Only together do they form the category.

Why the library alone is not enough, in detail

Multi-tenant isolation is operational, not algorithmic

The cryptographic primitive is per-key, by design. It does not know what a tenant is. To make ten thousand customers' secrets safe from each other inside a single platform process, somebody has to wire up per-tenant key derivation, per-tenant audit scoping, and per-tenant rate limits. The algorithm cannot do that. The infrastructure must.
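A minimal sketch of what that wiring looks like, assuming an HMAC-based derivation in the HKDF-Expand style (the label and function names here are hypothetical, not the deny.sh runtime): one platform master key, one independently derived key per tenant, so a leak of one tenant's derived key says nothing about any other's.

```python
import hashlib
import hmac


def tenant_key(master_key: bytes, tenant_id: str) -> bytes:
    """Derive an isolated per-tenant key from one platform master key.

    HKDF-Expand-style: HMAC-SHA256 over a versioned, tenant-scoped
    label. Illustrative construction only.
    """
    info = b"tenant-key-v1:" + tenant_id.encode("utf-8")
    return hmac.new(master_key, info, hashlib.sha256).digest()


master = bytes(32)  # placeholder master key, never hard-code in practice
k_a = tenant_key(master, "tenant-a")
k_b = tenant_key(master, "tenant-b")
assert k_a != k_b  # distinct tenants, distinct keys
```

The same pattern extends to the rest of the tenant boundary: the derived key scopes the audit log entries and the rate-limit buckets, so every per-tenant control hangs off the same identifier.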

This is the reason Doppler, Infisical, HashiCorp Vault, and AWS Secrets Manager exist. The cryptographic primitive they wrap is not novel. The operational layer is the product. Deniability needs the same shape.

Audit posture is a year-long project

"We have AES-256" is not an answer your customer's compliance team accepts. They want SOC 2 Type II evidence. They want an independent cryptographic audit. They want a published threat model, a published construction write-up, a coordinated disclosure policy with a real GPG fingerprint, and reproducible builds. Some of those are documents. Some are processes. All of them take time that a library author has no reason to invest in.

We invest in them because they are the actual moat. An AI competitor can clone the SDK in a weekend. They cannot clone twelve months of audit posture in a weekend. By the time they could, ours has had another twelve months to compound.

Disclosure pipelines need a real human at the other end

A security library on GitHub is one drive-by issue away from being silently broken for six months. A vulnerability without a coordinated disclosure target is a vulnerability that gets weaponised before the maintainer hears about it. We publish a real GPG fingerprint, a real RFC 9116 security.txt, a 48-hour acknowledgement promise, and a safe harbour clause. That is not a library feature. It is an organisational commitment with a person on call.
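For shape, an RFC 9116 security.txt carrying that commitment looks like the following. The specific URLs and paths here are illustrative, not the live file; only Contact and Expires are required by the RFC.

```text
Contact: mailto:security@deny.sh
Expires: 2027-07-04T08:00:00.000Z
Encryption: https://deny.sh/.well-known/pgp-key.txt
Policy: https://deny.sh/security
Preferred-Languages: en
```

The file is machine-discoverable at /.well-known/security.txt, which is what turns "email us if you find something" into a pipeline a researcher's tooling can actually find.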

Continuity matters more than uptime

Platforms running on us cannot afford to go dark when we do. So we ship a local-mode MCP fallback: the exact same primitive, runnable on the host machine with no network, no auth, no vendor lock. Your encrypt and decrypt paths keep working if we are unreachable. The library is the same. The continuity story is the infrastructure: a product decision, a documented runbook, a tested degraded mode. Not algorithm work.
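The fallback pattern itself is simple enough to sketch. The class and exception names below are stubs invented for illustration, not the deny.sh SDK: try the hosted runtime first, and on network failure run the identical primitive locally.

```python
class RemoteUnavailable(Exception):
    """Raised when the hosted runtime cannot be reached."""


class RemoteClient:
    """Stub hosted runtime that happens to be unreachable right now."""

    def encrypt(self, real: bytes, decoy: bytes) -> bytes:
        raise RemoteUnavailable("service unreachable")


class LocalMode:
    """Stub local fallback: same primitive, no network, no auth."""

    def encrypt(self, real: bytes, decoy: bytes) -> bytes:
        return b"<ciphertext>"  # stand-in for the real local result


def encrypt_with_fallback(remote, local, real: bytes, decoy: bytes) -> bytes:
    try:
        return remote.encrypt(real, decoy)
    except (RemoteUnavailable, ConnectionError, TimeoutError):
        # Documented degraded mode: identical construction on the host
        # machine, so the output is interchangeable with the hosted path.
        return local.encrypt(real, decoy)
```

The product decision is that the except branch is a first-class, tested path, not an afterthought: the degraded mode ships with its own runbook and its own tests.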

The category claim, made plainly

Other product categories went through this transition decades ago.

Deniability has been stuck at the library stage for thirty years. VeraCrypt and the older Rubberhose are both single-user tools, a primitive plus local tooling. They never grew an operational layer because they were never aimed at a multi-tenant audience. We are.

This is the move: take a primitive that has been a research curiosity since the late nineties, wrap it in the operational layer the modern at-rest threat surface actually needs, and treat it as a category. Not a feature in someone else's secrets manager. Not a niche library on a forum. Infrastructure, with a noun you can put on a vendor-review form.

What we ship as a result

Every page on this site maps to one of the three pillars.

A library can give you deniable encryption. Only infrastructure can give you deniability. We build all three pillars because anything less stops short of the threat surface real customers actually face.

What this means for you

If you are a hobbyist encrypting your own seed phrase, you do not need the runtime or the audit posture. Open /encrypt, set two passwords, done. The Encrypt pillar is sufficient and it is free.

If you are a builder embedding deniability inside a product, you need the Encrypt pillar plus the Verify pillar. The SDK gives you the primitive. The public threat model and construction proof give you the answer your security reviewer will ask for.

If you are a platform operator running on behalf of other people, you need all three. The library inside your code, the hosted runtime under your tenants, and the trust posture printed on the vendor-review questionnaire. That is the audience for which deniability has to be infrastructure, because for anyone else, a library would have done.

The launch is on Saturday 4 July 2026 at 08:00 BST. The category we are launching into is the one we just defined. If we did the work right, by next year somebody will say "deniability infrastructure" the way they now say "secrets manager" without flinching at the word.
