The decoy realism engine.
Deniable encryption works on a simple property. Two passwords decrypt the same ciphertext into two different plaintexts, both of which look like the kind of thing somebody would actually encrypt. An attacker with the ciphertext, with the source code, and with the willingness to apply pressure, cannot tell which plaintext is the one you cared about. They get a choice. The choice has to be a real choice.
The cryptographic part of that property is solved. AES plus XOR plus two passwords. We have written about the construction at length on the Verify page and in the whitepaper. The construction is the easy part.
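The Verify page and the whitepaper have the real construction. Purely to fix ideas, here is a toy member of the same family: two password-derived keystreams, two XOR-encrypted slots in one blob, and a single decryption path that behaves identically for either password. Every choice below (SHAKE-256 standing in for AES as the keystream, HMAC tags for slot selection, the helper names) is illustrative, not deny.sh's actual scheme.

```python
import hashlib
import hmac
import os
import secrets


def _stream(password: bytes, salt: bytes, n: int) -> bytes:
    # Keystream derived from the password. A real system would use a slow
    # KDF plus AES-CTR; SHAKE-256 keeps this sketch self-contained.
    return hashlib.shake_256(password + salt).digest(n)


def encrypt(real: bytes, decoy: bytes, pw_real: bytes, pw_decoy: bytes) -> dict:
    n = max(len(real), len(decoy))
    real, decoy = real.ljust(n), decoy.ljust(n)  # space-pad to a common length
    salt = os.urandom(16)
    slots = [(real, pw_real), (decoy, pw_decoy)]
    secrets.SystemRandom().shuffle(slots)  # slot order carries no information
    blob = []
    for pt, pw in slots:
        ct = bytes(a ^ b for a, b in zip(pt, _stream(pw, salt, n)))
        tag = hmac.new(pw + salt, ct, hashlib.sha256).digest()
        blob.append((ct, tag))
    return {"salt": salt, "slots": blob}


def decrypt(blob: dict, pw: bytes) -> bytes:
    # Identical code path for both passwords: try each slot, return the one
    # whose tag verifies. Nothing marks either slot as "the real one".
    for ct, tag in blob["slots"]:
        expect = hmac.new(pw + blob["salt"], ct, hashlib.sha256).digest()
        if hmac.compare_digest(tag, expect):
            return bytes(a ^ b for a, b in zip(ct, _stream(pw, blob["salt"], len(ct))))
    raise ValueError("wrong password")
```

Either password yields its plaintext (space-padded to the common length); to an observer without a password, both slots are uniform bytes.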
The hard part is the decoy. Specifically: where does it come from, and is it any good.
Why this is not a copy-paste problem
For the past eighteen months our decoys came from a hand-curated template pool. A few hundred entries across a handful of categories. Pick a category, sample a template, fill in placeholders, ship. It worked. It also had three problems we could not engineer around.
First, a static pool is a fingerprint. Anyone with the source code, our open-source SDKs, or a single afternoon with our CLI can enumerate every template we ship. Once enumerated, every future decoy drawn from that pool can be flagged as a known-decoy string. The pool stops being a source of deniable plaintexts and starts being a list of suspect plaintexts. A coercer who has done their homework no longer needs to guess. They check.
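The check is trivial to write. Given a leaked template pool, a coercer needs only a few lines; the pool entries and placeholder convention here are invented for illustration:

```python
import re

# Hypothetical leaked templates; {PLACEHOLDER} marks the fill-in slot.
LEAKED_TEMPLATES = [
    "api_key={PLACEHOLDER}",
    "wallet seed: {PLACEHOLDER}",
]


def is_known_decoy(plaintext: str) -> bool:
    # Turn each leaked template into a regex and test the decrypted
    # plaintext against all of them: no guessing required.
    for t in LEAKED_TEMPLATES:
        pattern = re.escape(t).replace(re.escape("{PLACEHOLDER}"), r".+")
        if re.fullmatch(pattern, plaintext):
            return True
    return False
```

One match and the "deniable" plaintext is flagged. A generation engine gives this check nothing to match against.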
Second, the templates were generic. A burner wallet phrase. A made-up API key. A placeholder credit card number. None of them looked like your kind of burner wallet phrase, or your kind of API key. The shape was right. The texture was wrong. A forensic examiner reading the decoy might not reject it on sight, but they would notice it did not look like a thing somebody had actually used.
Third, the pool was bounded by what we had time to write. Seventeen categories of secret, a few dozen templates each, capped at whatever the on-call engineer found realistic over a long afternoon. The cap meant repeats. The cap meant gaps. The cap meant we could not deny what we had not pre-written.
The infrastructure version of this product cannot have those three problems. So we built the realism engine.
What we shipped
The decoy realism engine generates a fresh, type-aware, shape-locked decoy per request. It runs in the hosted runtime (the Operate pillar), gated by your API key, rate-limited per tier, cost-capped at the platform level. It covers seventeen secret types out of the box and falls back to a freeform mode for anything outside the catalogue. Generation is LLM-driven across three provider families. Validation is local, deterministic, and runs before the decoy ever leaves the server.
The catalogue covers each type with its own structural rules. Two entries, by way of example:

- GitHub personal access token: github_pat_ prefix, correct segment structure, correct length floor. Generation is length-strict.
- Resend API key: re_ prefix, plausible body. Inert against the Resend API.

For the fifteen typed entries, the post-generation validator runs without exception. The decoy comes out of the model, gets parsed against the known structural rules for its type, gets checksummed where a checksum exists, and gets rejected if it passes any check that would have validated it as real. A BIP39 phrase that accidentally checksum-validates is dropped and the model is asked again. A credit-card decoy that accidentally Luhn-passes is dropped and the model is asked again. The validator is not optional and is not on the model's honour. It runs locally, deterministically, in the hosted runtime, before the response is ever serialised back to your client.
That property is the whole point. We are not asking the LLM to be careful. We are asking the LLM to be plausible, and asking the validator to make sure plausibility does not collide with reality.
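That inverted acceptance rule is mechanical. For the credit-card type, for instance, it can be as small as this; a sketch of the idea, not the production validator:

```python
def luhn_valid(number: str) -> bool:
    # Standard Luhn checksum: double every second digit from the right,
    # subtract 9 from any doubled digit above 9, sum must be 0 mod 10.
    digits = [int(c) for c in number if c.isdigit()]
    total = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d = d * 2 - 9 if d * 2 > 9 else d * 2
        total += d
    return total % 10 == 0


def accept_card_decoy(candidate: str) -> bool:
    # Inverted acceptance: a candidate that would validate as a REAL card
    # number is rejected, and the model is asked for another one.
    return not luhn_valid(candidate)
```

A decoy that accidentally Luhn-passes, like the classic test number 4111 1111 1111 1111, fails this gate and never leaves the server.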
What this looks like to a coercer
The threat model is unchanged. A coercer with the ciphertext and your wrong password decrypts a plaintext. The plaintext is, by construction, what we just described. It has the shape of a real credential of the type they expected to find. It is the right length. It fails only against the live system of record, and that is a check the coercer cannot run in the moment.
What changes with the realism engine is the next move. If the coercer thinks they have caught you out, their next step is to try to use the credential. They cannot. It does not authenticate. Which means one of two things, from their point of view: either they have caught you with a real-but-expired credential, or they have caught you with a decoy. The construction guarantees they cannot tell which.
That is the deniability. Algorithmic in the cryptography, operational in the decoy, structurally enforced at the boundary.
Why a generation engine, and not a bigger template pool
We could have hired a team and grown the template pool to a hundred thousand entries. Three reasons not to.
One. A hundred thousand entries is still a finite set. Anyone with read access to the pool, including any future malicious insider, gets the enumerable list of suspect plaintexts. A generation engine produces something new every time, and the output is not stored on our side at all.
Two. A template pool does not know about your context. The generation engine takes a short type hint and a free-form snippet, and produces a decoy that matches the texture of what you are encrypting, not just the shape. Your decoys for the credentials your platform actually uses will look like the kinds of credentials your platform actually uses.
Three. A generation engine improves. Every quarter the models get sharper, the validators get stricter, and the catalogue grows by adding generators rather than by adding rows. A static pool gets harder to maintain at scale. A generator gets easier.
The first reason is the security-critical one. The other two are the long-run reasons we will not regret this choice.
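The context-hint flow in reason two can be pictured as a request body like this. The field names and the length cap are hypothetical, not the documented deny.sh API:

```python
import json


def decoy_request(secret_type: str, context_hint: str = "") -> str:
    # Only the type and a short free-form hint ever leave the client;
    # the real secret and the passwords never appear in the request.
    body = {"type": secret_type}
    if context_hint:
        body["hint"] = context_hint[:200]  # hypothetical cap on hint length
    return json.dumps(body)
```

The hint is what lets the engine match texture, not just shape: "CI deploy token for a Go monorepo" produces a different decoy than the bare type would.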
What we did not change
The cryptographic primitive is untouched. The library still ships Apache 2.0, in four languages, with reproducible test vectors. The SDKs do not call out to our hosted runtime by default. If you want to run deny.sh entirely offline with hand-written decoys, you still can. The realism engine is an opt-in capability of the hosted runtime, billed under your API key, available across all tiers with per-tier daily limits.
Your private data does not leave your client unless you ask for a decoy. When you do, the only thing the engine sees is your secret type and an optional short context hint. We do not see the secret. We do not see your password. We do not store the decoy we sent you. The audit log records the type, the timestamp, and the request id. That is the whole record.
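That record is small enough to write out in full. A sketch of its shape, with field names assumed rather than taken from the API:

```python
from dataclasses import dataclass, asdict


@dataclass(frozen=True)
class AuditRecord:
    # The whole record: nothing else is retained.
    secret_type: str  # e.g. "github_pat"
    timestamp: str    # request time, ISO 8601
    request_id: str
    # Deliberately absent: the secret, the password, the generated decoy.
```

What is not in the record is the point: there is nothing here to subpoena, leak, or correlate back to a plaintext.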
Where this fits in the infrastructure
The decoy realism engine is the first piece of the AI-native moat we have been building toward. It is not a feature on the side. It is the part of the product that compounds. The engine improves with every release: more types, tighter validation, narrower acceptance bounds, smarter type inference. A competitor with a weekend and an LLM can ship a decoy generator. They cannot ship one that has been running in production against real traffic, with a real audit log, a real validator pipeline, and a real disclosure posture behind it. That is the kind of lead a generation engine accumulates by being live and by being part of an infrastructure stack rather than a standalone tool.
The launch is on Saturday 4 July 2026 at 08:00 BST. The realism engine is one of the things the launch will be standing on. It is in the hosted runtime today, behind the same authentication, rate limits, and cost meters as the rest of the API. If you would like early access during the pre-launch window, tell us what you are building.
← More posts on the blog · deny.sh · See the Operate pillar · Why deniability needs infrastructure