
The decoy realism engine.

13 May 2026 · 7 min read

Deniable encryption works on a simple property. Two passwords decrypt the same ciphertext into two different plaintexts, both of which look like the kind of thing somebody would actually encrypt. An attacker with the ciphertext, with the source code, and with the willingness to apply pressure, cannot tell which plaintext is the one you cared about. They get a choice. The choice has to be a real choice.

The cryptographic part of that property is solved. AES plus XOR plus two passwords. We have written about the construction at length on the Verify page and in the whitepaper. The construction is the easy part.
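To make the shape of that construction concrete, here is a toy sketch in Python. This is not the deny.sh implementation (the whitepaper has that) and it omits everything a real scheme needs: a memory-hard KDF, AES rather than a hash-counter keystream, constant-time comparison. Every name in it is hypothetical. The idea it illustrates is the one above: a random body is stored with two password-wrapped keys, one per plaintext, and the two slots are symmetric to anyone holding the blob.

```python
import hashlib
import os


def _xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))


def _keystream(pw: bytes, salt: bytes, n: int) -> bytes:
    # SHA-256 counter-mode keystream; an illustrative stand-in for AES-CTR.
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(pw + salt + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]


def _tag(pw: bytes, salt: bytes, pt: bytes) -> bytes:
    # Short check value so decryption knows which slot answered.
    return hashlib.sha256(b"tag" + pw + salt + pt).digest()[:8]


def seal(real: bytes, decoy: bytes, pw_real: bytes, pw_decoy: bytes):
    n = max(len(real), len(decoy))
    # Toy padding: plaintexts are space-padded to a common length,
    # so trailing spaces are stripped on open.
    real, decoy = real.ljust(n), decoy.ljust(n)
    salt = os.urandom(16)
    body = os.urandom(n)  # uniformly random shared body
    slots = []
    for pw, pt in ((pw_real, real), (pw_decoy, decoy)):
        key = _xor(body, pt)  # body XOR key = plaintext
        wrapped = _xor(key, _keystream(pw, salt, n))
        slots.append((wrapped, _tag(pw, salt, pt)))
    if os.urandom(1)[0] & 1:
        slots.reverse()  # random order: nothing marks which slot is "real"
    return salt, body, slots


def open_blob(pw: bytes, blob):
    salt, body, slots = blob
    for wrapped, tag in slots:  # both slots look identical from outside
        pt = _xor(body, _xor(wrapped, _keystream(pw, salt, len(body))))
        if _tag(pw, salt, pt) == tag:
            return pt.rstrip()
    raise ValueError("wrong password")
```

Either password opens its own plaintext; an observer with the blob sees two interchangeable slots and, as the paragraph above says, gets a choice rather than an answer.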

The hard part is the decoy. Specifically: where does it come from, and is it any good.

Why this is not a copy-paste problem

For the past eighteen months our decoys came from a hand-curated template pool. A few hundred entries across a handful of categories. Pick a category, sample a template, fill in placeholders, ship. It worked. It also had three problems we could not engineer around.

First, a static pool is a fingerprint. Anyone with the source code, the open-source SDK, or a single afternoon with our CLI can enumerate every template we ship. Once enumerated, every future decoy drawn from that pool can be flagged as a known-decoy string. The pool stops being a deniable plaintext and starts being a list of suspect plaintexts. A coercer who has done their homework no longer needs to guess. They check.

Second, the templates were generic. A burner wallet phrase. A made-up API key. A placeholder credit card number. None of them looked like your kind of burner wallet phrase, or your kind of API key. The shape was right. The texture was wrong. A forensic examiner reading the decoy would not immediately catch it, but they would notice that the decoy did not look like a thing somebody actually used.

Third, the pool was bounded by what we had time to write. Seventeen categories of secret, a few dozen templates each, capped at whatever the on-call engineer found realistic over a long afternoon. The cap meant repeats. The cap meant gaps. The cap meant we could not deny what we had not pre-written.

The infrastructure version of this product cannot have those three problems. So we built the realism engine.

What we shipped

The decoy realism engine generates a fresh, type-aware, shape-locked decoy per request. It runs in the hosted runtime (the Operate pillar), gated by your API key, rate-limited per tier, cost-capped at the platform level. It covers seventeen secret types out of the box and falls back to a freeform mode for anything outside the catalogue. Generation is LLM-driven across three provider families. Validation is local, deterministic, and runs before the decoy ever leaves the server.

The catalogue:

01 · BIP39 wallet phrase: 12 or 24 words from the BIP39 list. Decoys are generated, then explicitly rejected and retried until they fail the BIP39 checksum. A live wallet phrase will never be served as a decoy.
02 · Credit card: Real-shape PAN with valid issuer prefix and length. Every candidate is run through the Luhn algorithm and rejected if it passes. Decoys fail Luhn by construction.
03 · IBAN: Country code, check digits, BBAN. Every candidate is mod-97 verified and rejected if it validates. Decoys are structurally correct but arithmetically invalid.
04 · Anthropic API key: Real prefix shape, real length distribution, plausible character set. Never a key any Anthropic tenant could mint.
05 · OpenAI API key: Project-scoped and legacy shapes both supported. Plausible byte distribution; will not authenticate against any OpenAI endpoint.
06 · GitHub fine-grained PAT: Correct github_pat_ prefix, correct segment structure, correct length floor. Generation is length-strict.
07 · AWS access key: AKIA / ASIA prefix shapes. Realistic distribution. Will not pair with any real AWS IAM identity.
08 · Stripe secret key: Live and restricted-key shapes both supported. Plausible body, invalid against the Stripe API.
09 · Resend API key: re_ prefix, plausible body. Inert against the Resend API.
10 · JWT token: Three-segment header.payload.signature shape. Header and payload base64url-decode to plausible JSON. Signature is structurally valid but verifies against no key.
11 · Private key (PEM): Correct PEM header and footer for the requested algorithm. Body is base64-shaped and the right length. Will not parse as a working keypair.
12 · Postgres URI: Real-shape connection string with credential, host, port, database, query parameters. Inert host.
13 · MongoDB URI: SRV and standard shapes. Plausible cluster naming, inert hosts.
14 · Generic password: Length and entropy targeted to your real one. Mixed-class, hard to dismiss, hard to confuse with the real one.
15 · Generic secret: For anything that does not fit a known shape but looks like a credential.
16 · Freeform secret: Open-ended generation with type inference from a short context hint you provide.
17 · Freeform fallback: Anything not covered above. Hand-curated templates only, until we add a generator for the shape.
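For a flavour of how cheap these checks are, here is the mod-97 predicate behind the IBAN entry, per ISO 13616: move the first four characters to the end, expand letters to two-digit numbers (A is 10, Z is 35), and a valid IBAN leaves remainder 1 modulo 97. The function below is an illustrative sketch, not the engine's code; the engine only needs to reject any candidate for which it returns True.

```python
def iban_mod97_valid(iban: str) -> bool:
    """True if the IBAN passes the ISO 13616 mod-97 check."""
    s = iban.replace(" ", "").upper()
    rearranged = s[4:] + s[:4]  # move country code + check digits to the end
    # Base-36 digit value maps '0'-'9' to 0-9 and 'A'-'Z' to 10-35.
    digits = "".join(str(int(c, 36)) for c in rearranged)
    return int(digits) % 97 == 1

# A decoy must be structurally correct yet fail this predicate: per the
# catalogue above, any candidate where iban_mod97_valid(...) is True is
# rejected and regenerated.
```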

For the fifteen typed entries, the post-generation validator runs without exception. The decoy comes out of the model, gets parsed against the known structural rules for its type, gets checksummed where a checksum exists, and gets rejected if it passes any check that would have validated it as real. A BIP39 phrase that accidentally checksum-validates is dropped and the model is asked again. A credit-card decoy that accidentally passes Luhn is dropped and the model is asked again. The validator is not optional and is not on the model's honour. It runs locally, deterministically, in the hosted runtime, before the response is ever serialised back to your client.
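That reject-and-retry discipline is simple to pin down. Here is a sketch of the credit-card path, with a hypothetical generate_candidate callable standing in for the model call (the real pipeline and its names may differ): the Luhn check runs locally and deterministically, and the loop only terminates on a shape-correct candidate that fails it.

```python
def luhn_passes(pan: str) -> bool:
    """Standard Luhn check: a real PAN passes, a decoy must not."""
    total = 0
    for i, ch in enumerate(reversed(pan)):
        d = int(ch)
        if i % 2 == 1:  # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0


def decoy_pan(generate_candidate, max_attempts: int = 16) -> str:
    """Ask the model until it yields a shape-correct PAN that fails Luhn."""
    for _ in range(max_attempts):
        candidate = generate_candidate()
        # Structural rules first (digits only, plausible PAN length),
        # then reject anything the Luhn check would validate as real.
        if candidate.isdigit() and 13 <= len(candidate) <= 19 \
                and not luhn_passes(candidate):
            return candidate  # plausible shape, arithmetically invalid
    raise RuntimeError("generator kept producing Luhn-valid numbers")
```

The model is never trusted to get this right; a candidate that accidentally validates is simply dropped and regenerated, which is the retry behaviour the paragraph above describes.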

That property is the whole point. We are not asking the LLM to be careful. We are asking the LLM to be plausible, and asking the validator to make sure plausibility does not collide with reality.

What this looks like to a coercer

The threat model is unchanged. A coercer with the ciphertext and your wrong password decrypts a plaintext. The plaintext is, by construction, what we just described. It has the shape of a real credential of the type they expected to find. It is the right length. It fails its checksum or its structural validation by construction, and it is inert against the live system of record, which the coercer cannot verify in the moment.

What changes with the realism engine is the next move. If the coercer thinks they have caught you out, their next step is to try to use the credential. They cannot. It does not authenticate. Which means one of two things, from their point of view: either they have caught you with a real-but-expired credential, or they have caught you with a decoy. The construction guarantees they cannot tell which.

That is the deniability. Algorithmic in the cryptography, operational in the decoy, structurally enforced at the boundary.

Why a generation engine, and not a bigger template pool

We could have hired a team and grown the template pool to a hundred thousand entries. Three reasons not to.

One. A hundred thousand entries is still a finite set. Anyone with read access to the pool, including any future malicious insider, gets the enumerable list of suspect plaintexts. A generation engine produces something new every time, and the output is not stored on our side at all.

Two. A template pool does not know about your context. The generation engine takes a short type hint and a free-form snippet, and produces a decoy that matches the texture of what you are encrypting, not just the shape. Your decoys for the credentials your platform actually uses will look like the kinds of credentials your platform actually uses.

Three. A generation engine improves. Every quarter the models get sharper, the validators get stricter, and the catalogue grows by adding generators rather than by adding rows. A static pool gets harder to maintain at scale. A generator gets easier.

The first reason is the security-critical one. The other two are the long-run reasons we will not regret this choice.

What we did not change

The cryptographic primitive is untouched. The library still ships Apache 2.0, in four languages, with reproducible test vectors. The SDKs do not call out to our hosted runtime by default. If you want to run deny.sh entirely offline with hand-written decoys, you still can. The realism engine is an opt-in capability of the hosted runtime, billed under your API key, available across all tiers with per-tier daily limits.

Your private data does not leave your client unless you ask for a decoy. When you do, the only thing the engine sees is your secret type and an optional short context hint. We do not see the secret. We do not see your password. We do not store the decoy we sent you. The audit log records the type, the timestamp, and the request id. That is the whole record.

Where this fits in the infrastructure

The decoy realism engine is the first piece of the AI-native moat we have been building toward. It is not a feature on the side. It is the part of the product that compounds. The engine improves with every release: more types, tighter validation, narrower acceptance bounds, smarter type inference. A competitor with a weekend and an LLM can ship a decoy generator. They cannot ship one that has been running in production against real traffic, with a real audit log, a real validator pipeline, and a real disclosure posture behind it. That is the kind of lead a generation engine accumulates by being live and by being part of an infrastructure stack rather than a standalone tool.

The launch is on Saturday 4 July 2026 at 08:00 BST. The realism engine is one of the things the launch will be standing on. It is in the hosted runtime today, behind the same authentication, rate limits, and cost meters as the rest of the API. If you would like early access during the pre-launch window, tell us what you are building.
