Trust Architecture

Consent isn’t a checkbox: EHDS Article 71 made it law. We made it work.

Three settlements landed in the first quarter of 2026. Together they total more than USD 69 million and something more consequential: proof of a structural failure.

Kaiser Permanente settled for USD 47.5 million. Sutter Health settled for USD 21.5 million. ING Bank Śląski settled for EUR 4.375 million. Across healthcare and finance, in three different jurisdictions. Each organization had written privacy policies. Each had a working consent dashboard. Users could log in, toggle preferences, update their records. The consent infrastructure was, in the conventional sense, present and functional.

And yet each organization lost the ability to honor consent the moment data crossed an institutional boundary.

The Kaiser case documents the sequence precisely: a patient’s secondary-use opt-out was recorded in the consent management system, acknowledged by the primary care provider, and then silently discarded when the record moved downstream to a research aggregator. The opt-out was not overridden. It was not ignored. It was simply invisible to the downstream controller — architecturally inaccessible. The same pattern recurs in the Sutter and ING filings. A consent preference that exists in one system cannot be read by another system that has already received the data.

This is not a policy failure. It is an architecture failure. And it explains why the 2026 settlements look the way they do.

The trap Article 71(8) set

EHDS Article 71 entered into force on 26 March 2025, under Reg. (EU) 2025/327. It makes secondary-use opt-out a legal right, not a courtesy feature. A person can withdraw consent to the use of their health data for research, statistics, or training — at any time, with the withdrawal binding on every controller who holds their data.

The obvious engineering response is a centralized list: maintain a registry of opted-out identifiers, check every downstream use against the list, block on a match. Fast, auditable, tractable.

Article 71(8) closes that door. It explicitly forbids re-identification registers — the very mechanism that makes a centralized opt-out list work. The logic behind the prohibition is sound: a list of identifiers correlated with opt-out decisions is itself a sensitive dataset. Its existence creates re-identification risk. The regulation was written to prevent exactly that risk, so it cannot permit the most obvious implementation path for honoring the right it creates.

This is the trap. The controller cannot keep a list of who opted out without recreating the harm the regulation was written to prevent. Article 71 creates a legal obligation to honor opt-out across institutional boundaries. Article 71(8) prevents the architectural approach that would make that feasible for centralized infrastructure. Recital 54 adds that the opt-out must be reversible and freely exercisable — no friction, no re-registration, no waiting period.

The regulation is precise. The architectural implication is radical.

Five invariants. Zero architectures that satisfy them all — until now.

Strip the legal language and read what regulators actually require. Across EHDS Article 71, GDPR Articles 7(3) and 17(2), and eIDAS 2.0 Article 5a, five engineering invariants appear:

Person-scoped identifiers. Opt-out must be attributable to a specific person, not a session, a device, or an account. The identifier must survive institutional boundaries.

Verifiable timestamped opt-out. A controller must be able to prove when the opt-out was recorded and that it has not been tampered with. Self-reported logs are insufficient; cryptographic proofs are required.

Reversibility without re-identification. EHDS Article 71(8) names this explicitly. The person can reverse their opt-out without the reversal requiring the creation of a linkable identity record.

Cross-controller propagation. GDPR Article 17(2) requires that a withdrawal of consent reach every downstream processor. The obligation does not stop at the controller who received the original consent.

Unlinkability. eIDAS 2.0 Article 5a(16)(b) requires that identity interactions be unlinkable across contexts, so that opt-out cannot be exploited to build a cross-institutional profile.

These five invariants are named in primary law. They are not interpretive positions. And they create a precise test for any consent architecture: does it satisfy all five?

X.509 PKI satisfies zero. A certificate chain can confirm the identity of a server, but it provides no mechanism for person-scoped identifiers that survive institutional boundaries, no cryptographic opt-out record independent of the controller’s own logs, and no unlinkability — by design, X.509 relies on the CA knowing who you are.

Federated identity satisfies propagation: when the federation provider receives an opt-out, it can push it to every relying party. But it does so only by violating unlinkability. The federation provider has a complete view of which person interacted with which institution. That linkability is what makes propagation work. It is also what Article 5a(16)(b) forbids.

DKMS satisfies all five.

Why DKMS is the answer — and what that means in practice

Decentralized Key Management (DKMS) is the first structurally sound foundation for honoring opt-out end-to-end in cross-organisational, post-disclosure, regulator-watched conditions. The qualifying phrase matters. In single-tenant workloads, inside a single institution’s perimeter, simpler approaches are adequate. The structural problem appears only when data has crossed a boundary — when the person exercising their opt-out is no longer interacting with the first controller.

The mechanism works as follows. Each person holds a self-certifying identifier — an Autonomous Identifier, or AID — that serves as the cryptographic root of their consent authorizations. This AID is not registered with any central authority. It is self-generated and controlled entirely by the person who holds it. DKMS is built on KERI, an open Trust over IP Foundation specification, which defines how these identifiers work and how changes to them propagate.
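The self-certifying property can be sketched in a few lines. This is an illustration, not KERI’s actual derivation: real AIDs are derived from Ed25519 key material with a defined encoding, while random bytes and a truncated SHA-256 stand in here.

```python
import hashlib
import secrets

# A self-certifying identifier is derived from the holder's own public key,
# so the binding between identifier and key can be checked by anyone.
# Stand-in: random bytes in place of a real Ed25519 public key.
public_key = secrets.token_bytes(32)
aid = "AID:" + hashlib.sha256(public_key).hexdigest()[:32]

def verify_binding(identifier: str, presented_key: bytes) -> bool:
    """Recompute the derivation locally -- no registry, no central authority."""
    return identifier == "AID:" + hashlib.sha256(presented_key).hexdigest()[:32]

print(verify_binding(aid, public_key))            # True: key matches identifier
print(verify_binding(aid, secrets.token_bytes(32)))  # False: any other key fails
```

Because the identifier is a function of the key itself, no authority ever needs to vouch for the binding — which is what removes the central registry from the picture.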

Alongside the AID sits a Key Event Log — a tamper-evident, cryptographically chained record of every authorization and withdrawal. When a person opts out of secondary use, that withdrawal is recorded as a key event: timestamped, signed, and appended to the log. Any downstream controller who holds the person’s data can verify the current state of the authorization by reading the log directly — no central registry, no call to a shared database, no re-identification.

When a person reverses their opt-out, that reversal is also a key event. The subject drives the rotation. The controller’s obligation is to check, not to maintain its own interpretation of the state.
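The append-and-check pattern can be sketched as a toy event log. All names here are illustrative; real KERI uses Ed25519 key pairs and a richer event grammar, and the stdlib HMAC below is only a stand-in for holder signatures.

```python
import hashlib
import hmac
import json
import time

class KeyEventLog:
    """Toy tamper-evident log: each event chains to the digest of the
    previous event and carries a signature. Simplified stand-in for a
    KERI Key Event Log, not its actual structure."""

    def __init__(self, signing_key: bytes):
        self._key = signing_key
        self.events: list[dict] = []

    def _digest(self, event: dict) -> str:
        return hashlib.sha256(json.dumps(event, sort_keys=True).encode()).hexdigest()

    def append(self, event_type: str, payload: dict) -> dict:
        prior = self._digest(self.events[-1]) if self.events else "genesis"
        body = {"type": event_type, "payload": payload,
                "ts": time.time(), "prior": prior}
        body["sig"] = hmac.new(self._key, json.dumps(body, sort_keys=True).encode(),
                               hashlib.sha256).hexdigest()
        self.events.append(body)
        return body

    def current_authorization(self, purpose: str) -> bool:
        """The latest event for a purpose wins: grant -> True, withdraw -> False."""
        state = False
        for ev in self.events:
            if ev["payload"].get("purpose") == purpose:
                state = ev["type"] == "grant"
        return state

# Grant, opt out, then reverse the opt-out -- each is just another event.
kel = KeyEventLog(signing_key=b"holder-secret")
kel.append("grant", {"purpose": "secondary-use"})
kel.append("withdraw", {"purpose": "secondary-use"})  # the opt-out
kel.append("grant", {"purpose": "secondary-use"})     # the reversal
print(kel.current_authorization("secondary-use"))     # True: reversal is latest
```

The reversal requires no new registration and creates no separate identity record: it is one more signed, chained event, which is the property Recital 54 demands.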

Cross-controller propagation follows from the log structure. Institution B, which received data from Institution A, does not need to receive a notification from Institution A. It can read the log itself. The propagation is pull-based and audit-ready.
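The pull-based check can be sketched as follows. This is a minimal illustration under simplified assumptions: events carry only a type, a purpose, and a pointer to the prior event’s digest, and signature verification (which real KERI performs against keys established earlier in the log) is elided.

```python
import hashlib
import json

def digest(event: dict) -> str:
    return hashlib.sha256(json.dumps(event, sort_keys=True).encode()).hexdigest()

def verify_chain(events: list[dict]) -> bool:
    """Check that each event references the digest of its predecessor,
    so any retroactive edit to the log is detectable."""
    prior = "genesis"
    for ev in events:
        if ev["prior"] != prior:
            return False
        prior = digest(ev)
    return True

def current_optout(events: list[dict], purpose: str) -> bool:
    """Institution B's pull-based check: verify the log it holds,
    then read the latest event for the purpose. No registry, no
    callback to Institution A."""
    if not verify_chain(events):
        raise ValueError("log tampered or incomplete")
    opted_out = False
    for ev in events:
        if ev.get("purpose") == purpose:
            opted_out = ev["type"] == "withdraw"
    return opted_out

# A well-formed log: grant, then withdraw (the opt-out).
log = []
prior = "genesis"
for etype in ("grant", "withdraw"):
    ev = {"type": etype, "purpose": "secondary-use", "prior": prior}
    log.append(ev)
    prior = digest(ev)

print(current_optout(log, "secondary-use"))  # True: the opt-out stands

# Tampering with an earlier event breaks the chain and is detected.
log[0]["type"] = "withdraw"
print(verify_chain(log))  # False
```

The design choice is that the controller verifies state rather than being notified of it: the audit trail and the enforcement mechanism are the same data structure.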

Where DKMS does not deliver

Honesty about scope is what makes the thesis defensible.

Data that has been used to train a model cannot be unlearned by a key rotation. A derivative dataset — an aggregate, an anonymized subset, a statistical artifact — retains no connection to the original person’s AID. Opt-out cannot reach back into those derivatives. DKMS can provide forensic detection of when data was used before the opt-out was recorded; it cannot undo the use.

Backups taken before opt-out are subject to the same limits. The backup exists outside the authorization chain. Reaching it requires a separate policy decision, which DKMS can inform but not enforce autonomously.

A single dishonest counterparty can ignore the log. DKMS is not a compliance forcing function for actors who choose to violate their legal obligations. What it provides is an auditable record that makes the violation visible and provable — which is exactly what regulators and courts need to assign liability. The 2026 settlements were difficult to resolve precisely because the consent state was ambiguous at each institutional boundary. DKMS removes that ambiguity.

Single-tenant workloads within a single institution’s perimeter do not need DKMS for consent enforcement. Existing consent management systems work adequately when data never crosses a boundary. The structural requirement kicks in at the boundary.

Saying this up front is not weakness. It is what distinguishes an architecture argument from a product claim.

In production

Vereign’s deployment with Switzerland’s Health Info Net AG handles 800,000+ verified messages per month. That figure represents communications processed by SEAL — the encrypted swarm delivery system that serves as HIN’s secure channel to recipients outside the professional network. It is the production proof that the underlying architecture scales to institutional workloads in healthcare.

Stargate — the full trust infrastructure that adds decentralized identity, authorization layers, and cryptographic audit trails on top of that secure channel — entered early production in June 2026. Details of the mass roll-out will follow after summer 2026 as the deployment broadens.

FHIR-over-Stargate is defined at the architecture level; production deployment depends on hospital onboarding, with September 2026 the earliest plausible date. Clinical data exchange via FHIR, with Stargate’s consent enforcement layer inline, is the target architecture for EHDS Article 71 compliance. The pieces are in place. The timeline depends on institution-by-institution onboarding, not on further engineering.

That is the honest picture. The architecture is sound. The production baseline exists. The path from current production to full EHDS Article 71 compliance is defined and in progress.

The full technical thesis — five invariants, mechanism detail, regulatory citations, boundary-of-honesty caveats, comparison table across X.509 / federated IAM / DKMS — is at /consent/.

If your organization is working through EHDS Article 71 implementation, GDPR Article 17(2) propagation obligations, or the consent architecture decisions that come with FHIR-based data exchange, a 30-minute consent architecture review maps your specific situation against these invariants. Book one here.

Verified communication, built and deployed — not just described.

Vereign's trust infrastructure is live across Swiss healthcare. Book a 30-minute architecture review to scope what sovereign communication means for your organisation.

Swiss Data Protection · GDPR Compliant · Open Source AGPLv3+ · Swiss Hosting