EU AI Act Art. 13 & 86 · GDPR Art. 22 & 28 · SR 11-7 / PRA SS1/23 · CKKS End-to-End

SHAP FHE for Regulators

Encrypted SHAP feature-attribution explanations for AI systems where sensitive input data must remain private — by mathematics, not by policy.

VaultBytes' CipherExplain computes SHAP values entirely under CKKS Fully Homomorphic Encryption. Inputs are encrypted on the client. The server evaluates the model and the attribution circuit homomorphically and returns ciphertext. Only the data subject's key can decrypt the explanation. This is the technical primitive regulated teams need to produce audit-ready AI explanations under the EU AI Act, GDPR Article 22, SR 11-7, and PRA SS1/23 — without creating a second plaintext copy of sensitive features at the explainer.
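The flow above can be sketched in miniature. The wrapper class below is a toy stand-in for a CKKS ciphertext (it supports + and * like one, but provides no actual cryptographic protection), and the function names are illustrative, not the CipherExplain SDK. The server-side step uses the exact closed-form SHAP values for a linear model, φᵢ = wᵢ·(xᵢ − μᵢ), computed without ever reading the plaintext features:

```python
import hashlib
import secrets

class Ct:
    """Toy 'ciphertext': supports + and * like a CKKS ciphertext.
    Illustrates the API shape only; this is NOT secure encryption."""
    def __init__(self, v, tag):
        self._v, self._tag = v, tag
    def __add__(self, o):
        return Ct(self._v + (o._v if isinstance(o, Ct) else o), self._tag)
    __radd__ = __add__
    def __mul__(self, o):
        return Ct(self._v * (o._v if isinstance(o, Ct) else o), self._tag)
    __rmul__ = __mul__
    def __sub__(self, o):
        return self + (-1) * o

def keygen():
    return secrets.token_hex(16)

def encrypt(key, xs):
    tag = hashlib.sha256(key.encode()).hexdigest()
    return [Ct(x, tag) for x in xs]

def decrypt(key, cts):
    tag = hashlib.sha256(key.encode()).hexdigest()
    assert all(c._tag == tag for c in cts), "wrong key"
    return [c._v for c in cts]

# Server side: for a linear model f(x) = w.x + b, exact SHAP values are
# phi_i = w_i * (x_i - mu_i), with mu the background mean. The server
# only ever applies + and * to opaque Ct objects.
def server_explain(cts, w, mu):
    return [w_i * (c - mu_i) for w_i, c, mu_i in zip(w, cts, mu)]

# Client side: encrypt, send, decrypt the returned attribution vector.
key = keygen()
x   = [0.8, -1.2, 3.0]      # sensitive features, never sent in clear
w   = [0.5, 1.0, -0.25]     # model weights (held server-side)
mu  = [0.0, 0.5, 1.0]       # background means (held server-side)
phi = decrypt(key, server_explain(encrypt(key, x), w, mu))
print(phi)                   # per-feature attribution vector
```

A useful sanity check on any such vector is the SHAP efficiency property: the attributions sum to f(x) − f(μ), which holds exactly for the linear case above.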

Why regulators care about SHAP FHE

Conventional explainability creates a second plaintext copy of sensitive features at the explainer. SHAP FHE eliminates that copy. The compliance-relevant difference is structural, not configurational.

EU AI Act Article 13

High-risk AI providers must give deployers explanations sufficient to interpret model outputs. SHAP FHE produces a reproducible per-decision attribution vector with a signed model-version trail — without requiring the deployer to surrender plaintext inputs to a third-party explainer.

EU AI Act Article 86

Affected persons have a right to a meaningful explanation of decisions made by high-risk AI. SHAP FHE delivers a feature-level attribution decrypted only on the subject's side, eliminating the "explanation processor" exposure that plaintext SHAP requires.

GDPR Article 22 & 28

Automated decisions need meaningful logic; processors need lawful basis and minimum-necessary data. SHAP FHE narrows the processor's data footprint to ciphertext only — there is no plaintext copy to govern, lose, or subpoena.

SR 11-7 / PRA SS1/23

Model risk frameworks require independent validation and reproducible artefacts. CipherExplain emits versioned attribution vectors, a deterministic feature ordering, and signed envelopes — usable as primary evidence in model risk files and supervisory examinations.

DPIA-ready architecture

The server is mathematically blind to plaintext features. A DPIA against a SHAP FHE deployment substitutes cryptographic guarantees (CKKS security level ≥128-bit) for organisational controls — easier to evidence and harder to defeat.
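The ≥128-bit claim is checkable against the public HomomorphicEncryption.org security standard, which caps the total coefficient-modulus size for each polynomial degree. A minimal sketch of that check (table values are the standard's classical-security bounds for ternary secrets; the function name is illustrative):

```python
# Max total coeff-modulus bits for 128-bit classical security
# (ternary secret), per the HomomorphicEncryption.org standard tables.
MAX_LOGQ_128 = {4096: 109, 8192: 218, 16384: 438, 32768: 881}

def params_ok(poly_degree, coeff_mod_bits):
    """True if the CKKS modulus chain fits the 128-bit security budget."""
    return sum(coeff_mod_bits) <= MAX_LOGQ_128[poly_degree]

print(params_ok(8192, [60, 40, 40, 60]))       # 200 bits -> True
print(params_ok(8192, [60, 50, 50, 50, 60]))   # 270 bits -> False
```

This is the kind of closed-form evidence a DPIA can cite directly: the parameter set either fits the published bound or it does not.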

Optional DP layer

For workloads where the same subject is queried repeatedly, an (ε, δ)-differentially private mechanism is applied to the published SHAP vector, with a per-key daily ε budget. The mechanism ships with documented neighbouring relations, closed-form sensitivities for logistic regression, and leaf bounds for tree ensembles.
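A minimal sketch of such a budgeted publisher, using the standard Laplace mechanism under an assumed L1 sensitivity (class and method names are illustrative, not the CipherExplain API):

```python
import math
import random

def _laplace(scale):
    # Inverse-CDF Laplace sampler (stdlib only).
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

class DPShapPublisher:
    """Per-key daily epsilon budgeting for repeated queries on the same
    subject. `sens` is the L1 sensitivity of the SHAP vector (closed-form
    for logistic regression, leaf-bound for tree ensembles)."""
    def __init__(self, daily_eps=1.0):
        self.daily_eps = daily_eps
        self.spent = {}          # key_id -> epsilon spent today

    def publish(self, key_id, phi, sens, eps):
        if self.spent.get(key_id, 0.0) + eps > self.daily_eps:
            raise RuntimeError("daily epsilon budget exhausted for this key")
        self.spent[key_id] = self.spent.get(key_id, 0.0) + eps
        scale = sens / eps       # Laplace mechanism under L1 sensitivity
        return [v + _laplace(scale) for v in phi]

pub = DPShapPublisher(daily_eps=1.0)
noisy = pub.publish("subject-42", [0.4, -1.7, -0.5], sens=0.1, eps=0.5)
```

Once the key's daily budget is spent, further publishes are refused rather than silently degrading privacy.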

Measured latency

Production soak tests on commodity cloud hardware. Results are reproducible; full benchmark reports are available under NDA.

25.8s
Logistic regression p50 — CKKS end-to-end, 2 vCPU x86, d=50, K=100 importance-weighted, ext-basis BHDR (2026-05-12)
30.2s
Logistic regression p95 — same conditions
~73s
MLP (ReLU, degree-27 polynomial activation), 2 vCPU x86, d=50, K=390
~70s
XGBoost / LightGBM / DecisionTree via full-FHE OCTE, 2 vCPU AMD, T=100, D=4, K=40

Frequently asked by regulators

What is SHAP FHE for regulators?

SHAP FHE for regulators means computing SHAP feature-attribution explanations entirely under Fully Homomorphic Encryption. Sensitive features stay encrypted end-to-end; the inference server processes ciphertexts only. The decrypted output is an audit-ready attribution vector — usable for EU AI Act Article 13/86 documentation, GDPR Article 22 meaningful explanations, SR 11-7 model risk artefacts, and PRA SS1/23 file requests.

Why does the server never see plaintext features?

Inputs are encrypted on the client under CKKS before transmission. The server evaluates the model and computes SHAP values homomorphically. Results are returned as ciphertext and decrypted only on the client. This eliminates the trust assumption that the explanation provider keeps plaintext confidential, which is the central concern of GDPR Article 28 processor diligence and DPIA review.

Which models are supported under FHE?

Logistic regression and linear classifiers run under full CKKS with measured p50 latency of 25.8s on a 2 vCPU x86 cloud server. MLP (ReLU, degree-27 LP-optimal polynomial activation) runs at ~73s per explanation. XGBoost, LightGBM, and DecisionTree run via full-FHE OCTE at ~70s per explanation. RandomForest and GradientBoosting use a partial-FHE OCTE path. See CipherExplain for the full model matrix.

Does this satisfy the EU AI Act?

CipherExplain provides the technical primitive — encrypted, reproducible feature attributions with a model-version audit trail — that downstream compliance teams use to satisfy Article 13 (transparency to deployers) and Article 86 (post-decision explanation to affected persons) for high-risk AI systems. Compliance is a programme; this is a tested building block for that programme. The AI Act is phasing in across 2026–2027.

How is this different from regular SHAP?

Regular SHAP requires the explainer to see plaintext features. For regulated workloads — credit, insurance, healthcare, hiring — that plaintext detour creates a second copy of sensitive data and a second processor under GDPR. SHAP FHE removes the plaintext detour. The server is mathematically blind to inputs; only the data subject's own key can decrypt the explanation.

How do we evidence this in a supervisory examination?

Each call returns a signed envelope: model version hash, feature schema hash, sampler configuration, FHE parameter set, and the attribution vector itself. Pair with regaudit-fhe for fairness, drift, calibration, and provenance primitives in the same encrypted envelope.
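A minimal sketch of such an envelope, using an HMAC signature over the canonicalised fields (field names and the HMAC construction are an assumption for illustration, not the CipherExplain wire format):

```python
import hashlib
import hmac
import json

def sign_envelope(signing_key: bytes, model_bytes: bytes, schema: list,
                  sampler_cfg: dict, fhe_params: dict, phi: list) -> dict:
    """Build and sign an audit envelope for one explanation call."""
    envelope = {
        "model_version_hash": hashlib.sha256(model_bytes).hexdigest(),
        "feature_schema_hash": hashlib.sha256(
            json.dumps(schema).encode()).hexdigest(),
        "sampler_config": sampler_cfg,
        "fhe_params": fhe_params,
        "attribution": phi,
    }
    payload = json.dumps(envelope, sort_keys=True).encode()
    envelope["signature"] = hmac.new(
        signing_key, payload, hashlib.sha256).hexdigest()
    return envelope

def verify_envelope(signing_key: bytes, envelope: dict) -> bool:
    """Recompute the signature over all non-signature fields."""
    body = {k: v for k, v in envelope.items() if k != "signature"}
    payload = json.dumps(body, sort_keys=True).encode()
    return hmac.compare_digest(
        envelope["signature"],
        hmac.new(signing_key, payload, hashlib.sha256).hexdigest())
```

Because every field is under the signature, any post-hoc edit to the attribution vector, model hash, or parameter set invalidates the envelope, which is what makes it usable as primary evidence.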

What is the patent position?

VaultBytes Innovations Ltd has two PCT applications pending covering the underlying primitives (PCT/IB2026/053378 and PCT/IB2026/053405). Customers receive a perpetual licence under their commercial agreement.

Related products

CipherExplain →

The production API and SDK for SHAP FHE. Custom model registration, EU hosting, signed DPA, model-version audit trail, DP privacy budget controls. cipherexplain.html

regaudit-fhe →

Open-source depth-tracked audit primitives for privacy-preserving AI governance: fairness, drift, calibration, provenance, concordance, model disagreement. regaudit-fhe.html

Research index →

Underlying research: BHDR, PermNet-RM, KEM-CCT-Matrix. Reproducible benchmarks and peer-reviewable claims. /research/