Encrypted SHAP feature-attribution explanations for AI systems where sensitive input data must remain private — by mathematics, not by policy.
VaultBytes' CipherExplain computes SHAP values entirely under CKKS Fully Homomorphic Encryption. Inputs are encrypted on the client. The server evaluates the model and the attribution circuit homomorphically and returns ciphertext. Only the data subject's key can decrypt the explanation. This is the technical primitive regulated teams need to produce audit-ready AI explanations under the EU AI Act, GDPR Article 22, SR 11-7, and PRA SS1/23 — without creating a second plaintext copy of sensitive features at the explainer.
Conventional explainability creates a second plaintext copy of sensitive features at the explainer. SHAP FHE eliminates that copy. The compliance-relevant difference is structural, not configurational.
High-risk AI providers must give deployers explanations sufficient to interpret model outputs. SHAP FHE produces a reproducible per-decision attribution vector with a signed model-version trail — without requiring the deployer to surrender plaintext inputs to a third-party explainer.
Affected persons have a right to a meaningful explanation of decisions made by high-risk AI. SHAP FHE delivers a feature-level attribution decrypted only on the subject's side, eliminating the "explanation processor" exposure that plaintext SHAP requires.
Automated decisions need meaningful logic; processors need lawful basis and minimum-necessary data. SHAP FHE narrows the processor's data footprint to ciphertext only — there is no plaintext copy to govern, lose, or subpoena.
Model risk frameworks require independent validation and reproducible artefacts. CipherExplain emits versioned attribution vectors, a deterministic feature ordering, and signed envelopes — usable as primary evidence in model risk files and supervisory examinations.
The server is mathematically blind to plaintext features. A DPIA against a SHAP FHE deployment substitutes cryptographic guarantees (CKKS security level ≥128-bit) for organisational controls — easier to evidence and harder to defeat.
For workloads where the same subject is queried repeatedly, an (ε, δ)-differentially private mechanism is applied to the published SHAP vector under a per-key daily ε budget. Neighbouring relations are documented, with closed-form sensitivities for logistic regression and leaf-based bounds for tree ensembles.
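A minimal sketch of such a budgeted release, using the Laplace mechanism for the pure-ε case. The class and function names here are illustrative assumptions, not CipherExplain's actual API:

```python
import random


class EpsilonBudget:
    """Per-key daily epsilon accountant (illustrative name, not the product API)."""

    def __init__(self, daily_epsilon: float):
        self.daily_epsilon = daily_epsilon
        self.spent = 0.0

    def charge(self, epsilon: float) -> None:
        # Refuse the query rather than exceed the day's budget.
        if self.spent + epsilon > self.daily_epsilon:
            raise RuntimeError("daily epsilon budget exhausted")
        self.spent += epsilon


def laplace_noise(scale: float) -> float:
    # The difference of two exponentials with mean `scale` is Laplace(0, scale).
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)


def publish_shap(phi, l1_sensitivity: float, epsilon: float, budget: EpsilonBudget):
    """Release a SHAP vector with Laplace noise calibrated to its L1
    sensitivity under the documented neighbouring relation."""
    budget.charge(epsilon)
    scale = l1_sensitivity / epsilon
    return [v + laplace_noise(scale) for v in phi]
```

Per the text above, the L1 sensitivity has a closed form for logistic regression and a leaf-based bound for tree ensembles; the value passed in would come from that analysis.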
Production soak tests on commodity cloud hardware. Reproducible; full reports available under NDA.
SHAP FHE for regulators means computing SHAP feature-attribution explanations entirely under Fully Homomorphic Encryption. Sensitive features stay encrypted end-to-end; the inference server processes ciphertexts only. The decrypted output is an audit-ready attribution vector — usable for EU AI Act Article 13/86 documentation, GDPR Article 22 meaningful explanations, SR 11-7 model risk artefacts, and PRA SS1/23 file requests.
Inputs are encrypted on the client under CKKS before transmission. The server evaluates the model and computes SHAP values homomorphically. Results are returned as ciphertext and decrypted only on the client. This eliminates the trust assumption that the explanation provider keeps plaintext confidential, which is the central concern of GDPR Article 28 processor diligence and DPIA review.
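The shape of that protocol can be illustrated with a toy one-time additive mask standing in for CKKS. To be clear, this is not CKKS and has none of its security or slot packing; it only shows that a server can compute linear attributions without ever holding plaintext features:

```python
import random

MOD = 2**61 - 1   # toy modulus standing in for the CKKS plaintext space
SCALE = 10**6     # fixed-point scale, loosely mimicking CKKS encoding


def encode(v: float) -> int:
    return round(v * SCALE) % MOD


def decode2(c: int) -> float:
    # Centre the residue, then undo two fixed-point scalings (one per factor).
    if c > MOD // 2:
        c -= MOD
    return c / SCALE**2


class ToyClient:
    """One-time additive masking as a stand-in for CKKS encryption (illustration only)."""

    def __init__(self, x):
        self.masks = [random.randrange(MOD) for _ in x]
        self.ct = [(encode(v) + m) % MOD for v, m in zip(x, self.masks)]

    def decrypt_phi(self, ct_phi, w):
        # Strip each mask (scaled by the public weight), then decode.
        return [decode2((c - encode(wi) * m) % MOD)
                for c, m, wi in zip(ct_phi, self.masks, w)]


def server_linear_shap(ct, w, mu):
    """Server side: compute phi_i = w_i * (x_i - mu_i) for a linear model,
    operating on masked residues only. The server never sees x."""
    return [(encode(wi) * c - encode(wi) * encode(mi)) % MOD
            for c, wi, mi in zip(ct, w, mu)]
```

The client encrypts once, the server evaluates the attribution circuit on ciphertext, and only the client's masks (standing in for the secret key) recover the explanation.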
Logistic regression and linear classifiers run under full CKKS with measured p50 latency of 25.8s on a 2 vCPU x86 cloud server. MLP (with ReLU approximated by a degree-27 LP-optimal polynomial activation) runs at ~73s per explanation. XGBoost, LightGBM, and DecisionTree run via full-FHE OCTE at ~70s per explanation. RandomForest and GradientBoosting use a partial-FHE OCTE path. See CipherExplain for the full model matrix.
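Polynomial activations are needed because CKKS supports only additions and multiplications. As a rough illustration, here is a degree-27 Chebyshev interpolation of ReLU on [-1, 1]; the product's LP-optimal (minimax) polynomial is a different construction, so treat this as an approximation sketch, not the shipped activation:

```python
import math


def cheb_fit(f, degree: int):
    """Chebyshev-interpolation coefficients for f on [-1, 1], computed
    from samples at the Chebyshev nodes (standard DCT-II formula)."""
    n = degree + 1
    nodes = [math.cos(math.pi * (k + 0.5) / n) for k in range(n)]
    fv = [f(x) for x in nodes]
    coeffs = []
    for j in range(n):
        c = 2.0 / n * sum(fv[k] * math.cos(math.pi * j * (k + 0.5) / n)
                          for k in range(n))
        coeffs.append(c)
    coeffs[0] /= 2.0
    return coeffs


def cheb_eval(coeffs, x: float) -> float:
    """Clenshaw evaluation: additions and multiplications only, which is
    exactly the gate set an FHE circuit can realise."""
    b1 = b2 = 0.0
    for c in reversed(coeffs[1:]):
        b1, b2 = c + 2.0 * x * b1 - b2, b1
    return coeffs[0] + x * b1 - b2
```

At degree 27 the interpolant tracks ReLU to within a few hundredths across [-1, 1], which is why activations of roughly this degree appear in FHE inference pipelines.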
CipherExplain provides the technical primitive — encrypted, reproducible feature attributions with a model-version audit trail — that downstream compliance teams use to satisfy Article 13 (transparency to deployers) and Article 86 (post-decision explanation to affected persons) for high-risk AI systems. Compliance is a programme; this is a tested building block for that programme. The AI Act is phasing in across 2026–2027.
Regular SHAP requires the explainer to see plaintext features. For regulated workloads — credit, insurance, healthcare, hiring — that plaintext detour creates a second copy of sensitive data and a second processor under GDPR. SHAP FHE removes the plaintext detour. The server is mathematically blind to inputs; only the data subject's own key can decrypt the explanation.
Each call returns a signed envelope: model version hash, feature schema hash, sampler configuration, FHE parameter set, and the attribution vector itself. Pair with regaudit-fhe for fairness, drift, calibration, and provenance primitives in the same encrypted envelope.
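A hedged sketch of such an envelope, using canonical JSON and HMAC-SHA256 as a stand-in for whatever signature scheme CipherExplain actually uses; all field names here are illustrative, not the product's wire format:

```python
import hashlib
import hmac
import json


def sign_envelope(key: bytes, model_hash: str, schema_hash: str,
                  sampler_cfg: dict, fhe_params: dict, phi: list) -> dict:
    """Build a signed attribution envelope. Canonical JSON (sorted keys,
    fixed separators) makes the tag reproducible for audit replay."""
    body = {
        "model_version_hash": model_hash,
        "feature_schema_hash": schema_hash,
        "sampler": sampler_cfg,
        "fhe_params": fhe_params,
        "attribution": phi,
    }
    canonical = json.dumps(body, sort_keys=True, separators=(",", ":")).encode()
    tag = hmac.new(key, canonical, hashlib.sha256).hexdigest()
    return {"body": body, "sig": tag}


def verify_envelope(key: bytes, envelope: dict) -> bool:
    """Re-serialise the body and check the tag in constant time."""
    canonical = json.dumps(envelope["body"], sort_keys=True,
                           separators=(",", ":")).encode()
    expected = hmac.new(key, canonical, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, envelope["sig"])
```

Any post-hoc edit to the attribution vector, sampler configuration, or FHE parameter set invalidates the tag, which is what makes the envelope usable as primary evidence.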
VaultBytes Innovations Ltd has two PCT applications pending covering the underlying primitives (PCT/IB2026/053378 and PCT/IB2026/053405). Customers receive a perpetual licence under their commercial agreement.
The production API and SDK for SHAP FHE. Custom model registration, EU hosting, signed DPA, model-version audit trail, DP privacy budget controls. cipherexplain.html
Open-source depth-tracked audit primitives for privacy-preserving AI governance: fairness, drift, calibration, provenance, concordance, model disagreement. regaudit-fhe.html
Underlying research: BHDR, PermNet-RM, KEM-CCT-Matrix. Reproducible benchmarks and peer-reviewable claims. /research/