./explain --fhe --audit-ready
Homomorphic Encrypted SHAP explanations. Patent-pending technology that computes SHAP feature attributions entirely under Fully Homomorphic Encryption — with fhe_mode='ckks', the server never sees your plaintext data. Register your own models via a secure JSON API — no training data leaves your environment.
Encrypted AI blocks explainability — and regulators now require both.
GDPR, HIPAA-aligned programs, and sectoral controls often require strong safeguards for personal data, and encryption is one important technical measure. The EU AI Act (Art. 13 & 86) requires transparency and explanations for high-risk AI systems, with obligations phasing in from August 2026 through 2027. Feature-level explanations are one established approach to meet these transparency requirements. Combining the two is operationally hard: most explainability methods need plaintext access, which leaves banks, healthcare networks, and hiring platforms juggling pseudonymisation, on-prem builds, and contractual controls to bridge the gap.
Reproducible reference prototype and validated benchmarks.
Compute feature attribution explanations entirely under FHE when using fhe_mode='ckks'.
Threat model. CipherExplain protects your input features against an honest-but-curious server. The server processes ciphertexts and returns encrypted results; your plaintext data never leaves your machine. This deployment assumes the server does not gain access to decryptions of ciphertexts it produced (no decryption oracle), consistent with standard CKKS usage patterns.
⚠️ CKKS encrypted mode by model family
- enable_fhe_octe=True — sign gate, coalition composition, and path traversal stay encrypted; the server never learns which leaf was reached. Validated at T ≤ 100, D ≤ 4 on a 2 vCPU AMD cloud server (~70s per explanation).
- enable_fhe_octe=true on POST /models/register — FHE sign gate plus encrypted coalition composition; the path product is evaluated in plaintext after decryption (this reveals which leaf was selected, not the input features). Requires StandardScaler in the spec.

Register any trained sklearn model with the API — weights only, no training data, no pickle. Your data never leaves your environment. With fhe_mode='ckks', SHAP explanations run server-side under FHE and results return to you encrypted.
- RandomForestClassifier, GradientBoostingClassifier, DecisionTreeClassifier and regressor variants via high-level register_* helpers. SHAP error 0.05% (P4, 19/19 tests pass).
- register_xgboost() and register_lightgbm(); binary classifiers; full-FHE P26.2-PPK pipeline via enable_fhe_octe=True (T ≤ 100, D ≤ 4). ~70s per explanation on a 2 vCPU AMD cloud server (K=40 stratified, measured).
- nn.Sequential / MLPClassifier via register_mlp(); CKKS-evaluated under an LP-optimal degree-27 ReLU approximation. 73s per explanation measured on a prod 2 vCPU x86 cloud server via the diagonal-encoded coalition-packed path (d=50, K=390). Opt-in linear_surrogate=true (per request) routes to a rank-1 Jacobian linear surrogate at ~7s per explanation with reported error_bound=0.15 (measured L∞ 0.062) — an explicit accuracy/latency trade for triage workflows.
- register_catboost(); binary classification; hosted plaintext TreeSHAP. An FHE circuit under CKKS exists as research code but is not yet wired into the production fhe_mode='ckks' path.
- register_pytorch_mlp(); nn.Sequential and custom nn.Module; ReLU only; extracts weights client-side and routes through the PANCE FHE path.
- Pipeline auto-unwrap — register_pipeline() strips an embedded StandardScaler and registers the wrapped estimator.
- LogisticRegression, LinearSVC, and any object with coef_ / intercept_ via from_weights().
- StandardScaler embedded in the spec — raw inputs are auto-scaled on /explain_raw.
- model_version_id for audit trails — list versions via GET /models/{id}/versions.
- Every /explain response includes a cryptographic proof that φ = M · y (soundness 2^−249), verifiable client-side in ~4ms. GET /models/{id}/commitment returns the KZG commitment. Live in prod (CE_FREI_KZG_ENABLED=true).
- Every /explain response carries a Pedersen commitment to the canonical ciphertext bytes plus a Schnorr Σ-IPA proof. The SDK's Layer 7 verifier re-derives the FS transcript and rejects any response the server tampered with.
Composes with FreiKZG so a malicious operator cannot fabricate φ even if they replace the regression matrix M. Soundness 2^−128 against classical adversaries. Live in prod (CE_VFHE=1, p50 latency overhead +0.7%). See Corollary 3 (LCV-full-FHE).

Five layers a procurement officer can independently verify before signing an MSA. Each is enforced by code in production today, not promised on a roadmap.
- The /models/register endpoint accepts a JSON-only spec — coefficients, intercepts, tree node arrays, MLP weight matrices. No pickle, no joblib, no __reduce__. The Pydantic schema rejects unknown keys; the engine deserialises only the typed fields. A malicious customer can't smuggle code; the server can't unpickle a back-door.
- GET /models/{id}/attestation returns the attestation root. Every /explain response carries per-layer inclusion proofs against that root — the SDK's Layer 6 verifier rejects any response that touched weights you did not register. Reference: Garg et al., "Experimenting with Zero-Knowledge Proofs of Training," CCS 2023, eprint 2023/1345.
- The regression matrix M depends only on the model shape (d, c), not on your data. At registration we publish a KZG commitment to M; GET /models/{id}/commitment returns it. The SDK's Layer 5 verifies every /explain's FreiKZG proof against that cached commitment. Soundness 2^−249 over BLS12-381.
- Tenant isolation uses sha256(your_X-API-Key) as the namespace key. Cross-tenant lookups return 404, never the wrong customer's model. The key_namespace_prefix in the attestation response confirms the registered model is in your namespace, not another tenant's.
- All five layers ship in the SDK at pip install cipherexplain. Registering a model is a 4-line Python call; the SDK fetches and verifies the attestation automatically before returning the model handle. The cryptographic primitives are documented in Sections 4–7 of the v3 paper; reproduction code is available under evaluation NDA.
Plain English. No maths required.
You send a feature vector — the attributes of the specific case you want explained. These are the same numbers your model used to make its prediction.
{
"model_id": "loan-risk-v1",
"features": [35, 55000, 0.3, 1, 8 ]
// ↑ ↑ ↑ ↑ ↑
// age income debt new yrs_emp
}
{
"prediction": 0.72,
"base_rate": 0.50,
"shap_values": [0.08, 0.18, -0.06, 0.02, 0.00],
"feature_names": ["age","income","debt","new","yrs"]
}
prediction: 0.72 — the model estimates a 72% probability that this applicant will repay. Your application maps this to "Approved" or "Low risk" — the label is your code's job, not ours.
base_rate: 0.50 — the average prediction across all applicants. This is the neutral starting point before any features are considered.
The SHAP values explain the gap from 0.50 to 0.72. Income drove most of it (+0.18). Debt ratio pulled it back (−0.06).
Each SHAP value is a signed number. Positive = pushed the prediction up. Negative = pushed it down. The size tells you how much relative to the other features.
Main approval driver. Income was the single biggest reason the model said yes.
Moderate positive signal. Added some confidence but was not the deciding factor.
Worked against approval. Still approved overall, but the debt ratio reduced confidence.
Near-zero impact. Being a new customer barely changed this prediction either way.
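Both properties above — the values summing to the gap, and ranking by magnitude — are easy to exercise client-side. A minimal sketch using the example response (field and feature names as in the JSON above):

```python
resp = {
    "prediction": 0.72,
    "base_rate": 0.50,
    "shap_values": [0.08, 0.18, -0.06, 0.02, 0.00],
    "feature_names": ["age", "income", "debt", "new", "yrs"],
}

# Local accuracy: the base rate plus all attributions reproduces the prediction.
assert abs(resp["base_rate"] + sum(resp["shap_values"])
           - resp["prediction"]) < 1e-9

# Rank features by absolute impact, keeping the sign for direction.
ranked = sorted(zip(resp["feature_names"], resp["shap_values"]),
                key=lambda p: abs(p[1]), reverse=True)
# income (+0.18) first, then age (+0.08), then debt (-0.06)
```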
Your features are whatever your model was trained on.
// Input
"features": [120, 7.2, 28.5, 1, 55]
//            ↑    ↑     ↑   ↑   ↑
//           bp glucose  bmi diabetic age

// Output
"prediction": 0.87              // → your app shows "High risk"
"shap_values": [0.05, 0.31, 0.08, 0.12, -0.09]
//               bp  glucose bmi  diab   age
The features are transaction and session attributes.
// Input
"features": [249.99, 2, 44, 1, 0]
//              ↑    ↑   ↑  ↑  ↑
//           amount hour country new vpn_flag

// Output
"prediction": 0.94              // → your app flags "Suspected fraud"
"shap_values": [0.02, 0.38, 0.11, 0.08, -0.05]
//              amt   hour  cty   new    vpn
The API works identically across all domains. The features, label names, and business logic all live in your application. CipherExplain handles the encrypted computation and returns numbers.
Every explanation response declares which FHE mode actually ran — so you always know whether your data was encrypted end-to-end or fell back to plaintext.
{
"fhe_mode_requested": "execute",
"fhe_mode_used": "ckks_engine",
"fhe_mode_reason": null,
"model_version_id": "mv_01HXYZ..."
}
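Because the response declares the mode that actually ran, a client can refuse to accept a silent plaintext fallback. A defensive sketch (field names are from the response above; the exception class is ours, not part of the SDK):

```python
class PlaintextFallbackError(Exception):
    """Raised when an explanation did not run under the CKKS engine."""

def require_ckks(response: dict) -> dict:
    # Reject any response whose computation fell back to plaintext.
    if response.get("fhe_mode_used") != "ckks_engine":
        raise PlaintextFallbackError(
            f"expected ckks_engine, got {response.get('fhe_mode_used')!r} "
            f"(reason: {response.get('fhe_mode_reason')})")
    return response

ok = require_ckks({"fhe_mode_requested": "execute",
                   "fhe_mode_used": "ckks_engine",
                   "fhe_mode_reason": None})
```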
Every number below is reproducible from the working prototype.
Full reproduction package and additional benchmarks available under evaluation NDA.
Specific parameters for every benchmark on this page. Reproducible.
All CKKS SHAP numbers on this page use:
N = 16384, scaling factor 2^40, security level 128-bit (HEStd_128_classic). The security level is a parameter-set estimate against the OpenFHE 1.2.0 HEStd_128_classic profile; production deployments should re-validate against their chosen HE estimator (e.g. lattice-estimator) and library version.
Encrypted SHAP explanations — hosted API with a single key.
Sign up instantly — no waitlist. Enter your work email, verify with a 6-digit code, and your key arrives immediately.
→ Get a free key (3 runs/month) or upgrade to Developer (£299/mo).
vb_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Load the demo model, then call /explain_raw with raw (unscaled) values:
import requests
BASE = "https://cipherexplain.vaultbytes.com"
HDR = {"X-API-Key": "vb_..."}
# One-time: load the built-in demo credit model
requests.post(f"{BASE}/startup", headers=HDR)
# Send the raw feature values for one person
r = requests.post(f"{BASE}/explain_raw", headers=HDR,
json={
"model_id": "credit_model",
"features": [38, 13, 0, 0, 40 ]
# ↑ ↑ ↑ ↑ ↑
# age edu marital occup hrs/week
}
)
data = r.json()
# What comes back:
# data["prediction"] → 0.74 (74% probability — your app maps to a label)
# data["base_rate"] → 0.50 (average across all cases — the neutral baseline)
# data["shap_values"] → [0.12, -0.31, 0.05, 0.03, 0.09]
# data["feature_names"] → ["age", "education-num", "marital", "occup", "hours"]
#
# Reading the SHAP values:
# education-num: -0.31 → biggest factor, pushed prediction DOWN
# age: 0.12 → pushed it up
# hours/week: 0.09 → positive signal
# occupation: 0.05 → small positive
# marital: 0.03 → almost no effect
curl -s -X POST \
https://cipherexplain.vaultbytes.com/explain_raw \
-H "X-API-Key: vb_..." \
-H "Content-Type: application/json" \
-d '{
"model_id": "credit_model",
"features": [38, 13, 0, 0, 40]
}' | python3 -m json.tool
POST /startup → load demo credit model
GET /models → list your registered models
POST /models/register → register your own model
GET /models/{id}/versions → list model versions (audit)
GET /models/{id}/commitment → KZG commitment for FreiKZG verify
DELETE /models/{id} → remove a model
POST /explain → SHAP (pre-scaled features)
POST /explain_raw → SHAP (raw values, auto-scaled)
POST /explain/batch → async batch — webhook delivery
GET /explain/batch/{job_id} → poll batch job status
POST /report → generate PDF audit report
POST /keys/rotate → rotate your API key
GET /usage → quota used this month
GET /usage/dp → remaining DP privacy budget (ε)
GET /health → status (no key needed)
Your model and data stay local. Only trained weights (numbers) are sent — no training data, no pickle files, no arbitrary code.
pip install cipherexplain
from cipherexplain_sdk import CipherExplainClient
client = CipherExplainClient(api_key="vb_...")
# --- sklearn Pipeline auto-unwrap (embedded scaler stripped) ---
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
pipe = Pipeline([("scaler", StandardScaler()),
("clf", LogisticRegression())]).fit(X, y)
client.register_pipeline("my_model", pipe, feature_names, X_train=X)
# --- XGBoost binary classifier ---
import xgboost as xgb
booster = xgb.XGBClassifier().fit(X, y)
client.register_xgboost("my_xgb", booster, feature_names)
# --- LightGBM binary classifier ---
import lightgbm as lgb
gbm = lgb.LGBMClassifier().fit(X, y)
client.register_lightgbm("my_lgb", gbm, feature_names)
# --- MLP (ReLU) — CKKS-evaluated ---
from sklearn.neural_network import MLPClassifier
mlp = MLPClassifier(hidden_layer_sizes=(16, 8)).fit(X, y)
client.register_mlp("my_mlp", mlp, feature_names, X_train=X)
# --- Any other framework — raw weights ---
from cipherexplain_sdk import from_weights
spec = from_weights(coef, intercept, "my_linear",
feature_names, classes=[0, 1])
client.register(spec)
# Explain with full FHE + optional DP noise
result = client.explain_raw("my_model", x_raw,
fhe_mode="execute", apply_dp=True)
print(result["shap_values"])
print(result["model_version_id"]) # audit pin
print(result["fhe_mode_used"]) # "ckks_engine"
| Model family | Hosted plaintext SHAP | CKKS / FHE mode | Notes |
|---|---|---|---|
| Logistic regression | Yes | Yes (default) | Production prototype, measured |
| Linear SVM | Yes | Yes (default) | Production prototype |
| MLP (ReLU) | Yes | Yes (default) | Production prototype, measured on prod 2 vCPU x86 (73s, diagonal-coalition path) |
| DecisionTree | Yes | Opt-in (enable_fhe_octe) | Full-FHE path product, bounded T/D |
| XGBoost / LightGBM | Yes | Opt-in (enable_fhe_octe) | Full-FHE path product, Enterprise compute |
| RandomForest / GradientBoosting | Yes | Partial — opt-in via enable_fhe_octe=true on POST /models/register | FHE sign gate + encrypted coalition composition; path product evaluated in plaintext after decrypt. Requires StandardScaler in the spec. |
| CatBoost | Yes | Business / Enterprise | Hosted plaintext TreeSHAP via register_catboost() on every tier. Full-FHE oblivious-tree circuit validated (sign-flip-free, axiom 1e-17, SHAP L∞ 0.008); deployed per-customer on a dedicated host for Business and Enterprise contracts. |
Each API key has a model slot quota by tier:
Delete a model to free its slot:
client.delete("my_model")
Rotate your API key at any time — all registered models move automatically:
result = client.rotate_key()  # result["new_key"] → "vb_..."
# Your old key stops working immediately.
Register models, run explanations, rotate keys — all from Python.
Python 3.9+ · License: AGPL v3 (commercial licence available).
pip install cipherexplain
(PyPI package publishing in progress — source available on GitHub below.)
For client-side CKKS encryption (fhe_mode='ckks') add the [fhe] extra:
pip install 'cipherexplain[fhe]'
from cipherexplain_sdk import CipherExplainClient, from_weights
client = CipherExplainClient(api_key="vb_...")
# Gradient-boosted trees
client.register_xgboost("my_xgb", xgb_model, feature_names)
client.register_lightgbm("my_lgb", lgb_model, feature_names)
# sklearn Pipeline — scaler auto-unwrapped
client.register_pipeline("my_pipe", pipe,
feature_names, X_train=X)
# MLP (CKKS-evaluated)
client.register_mlp("my_mlp", mlp, feature_names, X_train=X)
# Raw weights (TF, JAX, statsmodels, R, ...)
spec = from_weights(coef, intercept, "my_linear",
feature_names, classes=[0, 1])
client.register(spec)
# Explain with full FHE + optional DP noise
result = client.explain_raw("my_mlp", x_raw,
fhe_mode="execute", apply_dp=True)
print(result["shap_values"])
print(result["fhe_mode_used"]) # "ckks_engine"
print(result["model_version_id"]) # audit trail
# Async batch (compliance workflows)
job = client.explain_batch([x1, x2, x3],
model_id="my_mlp",
webhook_url="https://you/hook")
status = client.explain_batch_status(job["job_id"])
# DP budget
print(client.usage_dp()) # {"epsilon_remaining": 87.3, ...}
# Key rotation — old key deactivated immediately
new = client.rotate_key()
print(new["new_key"]) # save this
Interactive reference — try every endpoint directly in your browser.
Try every endpoint live. Paste your API key once and run requests directly from the browser.
Clean read-only API reference. Best for sharing with your team or reading offline.
Paste your vb_... key into the X-API-Key field.

fhe_mode='ckks' enables full CKKS homomorphic encryption. Your input is encrypted on your machine before transmission. The server evaluates the model and computes SHAP values without decrypting at any point. Results are returned encrypted and decrypted locally by your SDK.

- Logistic regression: p50 72.2s / p95 74.1s / p99 75.9s end-to-end on a prod 2 vCPU x86 cloud server (d=50, K=390, measured 50-call soak 2026-04-30).
- MLP (ReLU): 73s per explanation on a prod 2 vCPU x86 cloud server via the diagonal-encoded coalition-packed path (d=50, K=390). LP-optimal degree-27 polynomial activation.
- XGBoost / LightGBM / DecisionTree (opt-in enable_fhe_octe=True): full-FHE path product, ~70s per explanation on a 2 vCPU AMD cloud server (T=100, D=4, K=40, measured).

For longer-running compliance workflows, use POST /explain/batch — async webhook delivery.
Cryptographic integrity (LIVE): every fhe_mode='ckks' response carries an X-Binding-Required: 1 header plus a binding_proof dict — a Pedersen commitment to the canonical ciphertext bytes (192 hex) and a Schnorr Σ-IPA witness (320 hex) over BLS12-381. The SDK's verify_binding_proof rejects any response a malicious operator tampered with; combined with FreiKZG integrity over the regression step (φ = M · y, soundness 2^−249), this means a server cannot fabricate explanations even if it controls every byte of the network path. Cryptographic soundness 2^−128 against classical adversaries. End-to-end latency overhead measured at +0.7% on the 5-call prod soak.
Three levels of privacy. Pick the one that matches your data-handling contract.
Fast, no encryption. Your features travel over HTTPS and the server computes SHAP in plaintext.
client.explain_raw(
"my_model", x_raw)
Full CKKS homomorphic encryption. Your input is encrypted on your machine; the server never sees plaintext.
client.explain_raw(
"my_model", x_raw,
fhe_mode="execute")
Encrypted compute plus a clipped Gaussian (ε,δ)-DP mechanism on the published SHAP vector. Reduces the leakage from repeated queries against the same subject under the documented neighbouring relation.
client.explain_raw(
"my_model", x_raw,
fhe_mode="execute",
apply_dp=True)
DP-SHAP applies a clipped Gaussian mechanism to the published SHAP vector, providing (ε, δ)-differential privacy with respect to a documented neighbouring relation on the client input (l1_fractional, l1_single, or linf). The L₂ sensitivity Δ₂ is derived per model class — closed-form for logistic regression, leaf-bound for RandomForest / GradientBoosting; other model families fall back to plaintext SHAP. Production composition is linear in ε (a stricter accountant than zCDP); a zCDP PrivacyAccountant is available as a research utility. The mechanism protects the published attribution against input-reconstruction attacks; it does not provide DP for the underlying training data. Each apply_dp=True call consumes from a per-key daily ε budget.
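The clipped-Gaussian step itself is standard: clip the SHAP vector to an L₂ bound Δ₂, then add Gaussian noise with σ = Δ₂·√(2 ln(1.25/δ))/ε (the classical analytic bound, valid for ε ≤ 1). A sketch of the mechanism in isolation — the Δ₂, ε, and δ values here are illustrative, not CipherExplain's production calibration:

```python
import math
import random

def dp_shap(shap_values, delta2, eps, delta, rng=None):
    """Clip to L2 norm delta2, then add Gaussian noise for (eps, delta)-DP."""
    rng = rng or random.Random(0)
    norm = math.sqrt(sum(v * v for v in shap_values))
    # Scale down only if the vector exceeds the sensitivity bound.
    scale = min(1.0, delta2 / norm) if norm > 0 else 1.0
    clipped = [v * scale for v in shap_values]
    # Classical Gaussian-mechanism calibration (Dwork-Roth bound).
    sigma = delta2 * math.sqrt(2 * math.log(1.25 / delta)) / eps
    return [v + rng.gauss(0.0, sigma) for v in clipped]

noisy = dp_shap([0.08, 0.18, -0.06, 0.02, 0.00],
                delta2=0.5, eps=1.0, delta=1e-5)
```

Note the mechanism is applied to the published attribution vector only, matching the scope described above: it does not add DP noise to the model or its training data.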
GET /usage/dp
{
"epsilon_budget_daily": 100.0,
"epsilon_spent_today": 12.7,
"epsilon_remaining": 87.3,
"resets_at": "2026-04-19T00:00:00Z"
}
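A client-side guard can keep a batch job from exhausting the daily budget mid-run. A minimal sketch using the GET /usage/dp response fields above (the per-call ε is an assumed example, not a documented rate):

```python
def dp_calls_remaining(usage: dict, eps_per_call: float) -> int:
    """How many apply_dp=True calls fit in today's remaining budget."""
    if eps_per_call <= 0:
        raise ValueError("eps_per_call must be positive")
    return int(usage["epsilon_remaining"] // eps_per_call)

usage = {"epsilon_budget_daily": 100.0,
         "epsilon_spent_today": 12.7,
         "epsilon_remaining": 87.3,
         "resets_at": "2026-04-19T00:00:00Z"}
print(dp_calls_remaining(usage, eps_per_call=0.5))  # → 174
```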
Built for regulated deployments — credit, insurance, healthcare, hiring. Annual contracts, DPA, SLA, and on-prem available.
enable_fhe_octe=True); priced by committed volume.

OEM / PATENT LICENSE — CUSTOM
For FHE platforms, GRC vendors, and embedded deployments. Volume pricing, field-of-use terms, and sub-licensable patent grants (PCT/IB2026/053405) negotiated directly.
FREE — £0 forever
WHAT'S MISSING
Free and Developer tiers are for evaluation and non-production workloads. Regulated deployments (banks, insurers, health, hiring) require the Business or Enterprise plans for signed DPA, Art. 13 attestation, SLA, and audit evidence.
Business and Enterprise contracts include committed monthly volume with overage pricing negotiated at signing. Standard rates:
Developer plan customers can enable per-call overage (£0.08/SHAP) via POST /account/payg/enable with spend cap. Not recommended for regulated production workloads — use Business or Enterprise instead.
MANAGE YOUR ACCOUNT
Enter your API key to manage billing, cancel, enable PAYG, or check usage — all automated, no emails needed.
Manage subscription: cancel, update card, download invoices · No cancellation fees · Access continues to end of billing period
Already running AI on encrypted data? CipherExplain plugs in as the SHAP layer.
Your regulated customers need explainable encrypted predictions. CipherExplain plugs in as the SHAP layer that works under CKKS — no plaintext detour, no second key.
Already have a trained model? Register it in two lines of Python — linear classifiers (sklearn or raw weights from any framework), tree ensembles (RandomForest, GradientBoosting, DecisionTree), gradient-boosted trees (XGBoost, LightGBM), CatBoost (plaintext TreeSHAP), or MLP (ReLU). sklearn Pipeline with an embedded scaler is auto-unwrapped. Only numbers travel over HTTPS.
pip install cipherexplain
Filed under the Patent Cooperation Treaty (PCT) with priority date April 7, 2026. International search report expected August 2027. National phase entry deadline October 2028. Coverage spans 150+ countries via PCT.
Homomorphic Encrypted Model Explanation: Computing SHAP Values Under FHE
Free tier is instant — verify your email and start in 30 seconds. Paid tiers via Stripe. Enterprise contracts available.
Enterprise licensing, NDA evaluations, and custom model adapter development also available — use the enterprise form below or email b@vaultbytes.com.
For procurement, vendor onboarding, NDA evaluations, design partnerships, and pilots. Replies within one business day.