[PATENT PENDING]

CipherExplain

./explain --fhe --audit-ready

Homomorphic Encrypted SHAP explanations. Patent-pending technology that computes SHAP feature attributions entirely under Fully Homomorphic Encryption — with fhe_mode='ckks', the server never sees your plaintext data. Register your own models via a secure JSON API — no training data leaves your environment.

PCT/IB2026/053405 · Custom Model API · LR + SVM + DT + RF + GB + MLP + XGBoost + LightGBM + CatBoost · SDK · DP-SHAP

The FHE Production Gap

Encrypted AI blocks explainability — and regulators now require both.

Encrypted AI cannot explain its decisions

GDPR, HIPAA-aligned programs, and sectoral controls often require strong safeguards for personal data, and encryption is one important technical measure. The EU AI Act (Art. 13 & 86) requires transparency and explanations for high-risk AI systems, with obligations phasing in from August 2026 through 2027. Feature-level explanations are one established approach to meet these transparency requirements. Combining the two is operationally hard: most explainability methods need plaintext access, which leaves banks, healthcare networks, and hiring platforms juggling pseudonymisation, on-prem builds, and contractual controls to bridge the gap.

Patent-Pending Invention

Reproducible reference prototype and validated benchmarks.

Homomorphic Encrypted SHAP

Compute feature attribution explanations entirely under FHE when using fhe_mode='ckks'.

Threat model. CipherExplain protects your input features against an honest-but-curious server. The server processes ciphertexts and returns encrypted results; your plaintext data never leaves your machine. This deployment assumes the server does not gain access to decryptions of ciphertexts it produced (no decryption oracle), consistent with standard CKKS usage patterns.

  • Compressed coalition sampling: O(d log d) instead of 2^d evaluations
  • Input encrypted on your machine — server operates on ciphertexts only
  • SIMD slot packing: 390 coalitions evaluated in 3 ciphertext ops (logistic regression) / 2 ciphertext ops (MLP with packing)
  • 128-bit CKKS security, N=2^15 (logistic regression) / N=2^16 (MLP), LP-optimal degree-27 ReLU approximation
  • Optional DP-SHAP — calibrated Gaussian noise over per-key daily ε budget
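The compressed-sampling idea above can be sketched in plain Python. This is a toy Kernel SHAP coalition sampler, not the patented compression scheme — the O(d log d) budget and size-stratified weighting are illustrative assumptions:

```python
import math
import random

def sample_coalitions(d, seed=0):
    """Draw O(d log d) coalition masks instead of enumerating all 2**d.

    Toy sketch: coalition size s is drawn with probability proportional
    to the Kernel SHAP weight summed over subsets of that size, which
    simplifies to (d - 1) / (s * (d - s)).
    """
    rng = random.Random(seed)
    budget = max(2 * d, int(d * math.log2(d)))        # O(d log d) samples
    sizes = list(range(1, d))                          # exclude empty / full coalition
    size_weights = [(d - 1) / (s * (d - s)) for s in sizes]
    coalitions = []
    for _ in range(budget):
        s = rng.choices(sizes, weights=size_weights)[0]
        mask = [0] * d
        for i in rng.sample(range(d), s):              # pick s features to include
            mask[i] = 1
        coalitions.append(tuple(mask))
    return coalitions

masks = sample_coalitions(50)
print(len(masks), "coalitions instead of", 2 ** 50)
```

At d=50 this draws a few hundred masks rather than ~10^15 subset evaluations; the production pipeline then packs these masks into CKKS slots.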

⚠️ CKKS encrypted mode by model family

  • Logistic regression, linear SVM, MLP (ReLU) — fully encrypted end-to-end.
  • XGBoost / LightGBM / DecisionTree — full-FHE path product via opt-in enable_fhe_octe=True. Sign gate, coalition composition, and path traversal stay encrypted; the server never learns which leaf was reached. Validated at T ≤ 100, D ≤ 4 on a 2 vCPU AMD cloud server (~70s per explanation).
  • RandomForest / GradientBoosting — opt-in via enable_fhe_octe=true on POST /models/register. FHE sign gate + encrypted coalition composition; path product evaluated in plaintext after decrypt (reveals which leaf was selected, not input features). Requires StandardScaler in the spec.
  • CatBoost — hosted plaintext TreeSHAP today; FHE circuit is research-stage and not yet exposed in the production API.
SHAP MAE 0.018 · FHE noise 1.35e-04 · axiom error 1.11e-16
Custom Model API · SDK · Key Rotation

Bring Your Own Model

Register any trained sklearn model with the API — weights only, no training data, no pickle. Your data never leaves your environment. With fhe_mode='ckks', SHAP explanations run server-side under FHE and results return to you encrypted.

  • Supported: logistic regression, linear SVM, RandomForest, GradientBoosting, DecisionTree, MLP (ReLU, 2–3 hidden layers), XGBoost, LightGBM, and CatBoost. Binary classification only.
  • Tree ensembles — RandomForestClassifier, GradientBoostingClassifier, DecisionTreeClassifier and regressor variants via high-level register_* helpers. SHAP error 0.05% (P4, 19/19 tests pass)
  • Gradient-boosted trees — register_xgboost() and register_lightgbm(); binary classifiers; full-FHE P26.2-PPK pipeline via enable_fhe_octe=True (T ≤ 100, D ≤ 4). ~70s per explanation on a 2 vCPU AMD cloud server (K=40 stratified, measured).
  • MLP (ReLU) — nn.Sequential / MLPClassifier via register_mlp(); CKKS-evaluated under LP-optimal degree-27 ReLU approximation. 73s per explanation measured on a prod 2 vCPU x86 cloud server via the diagonal-encoded coalition-packed path (d=50, K=390). Opt-in linear_surrogate=true (per-request) routes to a rank-1 Jacobian linear surrogate at ~7s per explanation with reported error_bound=0.15 (measured L∞ 0.062) — an explicit accuracy / latency trade for triage workflows.
  • CatBoost oblivious trees — register_catboost(); binary classification; hosted plaintext TreeSHAP. FHE under CKKS exists as a research circuit but is not yet wired into the production fhe_mode='ckks' path.
  • PyTorch MLP — register_pytorch_mlp(); nn.Sequential and custom nn.Module; ReLU only; extracts weights client-side, routes through the PANCE FHE path.
  • sklearn Pipeline auto-unwrap — register_pipeline() strips an embedded StandardScaler and registers the wrapped estimator.
  • Linear classifiers: sklearn LogisticRegression, LinearSVC, and any object with coef_ / intercept_ via from_weights()
  • Up to 512 features
  • Optional StandardScaler embedded in spec — raw inputs auto-scaled on /explain_raw
  • Per-key namespaced registry — tenants fully isolated
  • Models persist across server restarts (SQLite-backed)
  • Every explanation response carries model_version_id for audit trails — list versions via GET /models/{id}/versions
  • FreiKZG verifiable SHAP — every /explain response includes a cryptographic proof that φ = M · y (soundness 2^−249). Verify client-side in ~4ms. GET /models/{id}/commitment returns the KZG commitment. Live in prod (CE_FREI_KZG_ENABLED=true).
  • vFHE Bulletproofs binding — every /explain response carries a Pedersen commitment to the canonical ciphertext bytes plus a Schnorr Σ-IPA proof. The SDK's Layer 7 verifier re-derives the FS transcript and rejects any response the server tampered with. Composes with FreiKZG so a malicious operator cannot fabricate φ even if they replace the regression matrix M. Soundness 2^−128 against classical adversaries. Live in prod (CE_VFHE=1, p50 latency overhead +0.7%). See Corollary 3 (LCV-full-FHE).
  • Key rotation migrates all registered models automatically
  • Slot quotas: 1 model (free) · 10 (developer) · unlimited (enterprise)
Weights-only model registration — no training data, no pickle files, no code execution. Supports: logistic regression, linear SVC, RandomForest, GradientBoosting, DecisionTree, MLP (ReLU, sklearn + PyTorch), XGBoost, LightGBM, CatBoost.
JSON-only spec OMS Merkle attestation FreiKZG commitment vFHE binding (live) Per-tenant namespace
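What "weights only, JSON-only" looks like in practice can be sketched with a hypothetical spec dict — the field names below are illustrative, not the API's actual Pydantic schema:

```python
import json

# Hypothetical spec layout. The real schema is enforced server-side;
# the point is that only typed numeric fields and names travel --
# no pickle, no joblib, no code objects.
spec = {
    "model_id": "my_logreg",
    "model_type": "logistic_regression",
    "feature_names": ["age", "income", "debt", "new", "yrs_emp"],
    "coef": [[0.8, 1.4, -0.6, 0.1, 0.3]],   # trained weights, plain numbers
    "intercept": [-0.2],
    "classes": [0, 1],
}

payload = json.dumps(spec)       # pure JSON: nothing executable survives
print(len(payload), "bytes of numbers and names")
```

Because the payload is plain JSON, a reviewer can diff exactly what left the environment before approving the registration.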

Why It's Safe to Register Your Model

Five layers a procurement officer can independently verify before signing an MSA. Each is enforced by code in production today, not promised on a roadmap.

  1. No code execution at registration. The /models/register endpoint accepts a JSON-only spec — coefficients, intercepts, tree node arrays, MLP weight matrices. No pickle, no joblib, no __reduce__. The Pydantic schema rejects unknown keys; the engine deserialises only the typed fields. A malicious customer can't smuggle code; the server can't unpickle a back-door.
  2. Cryptographic Merkle attestation of weights. Every registration computes an OMS v1.0 (OpenSSF Model Signing) Merkle root over your tensors. The root is returned in the registration response and re-fetchable via GET /models/{id}/attestation. Every /explain response carries per-layer inclusion proofs against that root — the SDK's Layer 6 verifier rejects any response that touched weights you did not register. Reference: Garg et al., "Experimenting with Zero-Knowledge Proofs of Training," CCS 2023, eprint 2023/1345.
  3. Public commitment to the regression matrix. The Kernel-SHAP regression matrix M is public (depends only on (d, c), not on your data). At registration we publish a KZG commitment to M; GET /models/{id}/commitment returns it. The SDK's Layer 5 verifies every /explain's FreiKZG proof against that cached commitment. Soundness 2^−249 over BLS12-381.
  4. vFHE Bulletproofs binding (live). Every FHE-execute response carries a Pedersen commitment to the canonical ciphertext bytes plus a Schnorr Σ-IPA witness. SDK Layer 7 rejects any response a malicious operator tampered with after the encrypted compute. Composes with FreiKZG so even a server that controls every byte of the wire cannot fabricate φ. Soundness 2^−128 against classical adversaries. Latency overhead measured at +0.7% on the live prod soak.
  5. Per-tenant namespace isolation. Models live under sha256(your_X-API-Key) as the namespace key. Cross-tenant lookups return 404, never the wrong customer's model. The key_namespace_prefix in the attestation response confirms the registered model is in your namespace, not another tenant's.
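Layer 2's idea — a Merkle root over the registered tensors — can be reproduced with the standard library. This is a toy binary Merkle tree over raw weight bytes, not the OMS v1.0 wire format; the leaf encoding here is an assumption for illustration:

```python
import hashlib
import struct

def merkle_root(leaves):
    """Binary Merkle root over a list of byte strings (toy sketch,
    not the OMS v1.0 serialisation)."""
    level = [hashlib.sha256(leaf).digest() for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:                     # duplicate last node on odd levels
            level.append(level[-1])
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0].hex()

# One leaf per tensor: layer name + little-endian float64 weight bytes
tensors = {
    "coef":      [0.8, 1.4, -0.6, 0.1, 0.3],
    "intercept": [-0.2],
}
leaves = [name.encode() + struct.pack(f"<{len(v)}d", *v)
          for name, v in sorted(tensors.items())]
root = merkle_root(leaves)
print(root)   # same weights, same leaf encoding -> same root
```

The useful property is the one the SDK's Layer 6 relies on: re-registering identical weights reproduces the root, while changing any single weight changes it.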

All five layers ship in the SDK at pip install cipherexplain. Registering a model is a 4-line Python call; the SDK fetches and verifies the attestation automatically before returning the model handle. The cryptographic primitives are documented in Sections 4–7 of the v3 paper; reproduction code is available under evaluation NDA.

What Does It Actually Return?

Plain English. No maths required.

ENCRYPTED SHAP — THE INPUT AND OUTPUT

The input: one person, one decision

You send a feature vector — the attributes of the specific case you want explained. These are the same numbers your model used to make its prediction.

{
  "model_id": "loan-risk-v1",
  "features": [35,  55000, 0.3,  1,    8  ]
  //            ↑     ↑     ↑     ↑     ↑
  //           age income debt  new  yrs_emp
}

The output: a number and a breakdown

{
  "prediction":    0.72,
  "base_rate":     0.50,
  "shap_values":   [0.08, 0.18, -0.06, 0.02, 0.00],
  "feature_names": ["age","income","debt","new","yrs"]
}

prediction: 0.72 — the model is 72% confident this applicant will repay. Your application maps this to "Approved" or "Low risk" — the label is your code's job, not ours.

base_rate: 0.50 — the average prediction across all applicants. This is the neutral starting point before any features are considered.

The SHAP values explain the gap from 0.50 to 0.72. Income drove most of it (+0.18). Debt ratio pulled it back (−0.06).
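You can check this relationship (the efficiency axiom) on any response yourself — the SHAP values must sum to the gap between prediction and base rate:

```python
resp = {
    "prediction":  0.72,
    "base_rate":   0.50,
    "shap_values": [0.08, 0.18, -0.06, 0.02, 0.00],
}

gap = resp["prediction"] - resp["base_rate"]          # 0.22
assert abs(sum(resp["shap_values"]) - gap) < 1e-9     # efficiency axiom holds
```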

HOW TO READ SHAP VALUES

Each SHAP value is a signed number. Positive = pushed the prediction up. Negative = pushed it down. The size tells you how much relative to the other features.

income  +0.18

Main approval driver. Income was the single biggest reason the model said yes.

age      +0.08

Moderate positive signal. Added some confidence but was not the deciding factor.

debt     −0.06

Worked against approval. Still approved overall, but the debt ratio reduced confidence.

new      +0.02

Near-zero impact. Being a new customer barely changed this prediction either way.
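To produce a ranking like the one above programmatically, sort the features by the absolute value of their SHAP contribution:

```python
shap_values   = [0.08, 0.18, -0.06, 0.02, 0.00]
feature_names = ["age", "income", "debt", "new", "yrs"]

# Largest absolute contribution first, keeping the sign for direction
ranked = sorted(zip(feature_names, shap_values),
                key=lambda pair: abs(pair[1]), reverse=True)
for name, value in ranked:
    print(f"{name:8s} {value:+.2f}")
# income first (+0.18), then age, debt, new, yrs
```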

SAME API — ANY DOMAIN

Healthcare — disease risk

Your features are whatever your model was trained on.

// Input
"features": [120,  7.2,  28.5, 1,       55 ]
//            ↑     ↑     ↑     ↑         ↑
//           bp  glucose  bmi  diabetic  age

// Output
"prediction": 0.87   // → your app shows "High risk"
"shap_values": [0.05, 0.31, 0.08, 0.12, -0.09]
//              bp   glucose bmi  diab   age

Finance — fraud detection

The features are transaction and session attributes.

// Input
"features": [249.99, 2,    44,      1,    0  ]
//            ↑       ↑     ↑         ↑     ↑
//          amount  hour  country  new  vpn_flag

// Output
"prediction": 0.94   // → your app flags "Suspected fraud"
"shap_values": [0.02, 0.38, 0.11, 0.08, -0.05]
//             amt   hour  cty   new   vpn

The API works identically across all domains. The features, label names, and business logic all live in your application. CipherExplain handles the encrypted computation and returns numbers.

FHE MODE TRANSPARENCY

Every explanation response declares which FHE mode actually ran — so you always know whether your data was encrypted end-to-end or fell back to plaintext.

{
  "fhe_mode_requested": "execute",
  "fhe_mode_used":      "ckks_engine",
  "fhe_mode_reason":    null,
  "model_version_id":   "mv_01HXYZ..."
}
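A compliance-minded client can fail closed on these fields — reject any response where the mode that actually ran is not the encrypted engine. A minimal sketch using the field names from the example above:

```python
def require_ckks(resp):
    """Raise if the server fell back to anything other than ckks_engine."""
    used = resp.get("fhe_mode_used")
    if used != "ckks_engine":
        raise RuntimeError(
            f"expected ckks_engine, got {used!r} "
            f"(reason: {resp.get('fhe_mode_reason')!r})")
    return resp

resp = {
    "fhe_mode_requested": "execute",
    "fhe_mode_used":      "ckks_engine",
    "fhe_mode_reason":    None,
    "model_version_id":   "mv_01HXYZ...",
}
require_ckks(resp)   # passes; a plaintext fallback would raise
```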

Validation

Every number below is reproducible from the working prototype.

0.018
SHAP MAE vs ground-truth KernelSHAP (d=50, K=390, Bernstein-bounded)
1.11e-16
Efficiency axiom error (machine epsilon)
130×
SIMD reduction at d=50 (390 → 3 ciphertexts)
1.35e-04
FHE vs plaintext SHAP max difference
< 76s
Logistic regression — end-to-end CKKS latency (d=50, K=390 stratified-antithetic). Measured 50-call prod soak on a 2 vCPU x86 cloud server 2026-04-30: p50 72.2s, p95 74.1s, p99 75.9s.
73s
MLP (ReLU) — measured CKKS latency per explanation on a prod 2 vCPU x86 cloud server via the diagonal-encoded coalition-packed path (d=50, K=390). LP-optimal degree-27 ReLU.
1.88%
SHAP relative error on breast_cancer (LP-optimal degree-27, down from 5.04% Chebyshev)
128-bit
CKKS security (HEStd_128_classic) + 2^−128 binding soundness (Bulletproofs Σ-IPA, live in prod via CE_VFHE=1, +0.7% latency overhead).
0.05%
Tree ensemble SHAP error — RandomForest / GradientBoosting (19/19 tests pass)
~70s
XGBoost / LightGBM / DecisionTree — full-FHE path-product latency per explanation (2 vCPU AMD cloud server, T=100 D=4 K=40, measured).

Full reproduction package and additional benchmarks available under evaluation NDA.

Technical Methodology

Specific parameters for every benchmark on this page. Reproducible.

CKKS benchmarks

All CKKS SHAP numbers on this page use:

  • OpenFHE 1.2.0, ring dimension N = 16384
  • Scaling factor 2^40, security level 128-bit (HEStd_128_classic)
  • MLP timing on a 2 vCPU x86 cloud server, degree-27 LP-optimal ReLU approximation
  • End-to-end wall-clock latency, measured not projected (unless stated)

Security level is a parameter-set estimate against the OpenFHE 1.2.0 HEStd_128_classic profile. Production deployments should re-validate against their chosen HE estimator (e.g. lattice-estimator) and library version.
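The ciphertext counts quoted on this page follow from the slot arithmetic: CKKS packs N/2 complex slots per ciphertext. Assuming one slot per coalition-feature value — one plausible packing that reproduces the published figure:

```python
import math

N = 16384            # ring dimension from this parameter set
slots = N // 2       # 8192 complex slots per CKKS ciphertext
d, K = 50, 390       # features, sampled coalitions

values = K * d                              # one masked value per coalition-feature pair
ciphertexts = math.ceil(values / slots)     # ceil(19500 / 8192)
print(ciphertexts)                          # 3 -> the 130x reduction (390 / 3)
```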

How to Use

Encrypted SHAP explanations — hosted API with a single key.

STEP 1 — GET YOUR KEY

Sign up instantly — no waitlist. Enter your work email, verify with a 6-digit code, and your key arrives immediately.

Get a free key (3 runs/month) or upgrade to Developer (£299/mo).

vb_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
STEP 2 — CIPHEREXPLAIN API (ENCRYPTED SHAP)

Python

Load the demo model, then call /explain_raw with raw (unscaled) values:

import requests

BASE = "https://cipherexplain.vaultbytes.com"
HDR  = {"X-API-Key": "vb_..."}

# One-time: load the built-in demo credit model
requests.post(f"{BASE}/startup", headers=HDR)

# Send the raw feature values for one person
r = requests.post(f"{BASE}/explain_raw", headers=HDR,
  json={
    "model_id": "credit_model",
    "features": [38,  13,    0,       0,      40 ]
    #             ↑    ↑      ↑        ↑        ↑
    #            age  edu  marital  occup  hrs/week
  }
)
data = r.json()

# What comes back:
# data["prediction"]    → 0.74   (74% probability — your app maps to a label)
# data["base_rate"]     → 0.50   (average across all cases — the neutral baseline)
# data["shap_values"]   → [0.12, -0.31, 0.05, 0.03, 0.09]
# data["feature_names"] → ["age", "education-num", "marital", "occup", "hours"]
#
# Reading the SHAP values:
#   education-num: -0.31 → biggest factor, pushed prediction DOWN
#   age:            0.12 → pushed it up
#   hours/week:     0.09 → positive signal
#   occupation:     0.05 → small positive
#   marital:        0.03 → almost no effect

curl

curl -s -X POST \
  https://cipherexplain.vaultbytes.com/explain_raw \
  -H "X-API-Key: vb_..." \
  -H "Content-Type: application/json" \
  -d '{
    "model_id": "credit_model",
    "features": [38, 13, 0, 0, 40]
  }' | python3 -m json.tool

All endpoints

POST /startup                  → load demo credit model
GET  /models                   → list your registered models
POST /models/register          → register your own model
GET  /models/{id}/versions     → list model versions (audit)
GET  /models/{id}/commitment   → KZG commitment for FreiKZG verify
DELETE /models/{id}            → remove a model
POST /explain                  → SHAP (pre-scaled features)
POST /explain_raw              → SHAP (raw values, auto-scaled)
POST /explain/batch            → async batch — webhook delivery
GET  /explain/batch/{job_id}   → poll batch job status
POST /report                   → generate PDF audit report
POST /keys/rotate              → rotate your API key
GET  /usage                    → quota used this month
GET  /usage/dp                 → remaining DP privacy budget (ε)
GET  /health                   → status (no key needed)
STEP 3 — BRING YOUR OWN MODEL (OPTIONAL)

Register any classifier

Your model and data stay local. Only trained weights (numbers) are sent — no training data, no pickle files, no arbitrary code.

pip install cipherexplain

from cipherexplain_sdk import CipherExplainClient
client = CipherExplainClient(api_key="vb_...")

# --- sklearn Pipeline auto-unwrap (embedded scaler stripped) ---
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
pipe = Pipeline([("scaler", StandardScaler()),
                 ("clf",    LogisticRegression())]).fit(X, y)
client.register_pipeline("my_model", pipe, feature_names, X_train=X)

# --- XGBoost binary classifier ---
import xgboost as xgb
booster = xgb.XGBClassifier().fit(X, y)
client.register_xgboost("my_xgb", booster, feature_names)

# --- LightGBM binary classifier ---
import lightgbm as lgb
gbm = lgb.LGBMClassifier().fit(X, y)
client.register_lightgbm("my_lgb", gbm, feature_names)

# --- MLP (ReLU) — CKKS-evaluated ---
from sklearn.neural_network import MLPClassifier
mlp = MLPClassifier(hidden_layer_sizes=(16, 8)).fit(X, y)
client.register_mlp("my_mlp", mlp, feature_names, X_train=X)

# --- Any other framework — raw weights ---
from cipherexplain_sdk import from_weights
spec = from_weights(coef, intercept, "my_linear",
                    feature_names, classes=[0, 1])
client.register(spec)

# Explain with full FHE + optional DP noise
result = client.explain_raw("my_model", x_raw,
                            fhe_mode="execute", apply_dp=True)
print(result["shap_values"])
print(result["model_version_id"])   # audit pin
print(result["fhe_mode_used"])       # "ckks_engine"
Model family · Hosted plaintext SHAP · CKKS / FHE mode · Notes

  • Logistic regression — plaintext: yes · FHE: yes (default) · production prototype, measured
  • Linear SVM — plaintext: yes · FHE: yes (default) · production prototype
  • MLP (ReLU) — plaintext: yes · FHE: yes (default) · production prototype, measured on prod 2 vCPU x86 (73s, diagonal-coalition path)
  • DecisionTree — plaintext: yes · FHE: opt-in (enable_fhe_octe) · full-FHE path product, bounded T/D
  • XGBoost / LightGBM — plaintext: yes · FHE: opt-in (enable_fhe_octe) · full-FHE path product, Enterprise compute
  • RandomForest / GradientBoosting — plaintext: yes · FHE: partial, opt-in via enable_fhe_octe=true on POST /models/register · FHE sign gate + encrypted coalition composition; path product evaluated in plaintext after decrypt. Requires StandardScaler in the spec.
  • CatBoost — plaintext: yes on every tier via register_catboost() · FHE: Business / Enterprise · full-FHE oblivious-tree circuit validated (sign-flip-free, axiom 1e-17, SHAP L∞ 0.008); deployed per-customer on a dedicated host for Business and Enterprise contracts.

Model slots & key rotation

Each API key has a model slot quota by tier:

  • Free — 1 model
  • Developer — 10 models
  • Enterprise — no limit

Delete a model to free its slot:

client.delete("my_model")

Rotate your API key at any time — all registered models move automatically:

result = client.rotate_key()
# result["new_key"] → "vb_..."
# Your old key stops working immediately.

Python SDK

Register models, run explanations, rotate keys — all from Python.

INSTALL

Python 3.9+ · License: AGPL v3 (commercial licence available).

pip install cipherexplain

(PyPI package publishing in progress — source available on GitHub below.)

For client-side CKKS encryption (fhe_mode='ckks') add the [fhe] extra:

pip install 'cipherexplain[fhe]'
QUICK START
from cipherexplain_sdk import CipherExplainClient, from_weights

client = CipherExplainClient(api_key="vb_...")

# Gradient-boosted trees
client.register_xgboost("my_xgb", xgb_model, feature_names)
client.register_lightgbm("my_lgb", lgb_model, feature_names)

# sklearn Pipeline — scaler auto-unwrapped
client.register_pipeline("my_pipe", pipe,
                         feature_names, X_train=X)

# MLP (CKKS-evaluated)
client.register_mlp("my_mlp", mlp, feature_names, X_train=X)

# Raw weights (TF, JAX, statsmodels, R, ...)
spec = from_weights(coef, intercept, "my_linear",
                    feature_names, classes=[0, 1])
client.register(spec)

# Explain with full FHE + optional DP noise
result = client.explain_raw("my_mlp", x_raw,
                            fhe_mode="execute", apply_dp=True)
print(result["shap_values"])
print(result["fhe_mode_used"])     # "ckks_engine"
print(result["model_version_id"])  # audit trail

# Async batch (compliance workflows)
job = client.explain_batch([x1, x2, x3],
                           model_id="my_mlp",
                           webhook_url="https://you/hook")
status = client.explain_batch_status(job["job_id"])

# DP budget
print(client.usage_dp())   # {"epsilon_remaining": 87.3, ...}

# Key rotation — old key deactivated immediately
new = client.rotate_key()
print(new["new_key"])  # save this

API Docs

Interactive reference — try every endpoint directly in your browser.

AUTHENTICATE IN SWAGGER
  1. Open cipherexplain.vaultbytes.com/docs
  2. Click Authorize (top right, 🔒 icon)
  3. Paste your vb_... key into the X-API-Key field
  4. Click Authorize, then Close
  5. Expand any endpoint and click Try it out
FULL CKKS ENCRYPTION MODE

fhe_mode='ckks' enables full CKKS homomorphic encryption. Your input is encrypted on your machine before transmission. The server evaluates the model and computes SHAP values without decrypting at any point. Results are returned encrypted and decrypted locally by your SDK.

  • Logistic regression — p50 72.2s / p95 74.1s / p99 75.9s end-to-end on a prod 2 vCPU x86 cloud server (d=50, K=390, measured 50-call soak 2026-04-30).
  • MLP (ReLU) — 73s per explanation on a prod 2 vCPU x86 cloud server via the diagonal-encoded coalition-packed path (d=50, K=390), LP-optimal degree-27 polynomial activation.
  • XGBoost / LightGBM / DecisionTree (opt-in enable_fhe_octe=True) — full-FHE path product, ~70s per explanation on a 2 vCPU AMD cloud server (T=100, D=4, K=40, measured).

For longer-running compliance workflows, use POST /explain/batch — async webhook delivery.

Cryptographic integrity (LIVE): every fhe_mode='ckks' response carries an X-Binding-Required: 1 header plus a binding_proof dict — a Pedersen commitment to the canonical ciphertext bytes (192 hex) and a Schnorr Σ-IPA witness (320 hex) over BLS12-381. The SDK's verify_binding_proof rejects any response a malicious operator tampered with; combined with FreiKZG integrity over the regression step (φ = M · y, soundness 2^−249), this means a server cannot fabricate explanations even if it controls every byte of the network path. Cryptographic soundness 2^−128 against classical adversaries. End-to-end latency overhead measured at +0.7% on the 5-call prod soak.

Privacy Controls

Three levels of privacy. Pick the one that matches your data-handling contract.

LEVEL 1

Plaintext SHAP

Fast, no encryption. Your features travel over HTTPS and the server computes SHAP in plaintext.

client.explain_raw(
    "my_model", x_raw)
LEVEL 2

FHE SHAP

Full CKKS homomorphic encryption. Your input is encrypted on your machine; the server never sees plaintext.

client.explain_raw(
    "my_model", x_raw,
    fhe_mode="execute")
LEVEL 3 — STRONGEST

FHE SHAP + DP

Encrypted compute plus a clipped Gaussian (ε,δ)-DP mechanism on the published SHAP vector. Reduces the leakage from repeated queries against the same subject under the documented neighbouring relation.

client.explain_raw(
    "my_model", x_raw,
    fhe_mode="execute",
    apply_dp=True)
DP PRIVACY BUDGET

DP-SHAP applies a clipped Gaussian mechanism to the published SHAP vector, providing (ε, δ)-differential privacy with respect to a documented neighbouring relation on the client input (l1_fractional, l1_single, or linf). The L₂ sensitivity Δ₂ is derived per model class — closed-form for logistic regression, leaf-bound for RandomForest / GradientBoosting; other model families fall back to plaintext SHAP. Production composition is linear in ε (a stricter accountant than zCDP); a zCDP PrivacyAccountant is available as a research utility. The mechanism protects the published attribution against input-reconstruction attacks; it does not provide DP for the underlying training data. Each apply_dp=True call consumes from a per-key daily ε budget.
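The mechanism described above can be sketched with the classic analytic Gaussian bound — a minimal illustration, not CipherExplain's calibrated implementation (which derives Δ₂ per model class rather than taking a fixed clip):

```python
import math
import random

def dp_shap(phi, clip_c, eps, delta, seed=None):
    """Clip phi to L2 norm clip_c, then add Gaussian noise for
    (eps, delta)-DP under sensitivity Delta2 = clip_c.
    Uses the classic bound sigma = clip_c * sqrt(2 ln(1.25/delta)) / eps."""
    rng = random.Random(seed)
    norm = math.sqrt(sum(v * v for v in phi))
    scale = min(1.0, clip_c / norm) if norm > 0 else 1.0
    clipped = [v * scale for v in phi]                 # bound the sensitivity
    sigma = clip_c * math.sqrt(2 * math.log(1.25 / delta)) / eps
    return [v + rng.gauss(0.0, sigma) for v in clipped]

noisy = dp_shap([0.08, 0.18, -0.06, 0.02, 0.00],
                clip_c=0.5, eps=1.0, delta=1e-5, seed=42)
print([round(v, 3) for v in noisy])
```

Each such call would draw ε from the per-key daily budget shown below; smaller ε means more noise on the published attributions.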

GET /usage/dp

{
  "epsilon_budget_daily": 100.0,
  "epsilon_spent_today":  12.7,
  "epsilon_remaining":    87.3,
  "resets_at":            "2026-04-19T00:00:00Z"
}

Pricing

Built for regulated deployments — credit, insurance, healthcare, hiring. Annual contracts, DPA, SLA, and on-prem available.

EU AI Act Art. 13 & 86 — phased 2026–2027 · GDPR Art. 28 DPA · SR 11-7 / PRA SS1/23 · SOC 2 Type II — in progress · ISO 27001 — in progress · PCT filed Apr 2026
DEVELOPER
£299
/ month
  • 1,000 SHAP calls / month
  • 10 custom model slots
  • API docs & SDK access
  • PDF audit reports
  • Email support
  • For ML teams evaluating CipherExplain
FOR REGULATED BUYERS
BUSINESS
£25,000
/ year · annual contract
  • 50,000 standard SHAP calls / month
  • Fair-use CKKS quota for linear / MLP models
  • Tree-FHE not included — Enterprise only
  • 50 custom model slots
  • EU hosting (EU data centre)
  • Signed DPA (GDPR Art. 28)
  • Model-version audit trail
  • DP privacy budget controls
  • Priority email support
  • Security questionnaire pack
  • Quarterly benchmark report
ENTERPRISE
From £75k
/ year · multi-year available
  • Custom CKKS quota
  • Dedicated FHE compute pool
  • Tree-FHE: full-FHE for XGBoost / LightGBM / DT; partial-FHE for RF / GB (opt-in enable_fhe_octe=True); priced by committed volume
  • Custom model adapters
  • SSO / SAML
  • VPC / on-prem deployment
  • SLA + named support engineer
  • Security review package
  • Optional private benchmark reproduction
  • Commercial (non-AGPL) licence
  • Patent licensing (PCT/IB2026/053405)

OEM / PATENT LICENSE — CUSTOM

For FHE platforms, GRC vendors, and embedded deployments. Volume pricing, field-of-use terms, and sub-licensable patent grants (PCT/IB2026/053405) negotiated directly.

DEVELOPERS & RESEARCHERS — free tier available

FREE — £0 forever

  • 50 SHAP calls / month
  • 1 model slot
  • Community support

WHAT'S MISSING

Free and Developer tiers are for evaluation and non-production workloads. Regulated deployments (banks, insurers, health, hiring) require the Business or Enterprise plans for signed DPA, Art. 13 attestation, SLA, and audit evidence.

OVERAGE & COMMITTED VOLUME PRICING

Business and Enterprise contracts include committed monthly volume with overage pricing negotiated at signing. Standard rates:

  • SHAP explain — from £0.05/call at 100k+/month commitment

Developer plan customers can enable per-call overage (£0.08/SHAP) via POST /account/payg/enable with spend cap. Not recommended for regulated production workloads — use Business or Enterprise instead.

MANAGE YOUR ACCOUNT

Enter your API key to manage billing, cancel, enable PAYG, or check usage — all automated, no emails needed.

Manage subscription: cancel, update card, download invoices · No cancellation fees · Access continues to end of billing period

For Builders & Integrators

Already running AI on encrypted data? CipherExplain plugs in as the SHAP layer.

Enterprise FHE Platforms

Your regulated customers need explainable encrypted predictions. CipherExplain plugs in as the SHAP layer that works under CKKS — no plaintext detour, no second key.

ML Engineers & Data Teams

Already have a trained model? Register it in two lines of Python — linear classifiers (sklearn or raw weights from any framework), tree ensembles (RandomForest, GradientBoosting, DecisionTree), gradient-boosted trees (XGBoost, LightGBM), CatBoost (plaintext TreeSHAP), or MLP (ReLU). sklearn Pipeline with an embedded scaler is auto-unwrapped. Only numbers travel over HTTPS.

pip install cipherexplain

Patent Status

Filed under the Patent Cooperation Treaty (PCT) with priority date April 7, 2026. International search report expected August 2027. National phase entry deadline October 2028. Coverage spans 150+ countries via PCT.

PCT/IB2026/053405

Homomorphic Encrypted Model Explanation: Computing SHAP Values Under FHE

Get Your API Key

Free tier is instant — verify your email and start in 30 seconds. Paid tiers via Stripe. Enterprise contracts available.

Enterprise licensing, NDA evaluations, and custom model adapter development also available — use the enterprise form below or email b@vaultbytes.com.

Enterprise Inquiry

For procurement, vendor onboarding, NDA evaluations, design partnerships, and pilots. Replies within one business day.

Submissions land in Netlify Forms and are forwarded to b@vaultbytes.com. We do not use this data for marketing. See Privacy.