./validate --fhe --explain --audit-ready
FHE Validation and Explainability. Two patent-pending technologies that make Fully Homomorphic Encryption production-ready: adversarial precision testing and encrypted SHAP explanations. With fhe_mode='ckks', the server never sees your data. Register your own models via a secure JSON API — no training data leaves your environment.
Two unsolved problems block Fully Homomorphic Encryption from reaching regulated production deployments.
Homomorphic schemes such as CKKS, BGV, and BFV accumulate noise with every operation. Precision bugs occupy roughly one part in 10^5 of the input space, so uniform random testing finds nothing. Production deployments need an automated CI/CD validation gate that does not exist anywhere in the FHE toolchain today.
GDPR and HIPAA require encryption of personal data. The EU AI Act, effective August 2026, requires feature-level explanations for high-risk AI. These two requirements collide head-on: explainability methods need plaintext access, encryption prevents plaintext access, and banks, healthcare networks, and hiring platforms have no compliant solution.
Both inventions ship with reproducible reference prototypes and validated benchmarks.
Adversarial search that finds CKKS precision bugs random testing misses entirely.
Compute feature attribution explanations entirely under FHE when using fhe_mode='ckks'.
Register any trained sklearn model with the API — weights only, no training data, no pickle. Your data never leaves your environment. With fhe_mode='ckks', SHAP explanations run server-side under FHE and results return to you encrypted.
sklearn — LogisticRegression, LinearSVC, and any object with coef_ / intercept_
PyTorch — nn.Linear or pure-linear nn.Sequential
Any other framework — from_weights(coef, intercept, ...)
Scaling — StandardScaler embedded in the spec; raw inputs auto-scaled on /explain_raw
Plain English. No maths required.
You send a feature vector — the attributes of the specific case you want explained. These are the same numbers your model used to make its prediction.
{
"model_id": "loan-risk-v1",
"features": [35, 55000, 0.3, 1, 8 ]
// ↑ ↑ ↑ ↑ ↑
// age income debt new yrs_emp
}
{
"prediction": 0.72,
"base_rate": 0.50,
"shap_values": [0.08, 0.18, -0.06, 0.02, 0.00],
"feature_names": ["age","income","debt","new","yrs"]
}
prediction: 0.72 — the model is 72% confident this applicant will repay. Your application maps this to "Approved" or "Low risk" — the label is your code's job, not ours.
base_rate: 0.50 — the average prediction across all applicants. This is the neutral starting point before any features are considered.
The SHAP values explain the gap from 0.50 to 0.72. Income drove most of it (+0.18). Debt ratio pulled it back (−0.06).
Each SHAP value is a signed number. Positive = pushed the prediction up. Negative = pushed it down. The size tells you how much relative to the other features.
Main approval driver. Income was the single biggest reason the model said yes.
Moderate positive signal. Added some confidence but was not the deciding factor.
Worked against approval. Still approved overall, but the debt ratio reduced confidence.
Near-zero impact. Being a new customer barely changed this prediction either way.
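These numbers fit together arithmetically: base_rate plus the SHAP values reconstructs the prediction, and sorting by absolute value ranks the drivers. A quick sketch using the response above (pure Python, nothing API-specific):

```python
# Sanity-check the additivity property: base_rate + sum(shap_values)
# reconstructs the prediction for this applicant.
base_rate = 0.50
shap_values = [0.08, 0.18, -0.06, 0.02, 0.00]
feature_names = ["age", "income", "debt", "new", "yrs"]

prediction = base_rate + sum(shap_values)
print(round(prediction, 2))  # 0.72 — matches the API response

# Rank features by absolute impact to find the main driver
ranked = sorted(zip(feature_names, shap_values),
                key=lambda p: abs(p[1]), reverse=True)
print(ranked[0])  # ('income', 0.18) — the single biggest reason
```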
Your features are whatever your model was trained on.
// Input
"features": [120, 7.2, 28.5, 1, 55 ]
//            ↑    ↑     ↑   ↑   ↑
//           bp glucose  bmi diabetic age

// Output
"prediction": 0.87   // → your app shows "High risk"
"shap_values": [0.05, 0.31, 0.08, 0.12, -0.09]
//               bp  glucose bmi  diab   age
The features are transaction and session attributes.
// Input
"features": [249.99, 2, 44, 1, 0 ]
//             ↑     ↑   ↑  ↑  ↑
//           amount hour country new vpn_flag

// Output
"prediction": 0.94   // → your app flags "Suspected fraud"
"shap_values": [0.02, 0.38, 0.11, 0.08, -0.05]
//               amt  hour  cty   new   vpn
The API works identically across all domains. The features, label names, and business logic all live in your application. CipherExplain handles the encrypted computation and returns numbers.
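Mapping the returned probability to a business label stays in your code. A minimal illustration — `to_label` and its thresholds are hypothetical, not part of the API:

```python
def to_label(prediction: float, domain: str) -> str:
    # Illustrative thresholds only — pick your own per domain and policy.
    thresholds = {
        "credit": [(0.7, "Approved"), (0.4, "Manual review"), (0.0, "Declined")],
        "health": [(0.8, "High risk"), (0.5, "Elevated"), (0.0, "Low risk")],
        "fraud":  [(0.9, "Suspected fraud"), (0.6, "Review"), (0.0, "Clear")],
    }
    for cutoff, label in thresholds[domain]:
        if prediction >= cutoff:
            return label

print(to_label(0.72, "credit"))  # Approved
print(to_label(0.94, "fraud"))   # Suspected fraud
```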
The oracle ran its adversarial search and could not find any inputs where your FHE model diverges beyond the threshold you set.
// HTTP 200 — exit code 0 in CI
{
"verdict": "PASS",
"max_error": 0.0003, // largest gap found
"threshold": 0.01, // your limit
"tests_run": 500,
"time_seconds": 0.04
}
// In plain English:
// "We tried 500 adversarial inputs.
// The worst difference we found was 0.03%.
// Your FHE circuit matches the original model."
The oracle found at least one input where the encrypted model produces a significantly wrong answer. The response tells you exactly where and how bad.
// HTTP 200 — exit code 1 in CI
{
"verdict": "FAIL",
"max_error": 0.491, // 49% wrong — catastrophic
"threshold": 0.01,
"worst_input": [1.72, 2.14, 1.42, ...],
"plaintext_out": 0.893, // what it should be
"fhe_out": 0.402, // what it returned
"distance_from_center": 0.59
}
// In plain English:
// "At input [1.72, 2.14, ...] your FHE model
// returned 0.40 but should have returned 0.89.
// Fix your noise budget or modulus schedule
// near this input region before deploying."
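A CI step consuming that JSON can be as small as this sketch. `gate` is a hypothetical helper, and the dict mirrors the FAIL response above; in a real pipeline you would pass its result to sys.exit so the merge is blocked:

```python
# Minimal sketch of a CI gate reacting to the oracle verdict.
def gate(report: dict) -> int:
    if report["verdict"] == "PASS":
        print(f"PASS — max_error {report['max_error']} "
              f"within threshold {report['threshold']}")
        return 0
    print(f"FAIL at {report['worst_input']}: "
          f"expected {report['plaintext_out']}, got {report['fhe_out']}")
    return 1

exit_code = gate({
    "verdict": "FAIL", "max_error": 0.491, "threshold": 0.01,
    "worst_input": [1.72, 2.14, 1.42],
    "plaintext_out": 0.893, "fhe_out": 0.402,
})
print(exit_code)  # 1 — block the merge
```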
If your model distinguishes between more than two categories, the API returns a probability for each class:
// e.g. diagnosing between three conditions
"predictions": [0.05, 0.87, 0.08]
"dominant_class": 1

// Your application maps index → label:
// 0 → "Healthy"
// 1 → "Type 2 Diabetes"   ← predicted
// 2 → "Pre-diabetic"
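Picking the label from a multi-class response is a one-line argmax in your application; a sketch using the response above:

```python
# dominant_class is simply the index of the largest probability.
predictions = [0.05, 0.87, 0.08]
labels = ["Healthy", "Type 2 Diabetes", "Pre-diabetic"]

dominant = max(range(len(predictions)), key=predictions.__getitem__)
print(labels[dominant])  # Type 2 Diabetes
```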
Every number above is reproducible from the working prototype. Validation suites and benchmarks are available under NDA.
Two products. Both work with a single API key.
Sign up instantly — no waitlist. Enter your work email, verify with a 6-digit code, and your key arrives immediately.
→ Get a free key (3 runs/month) or upgrade to Developer (£299/mo).
vb_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
When you encrypt a model with FHE (Fully Homomorphic Encryption), the encrypted version should produce the same outputs as the original model. In practice, encryption introduces tiny rounding errors — and occasionally those errors are catastrophically large in specific regions of the input space. Standard random testing never finds these bugs because they occupy roughly 1 in 100,000 possible inputs.
Two Python functions: your original model and its FHE-compiled version. The Oracle calls both with the same inputs and measures where they disagree.
The largest error found is below your threshold (e.g. 0.01). Your CI pipeline gets exit code 0. Safe to deploy.
A divergence was found. Exit code 1 + a PDF report showing exactly which inputs triggered it and by how much. Block the merge, fix the circuit.
Create .github/workflows/fhe-check.yml in your repo:
name: FHE Precision Test
on: [push, pull_request]

jobs:
  fhe-check:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run FHE Oracle
        run: |
          curl -X POST \
            https://cipherexplain.vaultbytes.com/oracle/run \
            -H "X-API-Key: ${{ secrets.VAULTBYTES_API_KEY }}" \
            -F "circuit=@src/my_circuit.py" \
            --fail-with-body
          # Exit 0 = PASS, Exit 1 = BUG FOUND (see report)
Three required exports in src/my_circuit.py:
INPUT_DIM = 10  # how many input features your model takes

def plaintext(x):
    # Your original (unencrypted) model.
    # This is the ground truth.
    return float(model.predict(x))

def fhe_simulated(x):
    # Your FHE-compiled model.
    # This is what we're testing.
    return float(fhe_model.run(x))

# The Oracle calls both with the same inputs
# and finds the worst-case difference between them.
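Conceptually the Oracle is a differential tester over these two exports. A simplified sketch of that loop — the real service uses adversarial search, while this stand-in just samples randomly and uses toy models in place of yours:

```python
import random

INPUT_DIM = 3

def plaintext(x):
    # Stand-in for the original model
    return sum(x) / len(x)

def fhe_simulated(x):
    # Stand-in with a small constant "noise" offset
    return sum(x) / len(x) + 1e-4

# Track the worst disagreement seen so far
worst_err, worst_x = 0.0, None
for _ in range(500):
    x = [random.uniform(-3, 3) for _ in range(INPUT_DIM)]
    err = abs(plaintext(x) - fhe_simulated(x))
    if err > worst_err:
        worst_err, worst_x = err, x

THRESHOLD = 0.01
verdict = "PASS" if worst_err < THRESHOLD else "FAIL"
print(verdict, worst_err)  # the toy offset stays well below 0.01
```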
Load the demo model, then call /explain_raw with raw (unscaled) values:
import requests
BASE = "https://cipherexplain.vaultbytes.com"
HDR = {"X-API-Key": "vb_..."}
# One-time: load the built-in demo credit model
requests.post(f"{BASE}/startup", headers=HDR)
# Send the raw feature values for one person
r = requests.post(f"{BASE}/explain_raw", headers=HDR,
    json={
        "model_id": "credit_model",
        "features": [38, 13, 0, 0, 40]
        #             ↑   ↑  ↑  ↑   ↑
        #            age edu marital occup hrs/week
    }
)
data = r.json()
# What comes back:
# data["prediction"] → 0.48 (48% probability — your app maps to a label)
# data["base_rate"] → 0.50 (average across all cases — the neutral baseline)
# data["shap_values"] → [0.12, -0.31, 0.05, 0.03, 0.09]
# data["feature_names"] → ["age", "education-num", "marital", "occup", "hours"]
#
# Reading the SHAP values:
# education-num: -0.31 → biggest factor, pushed prediction DOWN
# age: 0.12 → pushed it up
# hours/week: 0.09 → positive signal
# occupation: 0.05 → small positive
# marital: 0.03 → almost no effect
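The reading above can be generated mechanically by sorting the SHAP values by absolute size. A small sketch using the values from the response:

```python
# Rank the SHAP values from the response into a readable explanation.
shap_values = [0.12, -0.31, 0.05, 0.03, 0.09]
feature_names = ["age", "education-num", "marital", "occup", "hours"]

ranked = sorted(zip(feature_names, shap_values),
                key=lambda p: abs(p[1]), reverse=True)
for name, v in ranked:
    direction = "UP" if v > 0 else "DOWN"
    print(f"{name:>14}: {v:+.2f}  pushed prediction {direction}")
```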
curl -s -X POST \
  https://cipherexplain.vaultbytes.com/explain_raw \
  -H "X-API-Key: vb_..." \
  -H "Content-Type: application/json" \
  -d '{
        "model_id": "credit_model",
        "features": [38, 13, 0, 0, 40]
      }' | python3 -m json.tool
POST /startup → load demo credit model
GET /models → list your registered models
POST /models/register → register your own model
DELETE /models/{id} → remove a model
POST /explain → SHAP (pre-scaled features)
POST /explain_raw → SHAP (raw values, auto-scaled)
POST /report → generate PDF audit report
POST /keys/rotate → rotate your API key
GET /usage → quota used this month
GET /health → status (no key needed)
Your model and data stay local. Only trained weights (numbers) are sent — no training data, no pickle files, no arbitrary code.
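What "weights only" means in practice: a linear classifier is fully determined by its coefficient vector and intercept. A self-contained sketch of the forward pass those numbers define (the values are made up):

```python
import math

# A logistic-regression model is nothing more than these numbers:
# prediction = sigmoid(coef · x + intercept). Registration sends
# only such arrays — no training data, no serialized objects.
coef = [0.8, 1.2, -0.5]
intercept = -0.3

def predict_proba(x):
    z = sum(w * xi for w, xi in zip(coef, x)) + intercept
    return 1.0 / (1.0 + math.exp(-z))

print(round(predict_proba([1.0, 0.5, 2.0]), 3))  # 0.525
```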
pip install cipherexplain
from cipherexplain_sdk import CipherExplainClient, extract_spec, from_weights
# --- sklearn (LogisticRegression, LinearSVC) ---
from sklearn.linear_model import LogisticRegression
spec = extract_spec(LogisticRegression().fit(X_s, y),
                    "my_lr", feature_names, scaler=scaler)
# --- PyTorch nn.Linear ---
import torch.nn as nn
spec = extract_spec(nn.Linear(n, 1), "my_pt", feature_names)
# --- Any other framework (TF, JAX, statsmodels, R, ...) ---
# Export coef/intercept as arrays, then:
spec = from_weights(coef, intercept, "my_model",
                    feature_names, classes=[0, 1])
# Register and explain
client = CipherExplainClient(api_key="vb_...")
client.register(spec)
result = client.explain_raw("my_model", x_raw)
print(result["shap_values"])
Each API key has a model slot quota by tier:
Delete a model to free its slot:
client.delete("my_model")
Rotate your API key at any time — all registered models move automatically:
result = client.rotate_key()
# result["new_key"] → "vb_..."
# Your old key stops working immediately.
Register models, run explanations, rotate keys — all from Python.
pip install cipherexplain
For client-side CKKS encryption (fhe_mode='ckks') add the [fhe] extra:
pip install 'cipherexplain[fhe]'
from cipherexplain_sdk import (
    CipherExplainClient,
    extract_spec,   # sklearn / PyTorch
    from_weights,   # any other framework
)
client = CipherExplainClient(api_key="vb_...")
# Register your model (weights only — no data sent)
spec = extract_spec(model, "my_model", feature_names)
client.register(spec)
# Explain any input
result = client.explain_raw("my_model", x_raw)
print(result["shap_values"])
# Key rotation — old key deactivated immediately
new = client.rotate_key()
print(new["new_key"]) # save this
Interactive reference — try every endpoint directly in your browser.
Try every endpoint live. Paste your API key once and run requests directly from the browser.
Clean read-only API reference. Best for sharing with your team or reading offline.
Paste your vb_... key into the X-API-Key field.

fhe_mode='ckks' enables full CKKS homomorphic encryption. Your input is encrypted on your machine before transmission. The server evaluates the model and computes SHAP values without decrypting at any point. Results are returned encrypted and decrypted locally by your SDK. Measured at 9.2 s end-to-end on an Apple M1 (single-threaded OpenFHE 1.2, d=50 features, 128-bit security). Supported for logistic_regression models. Async batch workflows recommended.
Built for regulated deployments — credit, insurance, healthcare, hiring. Annual contracts, DPA, SLA, and on-prem available.
FREE — £0 forever
Free and Team tiers are for evaluation and non-production workloads. Regulated deployments (banks, insurers, health, hiring) require the Business or Enterprise plans for signed DPA, Art. 13 attestation, SLA, and audit evidence.
Business and Enterprise contracts include committed monthly volume with overage pricing negotiated at signing. Standard rates:
Team plan customers can enable per-call overage (£0.08/SHAP, £1.50/oracle) via POST /account/payg/enable with spend cap. Not recommended for regulated production workloads — use Business or Enterprise instead.
MANAGE YOUR ACCOUNT
Enter your API key to manage billing, cancel, enable PAYG, or check usage — all automated, no emails needed.
Manage subscription: cancel, update card, download invoices · No cancellation fees · Access continues to end of billing period
Three audiences, one suite.
Your customers need a CI/CD validation gate for FHE programs. We have one. License the patent and integrate, or partner on co-development.
Your regulated customers need compliance-grade audit reports and explainable encrypted predictions. CipherExplain provides both as a single suite.
If you deploy AI in credit, insurance, healthcare, or hiring, the EU AI Act requires explanations. If you also handle personal data, GDPR requires encryption. CipherExplain is the only path that satisfies both.
Examples: Banks, insurers, healthcare networks, employment platforms.
Already have a trained linear classifier? Register it in two lines of Python — sklearn, PyTorch, TensorFlow, JAX, or raw weight arrays. Only numbers travel over HTTPS. No training data, no pickle, no arbitrary code execution.
pip install cipherexplain
Both inventions are filed under the Patent Cooperation Treaty (PCT) with priority date April 7, 2026. International search reports expected August 2027. National phase entry deadline October 2028. Coverage spans 150+ countries via PCT.
System and Method for Adversarial Noise-Guided Differential Testing of Fully Homomorphic Encryption Programs
Homomorphic Encrypted Model Explanation: Computing SHAP Values Under FHE
Free tier is instant — verify your email and start in 30 seconds. Paid tiers via Stripe. Enterprise contracts available.
Enterprise licensing, NDA evaluations, and custom model adapter development also available — contact us.