Independent cryptography research — analysing known vulnerabilities, building provable fixes, and publishing everything open access.
Encrypted inference lets a client get a prediction without revealing their input, but if they also publish the SHAP explanation an attacker can often reconstruct the input. The standard fix — adding differential-privacy noise to the explanation — cannot simultaneously deliver conventional DP (ε ≤ 5) and useful signal (SNR ≥ 1) under global or smooth sensitivity. SHAP evaluates the model on K coalition subsets and fits a regression over the results: the explanation's sensitivity to the input grows as √K, so calibrated Gaussian noise must exceed the explanation itself at every privacy budget tight enough to resist reconstruction. Empirically confirmed via reconstruction attacks on UCI Adult and German Credit.
Every HQC implementation ships the same Reed-Muller encoder. It leaks individual message bits with 96.9% accuracy from a single power trace (Jeon et al., 2026/071). The root cause is algorithmic: the encoder tests each message bit to decide which generator rows to include. Any implementation of that algorithm will leak. PermNet-RM reformulates RM(1,m) encoding as the GF(2) zeta transform of a fixed indicator vector, computed via a fixed-topology butterfly network. Message bits enter as initial register state and are never read again. Zero timing spread across all 256 inputs. Drop-in replacement for reed_muller_encode().
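The zeta-transform reformulation can be sketched in a few lines. This is an illustration of the idea as described above, not the actual PermNet-RM code; the function name and signature are mine. Message bits are written into the register state once (the indicator vector: a₀ at index 0, aᵢ at the unit vectors 2^(i−1)), and the butterfly's loop bounds depend only on m, so no instruction or branch ever depends on a message bit.

```c
#include <stdint.h>
#include <string.h>

/* Illustrative sketch of RM(1,m) encoding as a GF(2) zeta transform:
 * cw[x] = XOR over subsets s of x of v[s], computed by a fixed-topology
 * butterfly. msg = {a0, a1, ..., am}; cw has length 2^m.               */
void rm1_encode_butterfly(const uint8_t *msg, int m, uint8_t *cw) {
    int n = 1 << m;
    memset(cw, 0, (size_t)n);
    cw[0] = msg[0] & 1;                 /* constant term a0            */
    for (int i = 0; i < m; i++)
        cw[1 << i] = msg[i + 1] & 1;    /* a_i at the unit vectors     */
    /* Butterfly: control flow depends only on the public parameter m,
     * never on message bits, so the schedule is input-independent.    */
    for (int i = 0; i < m; i++) {
        int half = 1 << i;
        for (int base = 0; base < n; base += half << 1)
            for (int j = base; j < base + half; j++)
                cw[j + half] ^= cw[j];
    }
}
```

The zeta transform of that indicator vector yields exactly cw[x] = a₀ ⊕ ⊕_{i : xᵢ=1} aᵢ, the RM(1,m) codeword — the only subsets of x carrying nonzero entries are the empty set and the singletons.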
Every version of Clang released since June 2022 silently transforms constant-time post-quantum code into timing-leaky binaries. The responsible component is x86-cmov-converter inside the LLVM x86 backend: it detects the BIT0MASK pattern, decides a conditional jump would be faster, and emits one — branching on a secret key bit. Confirmed across 9 Clang versions, 20 compiler/platform combinations, on Linux and Windows. A single build flag fixes all of them. It also makes the code 3.07× faster, because the "optimisation" was causing millions of branch mispredictions per operation.
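For context, the BIT0MASK idiom in question looks like the sketch below (names mine, pattern standard in constant-time code): expand a secret bit into an all-ones or all-zeros mask, then select with pure bitwise arithmetic so no branch touches the secret. It is this straight-line pattern that the pass rewrites into a conditional jump.

```c
#include <stdint.h>

/* BIT0MASK: expand the low bit into a full-width mask.
 * bit = 0 -> 0x000...0, bit = 1 -> 0xFFF...F                    */
static inline uint64_t bit0mask(uint64_t bit) {
    return (uint64_t)0 - (bit & 1);
}

/* Branchless select: returns a if bit == 1, else b. Written so the
 * source contains no secret-dependent branch -- the whole point.  */
static inline uint64_t ct_select(uint64_t a, uint64_t b, uint64_t bit) {
    uint64_t mask = bit0mask(bit);
    return (a & mask) | (b & ~mask);
}
```

The write-up above does not name the flag; LLVM's x86 backend does expose a hidden option for this pass (typically passed as `-mllvm -x86-cmov-converter=false`), which I assume is the mitigation meant — verify against the published advisory before relying on it.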