Frequently asked questions
We answer your questions about explainable AI, privacy and our platform.
What is explainable AI (XAI)?
Explainable AI is a set of techniques that clarify why a model made a given decision. It replaces the black box with concrete evidence: feature attributions, counterfactuals, and representative examples.
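To make the counterfactual idea concrete, here is a minimal sketch on a toy linear scoring model (illustrative only; the model, weights, and search strategy are assumptions, not the platform's actual algorithm):

```python
# Illustrative only: a minimal counterfactual search on a toy linear
# credit-scoring model. Approve when score >= 0.

def score(features, weights, bias):
    """Linear decision score."""
    return sum(w * x for w, x in zip(weights, features)) + bias

def counterfactual(features, weights, bias, index, step=0.1, max_steps=1000):
    """Nudge one feature upward until the decision flips.
    Returns the modified input, or None if no flip within max_steps."""
    candidate = list(features)
    for _ in range(max_steps):
        if score(candidate, weights, bias) >= 0:
            return candidate
        candidate[index] += step
    return None

weights = [0.5, -0.3]   # toy weights: income, debt
bias = -1.0
applicant = [1.2, 1.0]  # rejected: 0.5*1.2 - 0.3*1.0 - 1.0 = -0.7
cf = counterfactual(applicant, weights, bias, index=0)
# cf answers: "at what income would this same applicant be approved?"
```

The counterfactual is the evidence shown to the user: the smallest change to their input that would have changed the outcome.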
What is the difference between interpretability and explainability?
Interpretability is intrinsic to transparent models (decision trees, linear models), whose inner workings can be read directly. Explainability applies post-hoc methods (SHAP, LIME, Grad‑CAM) to explain complex models such as deep networks.
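For an intrinsically interpretable model, attributions fall out of the model itself. A minimal sketch for a linear model (the weights and baseline below are invented for illustration):

```python
# Illustrative only: for a transparent linear model, per-feature
# attributions can be read directly from the weights.

def linear_attributions(features, baseline, weights):
    """Contribution of each feature relative to a baseline input:
    attribution_i = w_i * (x_i - baseline_i).
    The attributions sum exactly to the change in the model's score."""
    return [w * (x - b) for w, x, b in zip(weights, features, baseline)]

weights  = [2.0, -1.0, 0.5]
baseline = [0.0, 0.0, 0.0]   # e.g. an average or "neutral" input
x        = [1.0, 2.0, 4.0]

attrs = linear_attributions(x, baseline, weights)
# attrs == [2.0, -2.0, 2.0]; their sum equals score(x) - score(baseline)
```

Post-hoc methods such as SHAP generalize this additive decomposition to models where the weights cannot be read off directly.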
Does it help with compliance (EU AI Act, GDPR)?
We provide traceability, decision logs and auditable explanations that support compliance (right to explanation, impact assessments, model governance).
What data is required?
Only the minimum needed to generate explanations. We support pseudonymization, anonymization and on‑prem deployments to preserve privacy.
Does it integrate with my current models?
Yes, via API. Compatible with common frameworks (TensorFlow, PyTorch, scikit‑learn) across vision, tabular and text use cases.
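As a hypothetical sketch of what an API call might look like (the endpoint URL, field names, and response shape below are all invented placeholders; consult the platform's actual API reference):

```python
# Hypothetical sketch: building a request to an explanation API.
# Endpoint, fields and response shape are illustrative, not real.
import json
from urllib import request

payload = {
    "model_id": "credit-scoring-v2",  # hypothetical identifier
    "framework": "scikit-learn",
    "instance": {"income": 42000, "debt_ratio": 0.31},
    "method": "shap",                 # or "lime", "grad-cam", ...
}

req = request.Request(
    "https://api.example.com/v1/explain",  # placeholder URL
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# Sending req would return per-feature attributions for the instance,
# e.g. {"attributions": {"income": 0.8, "debt_ratio": -0.4}}
```

The model itself stays where it is; only the inputs needed for the explanation are sent.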