Dataiku Introduces Kiji Inspector, an Open‑Source Explainability Framework for AI Agents

Dataiku has introduced Kiji Inspector, one of the first open-source explainability frameworks purpose-built for enterprise AI agents. The first model family supported by Kiji Inspector is NVIDIA's Nemotron family of open models.

As enterprises move toward sovereign AI and build more of their own AI infrastructure, combining NVIDIA's Nemotron models with Dataiku's Kiji Inspector helps ensure organizations maintain clear visibility into how AI-driven decisions are made. Kiji Inspector provides built-in explainability for agent decisions, directly addressing one of the most pressing challenges in enterprise AI: the black-box problem. At the core of Kiji Inspector is a Sparse Autoencoder that looks inside the model at the moment it commits to a tool, identifying the signals behind that choice and translating them into clear explanations teams can understand, trace, validate, and trust, without slowing the system down.
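To make the idea concrete: a sparse autoencoder (SAE) projects a model's dense hidden activations into a much larger, sparsely active feature space, so that only a handful of interpretable features fire for any given decision. The sketch below is a minimal, generic illustration of that technique, not Dataiku's actual implementation; the dimensions, weights, and the `sae_explain` helper are all hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: hidden state width and (larger) SAE feature count.
D_MODEL, D_FEATURES = 64, 256

# Randomly initialized weights stand in for a trained SAE.
W_enc = rng.normal(0.0, 0.1, (D_MODEL, D_FEATURES))
b_enc = np.zeros(D_FEATURES)
W_dec = rng.normal(0.0, 0.1, (D_FEATURES, D_MODEL))

def sae_explain(hidden, top_k=5):
    """Encode a hidden activation into sparse features and return the
    indices of the strongest ones -- the 'signals' behind a tool choice."""
    # ReLU encoding yields a non-negative, mostly-zero feature vector.
    features = np.maximum(hidden @ W_enc + b_enc, 0.0)
    # The decoder reconstructs the original activation from those features.
    reconstruction = features @ W_dec
    # The top-k most active features are the candidate explanation.
    top = np.argsort(features)[::-1][:top_k]
    return features, reconstruction, top

# Stand-in for an LLM hidden state captured at the tool-call decision point.
hidden = rng.normal(size=D_MODEL)
features, recon, top = sae_explain(hidden)
print("top features:", top)
```

In a real deployment, each feature index would be mapped to a human-readable label learned during SAE training, so the top-k features can be rendered as a plain-language explanation of why the agent chose a given tool.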

“Enterprises are embedding AI agents into decisions that influence revenue, safety, compliance, and customer trust, yet most still lack structural visibility into how those systems reason,” said Hannes Hapke, Director of 575 Lab at Dataiku. “Without explainability, scaling AI means scaling uncertainty. Bringing Kiji Inspector to NVIDIA Nemotron open models changes that equation. It enables organizations to inspect and refine AI explainability before risk becomes reality. This is essential as agentic systems move from experimentation to trusted infrastructure.”

This release builds on the broader alignment between Dataiku and NVIDIA to deliver production-grade generative and agentic AI. NVIDIA Nemotron open models provide production-grade performance and advanced capabilities required for enterprise AI agentic systems. Dataiku provides scalable orchestration, connecting data platforms, enterprise applications, and AI services within a single, governed framework.

“Scaling autonomous AI agents across the enterprise demands trust rooted in transparency and accountability,” said Amanda Saunders, Director of Generative AI, NVIDIA. “Open models like NVIDIA Nemotron give organizations visibility into how their systems operate, enabling deeper understanding, auditability, and control. By combining Nemotron’s state-of-the-art open models with Kiji Inspector, users can understand what drove the agent’s LLM to a given decision.”

By extending Kiji Inspector to NVIDIA Nemotron, Dataiku enables enterprises to harness NVIDIA’s cutting-edge open-source AI without compromising on the model performance expected from closed-source model APIs. As AI agents become more autonomous and embedded in enterprise systems, explainable reasoning will be foundational to long-term AI success, regulatory readiness, and competitive differentiation.