Spotlight on open-source blockbusters from the European Trustworthy AI Foundation and beyond

SESSION TIME

15:15 - 16:15 (UTC+1)

Parallel Tracks:
Trustworthy AI Engineering

Join an in-depth session spotlighting key software tools from the international ecosystem, led by their principal maintainers. Gain firsthand insights, best practices, and previews of upcoming features directly from the experts who build and shape these tools.

FOCUS

Pitches

PARMA-light

PARMA-light, a framework for building AI assessment platforms:
PARMA-light is a lightweight prototype implementation of an AI assessment framework whose design features facilitate the management and execution of containerized AI tests in a modular, reproducible, and automatable fashion, while also providing a high degree of built-in auditability.

Daniel Becker, Team Lead AI Assessment Operations | Fraunhofer IAIS
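
To make the pattern PARMA-light targets more concrete, here is a minimal, generic sketch of a containerized, auditable test run using the Docker SDK for Python. It is not PARMA-light's actual API, and the test specification shown is purely hypothetical; it only illustrates the idea of pinning a test definition, running it in a container, and keeping a traceable record of the run.

```python
import datetime
import hashlib
import json

import docker  # Docker SDK for Python

# Hypothetical test definition: which image to run and with what command.
test_spec = {
    "name": "robustness-check",
    "image": "python:3.11-slim",
    "command": ["python", "-c", "print('accuracy: 0.93')"],
}

client = docker.from_env()
started = datetime.datetime.now(datetime.timezone.utc).isoformat()

# Run the containerized test; logs are returned as bytes once it finishes.
logs = client.containers.run(test_spec["image"], test_spec["command"], remove=True)

# Audit record: hash the exact spec and record timestamps and output
# so the run stays reproducible and traceable.
audit = {
    "test": test_spec["name"],
    "spec_sha256": hashlib.sha256(
        json.dumps(test_spec, sort_keys=True).encode()
    ).hexdigest(),
    "started": started,
    "output": logs.decode().strip(),
}
print(json.dumps(audit, indent=2))
```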

Xplique

Xplique, an Explainability Toolbox for Neural Networks:
Xplique is a Python library offering a comprehensive suite of explainability methods for deep learning models, including attribution methods, feature visualization tools, concept-based techniques, and example-based approaches such as prototypes and counterfactuals. By providing these tools, Xplique enables users to gain insights into model behavior, enhancing transparency and trust in AI systems.

Agustin Martin Picard, AI Research Engineer | IRT Saint Exupéry
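
As a quick orientation, the sketch below follows the attribution workflow Xplique documents: wrap a trained model in an attribution method, then explain a batch of inputs and targets. The model and data here are placeholders, and exact class and argument names should be checked against the current Xplique documentation.

```python
import tensorflow as tf
from xplique.attributions import Saliency  # one of Xplique's attribution methods

# Placeholder model and data; in practice these would be a trained classifier
# and a real batch of inputs with their targets.
model = tf.keras.Sequential([
    tf.keras.layers.Input((32, 32, 3)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10),  # logits out, no softmax
])
x_batch = tf.random.uniform((4, 32, 32, 3))
y_batch = tf.one_hot([0, 1, 2, 3], depth=10)  # one-hot targets

# Wrap the model in an attribution method and explain a batch of inputs.
explainer = Saliency(model)
explanations = explainer.explain(x_batch, y_batch)  # one attribution map per input
print(explanations.shape)
```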

DQM-ML

DQM-ML, a Data Quality Metrics library:
DQM-ML incorporates more than a dozen methods, divided into inherent metrics, which intrinsically evaluate the data by assessing the connection between the data and its application domain (such as representativeness, diversity, and completeness), and system-dependent metrics, which integrate system behavior into the assessment (such as coverage and domain gap metrics).

Faouzi Adjed, AI Architect / Research Engineer / Standardization Expert | IRT SystemX
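
To illustrate the difference between the two metric families, here is a generic sketch (not DQM-ML's API) computing one inherent diversity metric (the Gini-Simpson index) and a simple representativeness check against an assumed uniform class balance, using NumPy and SciPy.

```python
import numpy as np
from scipy.stats import chisquare

# Toy label set standing in for a dataset's class annotations.
labels = np.array(["car", "car", "truck", "bike", "car", "bike", "truck", "car"])

# Inherent metric 1: diversity (Gini-Simpson index).
# Values near 0 mean one class dominates; values near 1 mean classes are evenly spread.
_, counts = np.unique(labels, return_counts=True)
proportions = counts / counts.sum()
diversity = 1.0 - np.sum(proportions ** 2)

# Inherent metric 2: representativeness against an expected class balance for the
# application domain (assumed uniform across the observed classes in this sketch).
expected = np.full(len(counts), counts.sum() / len(counts))
stat, p_value = chisquare(counts, expected)

print(f"diversity={diversity:.3f}, representativeness p-value={p_value:.3f}")
```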

UQModels

UQModels, removing identified noise and corruption from data before inference:
UQModels provides machine learning and deep learning approaches that perform forecasting and anomaly detection with uncertainty quantification, building uncertainty and model indicators that increase confidence in AI model predictions.

Kevin Pasini, PhD in Machine Learning | IRT SystemX
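
As a rough illustration of the underlying idea (not UQModels' API), the sketch below fits two quantile regressors to obtain a prediction interval around a forecast and flags observations falling outside that interval as anomalies.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Synthetic time series with one injected anomaly.
rng = np.random.default_rng(0)
t = np.arange(500, dtype=float).reshape(-1, 1)
y = np.sin(t[:, 0] / 20.0) + rng.normal(scale=0.1, size=500)
y[400] += 1.5  # injected anomaly

# Fit lower and upper quantile models to quantify predictive uncertainty.
lo = GradientBoostingRegressor(loss="quantile", alpha=0.05).fit(t, y)
hi = GradientBoostingRegressor(loss="quantile", alpha=0.95).fit(t, y)

# Observations outside the 90% prediction interval are flagged as anomalies.
outside = (y < lo.predict(t)) | (y > hi.predict(t))
print("flagged indices:", np.where(outside)[0])
```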

Eclipse LMOS

An Open Kernel for the Future of Computation: Agentic Orchestration Loops
Karyo is an open, auditable kernel for orchestrating agentic loops (like OpenAI's Responses API). It enables full control over how AI systems reason, act, and use data, which is critical for building transparent, sovereign AI.

Arun Joseph, Co-founder & CEO | Masaic Agentic Systems
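
For readers unfamiliar with agentic loops, here is a minimal, self-contained sketch of the orchestration pattern such a kernel controls: a model proposes an action, a tool executes it, and the observation is fed back, with the full trace kept for auditability. All names here are hypothetical; this is not the Karyo or Eclipse LMOS API.

```python
from typing import Callable

# Hypothetical tool registry; real kernels would manage many tools and their policies.
TOOLS: dict[str, Callable[[str], str]] = {
    "lookup": lambda q: f"stubbed answer for '{q}'",
}

def fake_model(history: list[dict]) -> dict:
    # Stand-in for an LLM call: decide on one tool call, then finish.
    if not any(step["role"] == "tool" for step in history):
        return {"action": "lookup", "input": "capital of France"}
    return {"action": "finish", "input": history[-1]["content"]}

def run_agent(task: str, max_steps: int = 5) -> list[dict]:
    trace = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        decision = fake_model(trace)
        trace.append({"role": "assistant", "content": str(decision)})
        if decision["action"] == "finish":
            break
        observation = TOOLS[decision["action"]](decision["input"])
        trace.append({"role": "tool", "content": observation})
    return trace  # keeping the full trace is what makes the loop auditable

for step in run_agent("What is the capital of France?"):
    print(step)
```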