EXTRA: Explainable and Trustworthy Applications
Motivation
Software systems increasingly operate in critical, data-intensive, and socially embedded contexts. Explainability, trustworthiness, and responsibility are no longer optional; they are essential. From Artificial Intelligence (AI)-driven decision-making to immersive digital experiences and socio-technical infrastructures, ensuring that systems behave transparently, securely, and ethically is a major challenge for both academia and industry.
The EXTRA (Explainable and Trustworthy Applications) track provides a dedicated venue for contributions addressing the foundations, design, development, and operation of trustworthy and responsible software systems. We invite submissions presenting research advances, industrial practices, empirical evaluations, and experience reports that enhance the transparency, security, reliability, and ethical compliance of complex software systems.
This track particularly encourages interdisciplinary contributions combining software engineering, artificial intelligence, human-computer interaction, ethics, and system dependability.
We encourage the submission of full papers and short papers that demonstrate the benefits, challenges, and lessons learned in the design, development, deployment, and evolution of explainable and trustworthy software systems. Submissions may address technical, methodological, or organizational aspects related to the adoption and application of practices, tools, and techniques that foster transparency, fairness, security, and ethical compliance. We particularly welcome papers providing empirical evidence, industrial experience, or case studies illustrating how organizations are integrating explainability and trustworthiness into their software engineering processes and complex system architectures.
Topics
Topics of interest include, but are not restricted to:
- Theories, models, and methods for explainability, transparency, and interpretability in software and AI systems
- Fairness, accountability, and responsible AI in system design and operation
- Software ethics, governance, and regulatory compliance
- Security, privacy, and vulnerability management as enablers of trustworthiness
- Security, safety, and reliability in AI-enabled and autonomous systems
- Architectures, tools, and methodologies for building and assessing explainable and trustworthy systems
- Integration of explainability and trust in interactive and immersive environments (e.g., digital twins, metaverse platforms, and collaborative virtual workspaces)
- Trustworthy engineering of intelligent and foundation model–based systems (e.g., LLMs, multimodal architectures, decision-support tools, and AI-enabled agents)
- Human factors, transparency, and governance in software-intensive socio-technical systems
- Applications of AI for enhancing security and trustworthiness (e.g., AI for security requirements engineering, secure design, security risk analysis, coding and testing, documentation generation, threat detection and cyber threat intelligence processing)
- Applications of explainable and trustworthy principles in emerging domains, including digital health, finance, law, sustainability, and critical infrastructures
Track/Session Organizers
- Katja Tuma, k.tuma@tue.nl, Eindhoven University of Technology, The Netherlands
- Giammaria Giordano, giagiordano@unisa.it, University of Salerno, Italy
- Fabio Palomba, fpalomba@unisa.it, University of Salerno, Italy
Program Committee
- Bernhard Berger, Hamburg University of Technology
- Michel Chaudron, Eindhoven University of Technology
- Nicolás Díaz Ferreyra, Hamburg University of Technology
- Jamal El Hachem, Université de Bretagne Sud (UBS)
- Emanuele Iannone, Hamburg University of Technology
- Tong Li, Beijing University of Technology
- Elena Lisova, Mälardalen University / Volvo Construction Equipment
- Luana Martins, University of Salerno
- Alessandra Parziale, Gran Sasso Science Institute
- Viviana Pentangelo, University of Salerno
- Maura Pintor, University of Cagliari
- Gilberto Recupito, University of Salerno
- Mersedeh Sadeghi, University of Cologne
