- Towards the Visualization of Aggregated Class Activation Maps to Analyse the Global Contribution of Class Features. Igor Cherepanov, David Sessler, Alex Ulmer, Hendrik Lücke-Tieke, Jörn Kohlhammer. 3-23 [doi]
- Natural Example-Based Explainability: A Survey. Antonin Poché, Lucas Hervier, Mohamed Chafik Bakkay. 24-47 [doi]
- Explainable Artificial Intelligence in Education: A Comprehensive Review. Blerta Abazi Chaushi, Besnik Selimi, Agron Chaushi, Marika Apostolova. 48-71 [doi]
- Contrastive Visual Explanations for Reinforcement Learning via Counterfactual Rewards. Xiaowei Liu, Kevin McAreavey, Weiru Liu. 72-87 [doi]
- Compare-xAI: Toward Unifying Functional Testing Methods for Post-hoc XAI Algorithms into a Multi-dimensional Benchmark. Mohamed Karim Belaid, Richard Bornemann, Maximilian Rabus, Ralf Krestel, Eyke Hüllermeier. 88-109 [doi]
- Explainability in Practice: Estimating Electrification Rates from Mobile Phone Data in Senegal. Laura State, Hadrien Salat, Stefania Rubrichi, Zbigniew Smoreda. 110-125 [doi]
- A Novel Architecture for Robust Explainable AI Approaches in Critical Object Detection Scenarios Based on Bayesian Neural Networks. Daniel Gierse, Felix Neubürger, Thomas Kopinski. 126-147 [doi]
- Explaining Black-Boxes in Federated Learning. Luca Corbucci, Riccardo Guidotti, Anna Monreale. 151-163 [doi]
- PERFEX: Classifier Performance Explanations for Trustworthy AI Systems. Erwin Walraven, Ajaya Adhikari, Cor J. Veenman. 164-180 [doi]
- The Duet of Representations and How Explanations Exacerbate It. Charles Wan, Rodrigo Belo, Leid Zejnilovic, Susana Lavado. 181-197 [doi]
- Closing the Loop: Testing ChatGPT to Generate Model Explanations to Improve Human Labelling of Sponsored Content on Social Media. Thales Bertaglia, Stefan Huber, Catalina Goanta, Gerasimos Spanakis, Adriana Iamnitchi. 198-213 [doi]
- Human-Computer Interaction and Explainability: Intersection and Terminology. Arthur Picard, Yazan Mualla, Franck Gechter, Stéphane Galland. 214-236 [doi]
- Explaining Deep Reinforcement Learning-Based Methods for Control of Building HVAC Systems. Javier Jiménez Raboso, Antonio Manjavacas, Alejandro Campoy-Nieves, Miguel Molina-Solana, Juan Gómez-Romero. 237-255 [doi]
- Handling Missing Values in Local Post-hoc Explainability. Martina Cinquini, Fosca Giannotti, Riccardo Guidotti, Andrea Mattei. 256-278 [doi]
- Necessary and Sufficient Explanations of Multi-Criteria Decision Aiding Models, with and Without Interacting Criteria. Christophe Labreuche, Roman Bresson. 279-302 [doi]
- XInsight: Revealing Model Insights for GNNs with Flow-Based Explanations. Eli Laird, Ayesh Madushanka, Elfi Kraka, Corey Clark. 303-320 [doi]
- What Will Make Misinformation Spread: An XAI Perspective. Hongbo Bo, Yiwen Wu, Zinuo You, Ryan McConville, Jun Hong, Weiru Liu. 321-337 [doi]
- MEGAN: Multi-explanation Graph Attention Network. Jonas Teufel, Luca Torresi, Patrick Reiser, Pascal Friederich. 338-360 [doi]
- Quantifying the Intrinsic Usefulness of Attributional Explanations for Graph Neural Networks with Artificial Simulatability Studies. Jonas Teufel, Luca Torresi, Pascal Friederich. 361-381 [doi]
- Evaluating Link Prediction Explanations for Graph Neural Networks. Claudio Borile, Alan Perotti, André Panisson. 382-401 [doi]
- Propaganda Detection Robustness Through Adversarial Attacks Driven by eXplainable AI. Danilo Cavaliere, Mariacristina Gallo, Claudio Stanzione. 405-419 [doi]
- Explainable Automated Anomaly Recognition in Failure Analysis: is Deep Learning Doing it Correctly? Leonardo Arrighi, Sylvio Barbon Junior, Felice Andrea Pellegrino, Michele Simonato, Marco Zullich. 420-432 [doi]
- DExT: Detector Explanation Toolkit. Deepan Chakravarthi Padmanabhan, Paul G. Plöger, Octavio Arriaga, Matias Valdenegro-Toro. 433-456 [doi]
- Unveiling Black-Boxes: Explainable Deep Learning Models for Patent Classification. Md Shajalal, Sebastian Denef, Md. Rezaul Karim, Alexander Boden, Gunnar Stevens. 457-474 [doi]
- HOLMES: HOLonym-MEronym Based Semantic Inspection for Convolutional Image Classifiers. Francesco Dibitonto, Fabio Garcea, André Panisson, Alan Perotti, Lia Morra. 475-498 [doi]
- Evaluating the Stability of Semantic Concept Representations in CNNs for Robust Explainability. Georgii Mikriukov, Gesina Schwalbe, Christian Hellert, Korinna Bade. 499-524 [doi]
- Beyond One-Hot-Encoding: Injecting Semantics to Drive Image Classifiers. Alan Perotti, Simone Bertolotto, Eliana Pastor, André Panisson. 525-548 [doi]
- Finding Spurious Correlations with Function-Semantic Contrast Analysis. Kirill Bykov, Laura Kopf, Marina M.-C. Höhne. 549-572 [doi]
- Explaining Search Result Stances to Opinionated People. Zhangyi Wu, Tim Draws, Federico Cau, Francesco Barile, Alisa Rieger, Nava Tintarev. 573-596 [doi]
- A Co-design Study for Multi-stakeholder Job Recommender System Explanations. Roan Schellingerhout, Francesco Barile, Nava Tintarev. 597-620 [doi]
- Explaining Socio-Demographic and Behavioral Patterns of Vaccination Against the Swine Flu (H1N1) Pandemic. Clara Punzi, Aleksandra Maslennikova, Gizem Gezici, Roberto Pellungrini, Fosca Giannotti. 621-635 [doi]
- Semantic Meaningfulness: Evaluating Counterfactual Approaches for Real-World Plausibility and Feasibility. Jacqueline Höllig, Aniek F. Markus, Jef de Slegte, Prachi Bagave. 636-659 [doi]