- XAI Requirements in Smart Production Processes: A Case Study. Deborah Baum, Kevin Baum, Timo P. Gros, Verena Wolf. 3-24 [doi]
- Perlocution vs Illocution: How Different Interpretations of the Act of Explaining Impact on the Evaluation of Explanations and XAI. Francesco Sovrano, Fabio Vitali. 25-47 [doi]
- Dear XAI Community, We Need to Talk! - Fundamental Misconceptions in Current XAI Research. Timo Freiesleben, Gunnar König. 48-65 [doi]
- Speeding Things Up. Can Explainability Improve Human Learning? Jakob Mannmeusel, Mario Rothfelder, Samaneh Khoshrou. 66-84 [doi]
- Statutory Professions in AI Governance and Their Consequences for Explainable AI. Labhaoise NíFhaoláin, Andrew Hines, Vivek Nallur. 85-96 [doi]
- The Xi Method: Unlocking the Mysteries of Regression with Statistics. Valentina Ghidini. 97-114 [doi]
- Do Intermediate Feature Coalitions Aid Explainability of Black-Box Models? Minal Suresh Patil, Kary Främling. 115-130 [doi]
- Unfooling SHAP and SAGE: Knockoff Imputation for Shapley Values. Kristin Blesch, Marvin N. Wright, David S. Watson. 131-146 [doi]
- Strategies to Exploit XAI to Improve Classification Systems. Andrea Apicella, Luca Di Lorenzo, Francesco Isgrò, Andrea Pollastro, Roberto Prevete. 147-159 [doi]
- Beyond Prediction Similarity: ShapGAP for Evaluating Faithful Surrogate Models in XAI. Ettore Mariotti, Adarsa Sivaprasad, Jose Maria Alonso-Moral. 160-173 [doi]
- iPDP: On Partial Dependence Plots in Dynamic Modeling Scenarios. Maximilian Muschalik, Fabian Fumagalli, Rohit Jagtani, Barbara Hammer, Eyke Hüllermeier. 177-194 [doi]
- SAC-FACT: Soft Actor-Critic Reinforcement Learning for Counterfactual Explanations. Fatima Ezzeddine, Omran Ayoub, Davide Andreoletti, Silvia Giordano. 195-216 [doi]
- Algorithm-Agnostic Feature Attributions for Clustering. Christian A. Scholbeck, Henri Funk, Giuseppe Casalicchio. 217-240 [doi]
- Feature Importance versus Feature Influence and What It Signifies for Explainable AI. Kary Främling. 241-259 [doi]
- ABC-GAN: Spatially Constrained Counterfactual Generation for Image Classification Explanations. Dimitry Mindlin, Malte Schilling, Philipp Cimiano. 260-282 [doi]
- The Importance of Time in Causal Algorithmic Recourse. Isacco Beretta, Martina Cinquini. 283-298 [doi]
- Explaining Model Behavior with Global Causal Analysis. Marcel Robeer, Floris Bex, Ad Feelders, Henry Prakken. 299-323 [doi]
- Counterfactual Explanations for Graph Classification Through the Lenses of Density. Carlo Abrate, Giulia Preti, Francesco Bonchi. 324-348 [doi]
- Ablation Path Saliency. Justus Sagemüller, Olivier Verdier. 349-372 [doi]
- IxDRL: A Novel Explainable Deep Reinforcement Learning Toolkit Based on Analyses of Interestingness. Pedro Sequeira, Melinda T. Gervasio. 373-396 [doi]
- The Co-12 Recipe for Evaluating Interpretable Part-Prototype Image Classifiers. Meike Nauta, Christin Seifert. 397-420 [doi]
- Reason to Explain: Interactive Contrastive Explanations (REASONX). Laura State, Salvatore Ruggieri, Franco Turini. 421-437 [doi]
- Sanity Checks for Saliency Methods Explaining Object Detectors. Deepan Chakravarthi Padmanabhan, Paul G. Plöger, Octavio Arriaga, Matias Valdenegro-Toro. 438-455 [doi]
- Relating the Partial Dependence Plot and Permutation Feature Importance to the Data Generating Process. Christoph Molnar, Timo Freiesleben, Gunnar König, Julia Herbinger, Tim Reisinger, Giuseppe Casalicchio, Marvin N. Wright, Bernd Bischl. 456-479 [doi]
- Evaluating Feature Relevance XAI in Network Intrusion Detection. Julian Tritscher, Maximilian Wolf, Andreas Hotho, Daniel Schlör. 483-497 [doi]
- Cost of Explainability in AI: An Example with Credit Scoring Models. Jean Dessain, Nora Bentaleb, Fabien Vinas. 498-516 [doi]
- Lorenz Zonoids for Trustworthy AI. Paolo Giudici, Emanuela Raffinetti. 517-530 [doi]
- Explainable Machine Learning for Bag of Words-Based Phishing Detection. Maria Carla Calzarossa, Paolo Giudici, Rasha Zieni. 531-543 [doi]
- An Evaluation of Contextual Importance and Utility for Outcome Explanation of Black-Box Predictions for Medical Datasets. Avleen Malhi, Kary Främling. 544-557 [doi]
- Evaluating Explanations of an Alzheimer's Disease 18F-FDG Brain PET Black-Box Classifier. Lisa Anita De Santi, Filippo Bargagna, Maria Filomena Santarelli, Vincenzo Positano. 558-581 [doi]
- The Accuracy and Faithfullness of AL-DLIME - Active Learning-Based Deterministic Local Interpretable Model-Agnostic Explanations: A Comparison with LIME and DLIME in Medicine. Sarah Holm, Luís Macedo. 582-605 [doi]
- Understanding Unsupervised Learning Explanations Using Contextual Importance and Utility. Avleen Malhi, Vlad Apopei, Kary Främling. 606-617 [doi]
- Color Shadows 2: Assessing the Impact of XAI on Diagnostic Decision-Making. Chiara Natali, Lorenzo Famiglini, Andrea Campagner, Giovanni Andrea La Maida, Enrico Gallazzi, Federico Cabitza. 618-629 [doi]
- Federated Learning of Explainable Artificial Intelligence Models for Predicting Parkinson's Disease Progression. José Luis Corcuera Bárcena, Pietro Ducange, Francesco Marcelloni, Alessandro Renda, Fabrizio Ruffini. 630-648 [doi]
- An Interactive XAI Interface with Application in Healthcare for Non-experts. Jingyu Hu, Yizhu Liang, Weiyu Zhao, Kevin McAreavey, Weiru Liu. 649-670 [doi]
- Selecting Textural Characteristics of Chest X-Rays for Pneumonia Lesions Classification with the Integrated Gradients XAI Attribution Method. Oleksandr Davydko, Vladimir Pavlov, Luca Longo. 671-687 [doi]