- Preface [doi]
- Explainable Artificial Intelligence Beyond Feature Attributions: The Validity and Reliability of Feature Selection Explanations. Raphael Wallsberger, Ricardo Knauer, Stephan Matzka. 1-8 [doi]
- Shapley values and fairness. Paolo Giudici, Parvati Neelakantan. 9-16 [doi]
- Perception and Consideration of the Explainees' Needs for Satisfying Explanations. Michael Erol Schaffer, Lutz Terfloth, Carsten Schulte, Heike M. Buhl. 17-24 [doi]
- A proposal for improving EEG microstate generation via interpretable deep clustering with convolutional autoencoders. Arjun Vinayak Chikkankod, Luca Longo. 25-32 [doi]
- Faithful Attention Explainer: Verbalizing Decisions Based on Discriminative Features. Yao Rong, David Scheerer, Enkelejda Kasneci. 33-40 [doi]
- Use Bag-of-Patterns Approach to Explore Learned Behaviors of Reinforcement Learning. Gulsum Alicioglu, Bo Sun. 41-48 [doi]
- Generate Explanations for Time-series classification by ChatGPT. Zhechang Xue, Yiran Huang, Hongnan Ma, Michael Beigl. 49-56 [doi]
- Model agnostic calibration of image classifiers. Paolo Giudici, Giulia Vilone. 57-64 [doi]
- Interpreting Black-Box Time Series Classifiers using Parameterised Event Primitives. Ephrem Tibebe Mekonnen, Luca Longo, Pierpaolo Dondio. 65-72 [doi]
- Patch-based Intuitive Multimodal Prototypes Network (PIMPNet) for Alzheimer's Disease classification. Lisa Anita De Santi, Jörg Schlötterer, Meike Nauta, Vincenzo Positano, Christin Seifert. 73-80 [doi]
- AnyCBMs: How to Turn Any Black Box into a Concept Bottleneck Model. Gabriele Dominici, Pietro Barbiero, Francesco Giannini, Martin Gjoreski, Marc Langheinrich. 81-88 [doi]
- Online Explainable Ensemble of Tree Models Pruning for Time Series Forecasting. Amal Saadallah. 89-96 [doi]
- Towards Mechanistic Interpretability for Autoencoder compression of EEG signals. Leon Hegedic, Luka Hobor, Nikola Maric, Martin Ante Rogosic, Mario Brcic. 97-104 [doi]
- Integrating XAI for Predictive Conflict Analytics. Luca Macis, Marco Tagliapietra, Alessandro Castelnovo, Daniele Regoli, Greta Greco, Andrea Claudio Cosentini, Paola Pisano, Edoardo Carroccetto. 105-112 [doi]
- Interpretable Vital Sign Forecasting with Model Agnostic Attention Maps. Yuwei Liu, Chen Dan, Anubhav Bhatti, Bingjie Shen, Divij Gupta, Suraj Parmar, San Lee. 113-120 [doi]
- Towards Assurance of LLM Adversarial Robustness using Ontology-Driven Argumentation. Tomas Bueno Momcilovic, Beat Buesser, Giulio Zizzo, Mark Purcell, Dian Balta. 121-128 [doi]
- Looking for the Right Paths to Use XAI in the Judiciary. Which Branches of Law Need Inherently Interpretable Machine Learning Models and Why? Andrzej Porebski. 129-136 [doi]
- The Dynamics of Explainability: Diverse Insights from SHAP Explanations using Neighbourhoods. Urja Pawar, Ruairi O'Reilly, Christian Beder, Donna O'Shea. 137-144 [doi]
- Enhancing the analysis of the P300 event-related potential with integrated gradients on a convolutional neural network trained with superlets. Vladimir Marochko, Luca Longo. 145-152 [doi]
- Exploring Commonalities in Explanation Frameworks: A Multi-Domain Survey Analysis. Eduard Barbu, Marharyta Domnich, Raul Vicente, Nikos Sakkas. 153-160 [doi]
- Investigating Neuron Ablation in Attention Heads: The Case for Peak Activation Centering. Nicholas Pochinkov, Ben Pasero, Skylar Shibayama. 161-168 [doi]
- CatBoost model with self-explanatory capabilities for predicting SLE in OMAN population. Hamza Zidoum, Ali AlShareedah, Aliya Al-Ansari, Batool Al-Lawati, Sumaya Al-Sawafi. 169-176 [doi]
- Channel Modeling for Millimeter-Wave UAV Communication based on Explainable Generative Neural Network. Ladan Gholami, Pietro Ducange, Pietro Cassarà, Alberto Gotta. 177-184 [doi]
- Validation of ML Models from the Field of XAI for Computer Vision in Autonomous Driving. Antonio Mastroianni, Sibylle D. Sager-Müller. 185-192 [doi]
- Second Glance: A Novel Explainable AI to Understand Feature Interactions in Neural Networks using Higher-Order Partial Derivatives. Zohaib Shahid, Yogachandran Rahulamathavan, Safak Dogan. 193-200 [doi]
- Mediating Explainer for Human Autonomy Teaming. Siri Padmanabhan Poti, Christopher J. Stanton. 201-208 [doi]
- Exploring Agent Behaviors in Network Security through Trajectory Clustering. Ondrej Lukás, Sebastian Garcia. 209-216 [doi]
- A geometric XAI approach to protein pocket detection. Giovanni Bocchi, Patrizio Frosini, Alessandra Micheletti, Alessandro Pedretti, Gianluca Palermo, Davide Gadioli, Carmen Gratteri, Filippo Lunghini, Andrea Rosario Beccari, Anna Fava, Carmine Talarico. 217-224 [doi]
- An Empirical Investigation of Users' Assessment of XAI Explanations: Identifying the Sweet Spot of Explanation Complexity and Value. Felix Liedeker, Christoph Düsing, Marcel Nieveler, Philipp Cimiano. 225-232 [doi]
- A Two-Stage Algorithm for Cost-Efficient Multi-instance Counterfactual Explanations. André Artelt, Andreas Gregoriades. 233-240 [doi]
- Interactive xAI-dashboard for Semantic Segmentation. Finn Schürmann, Sibylle D. Sager-Müller. 241-248 [doi]
- XAI for Group-AI Interaction: Towards Collaborative and Inclusive Explanations. Mohammad Naiseh, Catherine Webb, Timothy J. Underwood, Gopal Ramchurn, Zoë Walters, Navamayooran Thavanesan, Ganesh Vigneswaran. 249-256 [doi]
- Unraveling Anomalies: Explaining Outliers with DTOR. Riccardo Crupi, Daniele Regoli, Alessandro Damiano Sabatino, Immacolata Marano, Massimiliano Brinis, Luca Albertazzi, Andrea Cirillo, Andrea Claudio Cosentini. 257-264 [doi]
- CaBRNet, An Open-Source Library For Developing And Evaluating Case-Based Reasoning Models. Romain Xu-Darme, Aymeric Varasse, Alban Grastien, Julien Girard-Satabin, Zakaria Chihani. 265-272 [doi]
- XAgent: A Conversational XAI Agent Harnessing the Power of Large Language Models. Van Bach Nguyen, Jörg Schlötterer, Christin Seifert. 273-280 [doi]
- mlr3summary: Concise and interpretable summaries for machine learning models. Susanne Dandl, Marc Becker, Bernd Bischl, Giuseppe Casalicchio, Ludwig Bothmann. 281-288 [doi]
- Democratizing Advanced Attribution Analyses of Generative Language Models with the Inseq Toolkit. Gabriele Sarti, Nils Feldhus, Jirui Qi, Malvina Nissim, Arianna Bisazza. 289-296 [doi]
- Human-in-the-loop testing of the explainability of robot navigation algorithms in extended reality. Jérôme Guzzi, Alessandro Giusti. 297-304 [doi]
- Rulex Platform: leveraging domain knowledge and data-driven rules to support decisions in the fintech sector through eXplainable AI models. Claudio Muselli, Damiano Verda, Enrico Ferrari, Claire Thomas Gaggiotti, Marco Muselli. 305-312 [doi]
- Building Personalised XAI Experiences Through iSee: a Case-Based Reasoning-Driven Platform. Marta Caro-Martínez, Anne Liret, Belén Díaz-Agudo, Juan A. Recio-García, Jesus M. Darias, Nirmalie Wiratunga, Anjana Wijekoon, Kyle Martin, Ikechukwu Nkisi-Orji, David Corsar, Chamath Palihawadana, Craig Pirie, Derek G. Bridge, Preeja Pradeep, Bruno Fleisch. 313-320 [doi]
- Fostering Human-AI interaction: development of a Clinical Decision Support System enhanced by eXplainable AI and Natural Language Processing. Laura Bergomi. 321-328 [doi]
- Optimizing Synthetic Data from Scarcity: Towards Meaningful Data Generation in High-Dimensional Low-Sample Size Domains. Danilo Danese. 329-336 [doi]
- Assessing the Interpretability of the Statistical Radiomic Features via Image Saliency Maps in Medical Image Classification Tasks. Oleksandr Davydko. 337-344 [doi]
- Explainable AI as a Crucial Factor for Improving Human-AI Decision-Making Processes. Regina De Brito Duarte. 345-352 [doi]
- Counterfactual generating Variational Autoencoder for Anomaly Detection. Renate Ernst. 353-360 [doi]
- Privacy Implications of Explainable AI in Data-Driven Systems. Fatima Ezzeddine. 361-368 [doi]
- XAI-driven Model Improvements in Interpretable Image Segmentation. Rokas Gipiskis. 369-376 [doi]
- Design Guidelines for XAI in the Healthcare Domain. Iris Heerlien. 377-384 [doi]
- Explainable MLOps: A Methodological Framework for the Development of Explainable AI in Practice. Annemarie Jutte. 385-392 [doi]
- A Novel Model-Agnostic xAI Method Guided by Cost-Sensitive Tree Models and Argumentative Decision Graphs. Marija Kopanja. 393-400 [doi]
- Explainable Artificial Intelligence and Reasoning in the Context of Large Neural Network Models. Stefanie Krause. 401-408 [doi]
- Artificial Representative Trees as Interpretable Surrogates for Random Forests. Lea Louisa Kronziel. 409-416 [doi]
- Can Reduction of Bias Decrease the Need for Explainability? Working with Simplified Models to Understand Complexity. Pedro M. Marques. 417-424 [doi]
- Towards XAI for Optimal Transport. Philip Naumann. 425-432 [doi]
- Knowledge Graphs and Explanations for Improving Detection of Diseases in Images of Grains. Lenka Tetková. 433-440 [doi]
- Topological Data Analysis for Trustworthy AI. Victor Toscano-Durán. 441-448 [doi]
- Explainable Deep Reinforcement Learning through Introspective Explanations. Nils Wenninghoff. 449-456 [doi]
- Explainable and Debiased Misogyny Identification In Code-Mixed Hinglish using Artificial Intelligence Models. Sargam Yadav. 457-464 [doi]