- Opening the Black Box: Analyzing Attention Weights and Hidden States in Pre-trained Language Models for Non-language Tasks. Mohamad Ballout, Ulf Krumnack, Gunther Heidemann, Kai-Uwe Kühnberger. 3-25
- Evaluating Self-attention Interpretability Through Human-Grounded Experimental Protocol. Milan Bhan, Nina Achache, Victor Legrand, Annabelle Blangero, Nicolas Chesneau. 26-46
- Understanding Interpretability: Explainable AI Approaches for Hate Speech Classifiers. Sargam Yadav, Abhishek Kaushik, Kevin McDaid. 47-70
- From Black Boxes to Conversations: Incorporating XAI in a Conversational Agent. Van Bach Nguyen, Jörg Schlötterer, Christin Seifert. 71-96
- Toward Inclusive Online Environments: Counterfactual-Inspired XAI for Detecting and Interpreting Hateful and Offensive Tweets. Muhammad Deedahwar Mazhar Qureshi, M. Atif Qureshi, Wael Rashwan. 97-119
- Causal-Based Spatio-Temporal Graph Neural Networks for Industrial Internet of Things Multivariate Time Series Forecasting. Amir Miraki, Austeja Dapkute, Vytautas Siozinys, Martynas Jonaitis, Reza Arghandeh. 120-130
- Investigating the Effect of Pre-processing Methods on Model Decision-Making in EEG-Based Person Identification. Carlos Gómez-Tapia, Bojan Bozic, Luca Longo. 131-152
- State Graph Based Explanation Approach for Black-Box Time Series Model. Yiran Huang, Chaofan Li, Hansen Lu, Till Riedel, Michael Beigl. 153-164
- A Deep Dive into Perturbations as Evaluation Technique for Time Series XAI. Udo Schlegel, Daniel A. Keim. 165-180
- Towards a Comprehensive Human-Centred Evaluation Framework for Explainable AI. Ivania Donoso-Guzmán, Jeroen Ooge, Denis Parra, Katrien Verbert. 183-204
- Development of a Human-Centred Psychometric Test for the Evaluation of Explanations Produced by XAI Methods. Giulia Vilone, Luca Longo. 205-232
- Concept Distillation in Graph Neural Networks. Lucie Charlotte Magister, Pietro Barbiero, Dmitry Kazhdan, Federico Siciliano, Gabriele Ciravegna, Fabrizio Silvestri, Mateja Jamnik, Pietro Liò. 233-255
- Adding Why to What? Analyses of an Everyday Explanation. Lutz Terfloth, Michael Schaffer, Heike M. Buhl, Carsten Schulte. 256-279
- For Better or Worse: The Impact of Counterfactual Explanations' Directionality on User Behavior in xAI. Ulrike Kuhl, André Artelt, Barbara Hammer. 280-300
- The Importance of Distrust in AI. Tobias M. Peters, Roel W. Visser. 301-317
- Weighted Mutual Information for Out-Of-Distribution Detection. Giacomo De Bernardi, Sara Narteni, Enrico Cambiaso, Marco Muselli, Maurizio Mongelli. 318-331
- Leveraging Group Contrastive Explanations for Handling Fairness. Alessandro Castelnovo, Nicole Inverardi, Lorenzo Malandri, Fabio Mercorio, Mario Mezzanzanica, Andrea Seveso. 332-345
- LUCID-GAN: Conditional Generative Models to Locate Unfairness. Andres Algaba, Carmen Mazijn, Carina Prunkl, Jan Danckaert, Vincent Ginis. 346-367
- Explainable Machine Learning via Argumentation. Nicoletta Prentzas, Constantinos S. Pattichis, Antonis C. Kakas. 371-398
- A Novel Structured Argumentation Framework for Improved Explainability of Classification Tasks. Lucas Rizzo. 399-414
- Hardness of Deceptive Certificate Selection. Stephan Wäldchen. 415-427
- Integrating GPT-Technologies with Decision Models for Explainability. Alexandre Goossens, Jan Vanthienen. 428-448
- Outcome-Guided Counterfactuals from a Jointly Trained Generative Latent Space. Eric Yeh, Pedro Sequeira, Jesse Hostetler, Melinda T. Gervasio. 449-469
- An Exploration of the Latent Space of a Convolutional Variational Autoencoder for the Generation of Musical Instrument Tones. Anastasia Natsiou, Seán O'Leary, Luca Longo. 470-486
- Improving Local Fidelity of LIME by CVAE. Daisuke Yasui, Hiroshi Sato, Masao Kubo. 487-511
- Scalable Concept Extraction in Industry 4.0. Andres Felipe Posada-Moreno, Kai Müller, Florian Brillowski, Friedrich Solowjow, Thomas Gries, Sebastian Trimpe. 512-535