- Explainable Global Fairness Verification of Tree-Based Classifiers. Stefano Calzavara, Lorenzo Cazzaro, Claudio Lucchese, Federico Marcuzzi. 1-17 [doi]
- Exploiting Fairness to Enhance Sensitive Attributes Reconstruction. Julien Ferry, Ulrich Aïvodji, Sébastien Gambs, Marie-José Huguet, Mohamed Siala 0002. 18-41 [doi]
- Wealth Dynamics Over Generations: Analysis and Interventions. Krishna Acharya, Eshwar Ram Arunachaleswaran, Sampath Kannan, Aaron Roth 0001, Juba Ziani. 42-57 [doi]
- Learning Fair Representations through Uniformly Distributed Sensitive Attributes. Patrik Joslin Kenfack, Adín Ramírez Rivera, Adil Mehmood Khan, Manuel Mazzara. 58-67 [doi]
- Can Stochastic Gradient Langevin Dynamics Provide Differential Privacy for Deep Learning? Guy Heller, Ethan Fetaya. 68-106 [doi]
- Kernel Normalized Convolutional Networks for Privacy-Preserving Machine Learning. Reza Nasirigerdeh, Javad Torkzadehmahani, Daniel Rueckert, Georgios Kaissis. 107-118 [doi]
- Model Inversion Attack with Least Information and an In-depth Analysis of its Disparate Vulnerability. Sayanton V. Dibbo, Dae Lim Chung, Shagufta Mehnaz. 119-135 [doi]
- Distribution Inference Risks: Identifying and Mitigating Sources of Leakage. Valentin Hartmann, Léo Meynent, Maxime Peyrard, Dimitrios Dimitriadis, Shruti Tople, Robert West 0001. 136-149 [doi]
- Dissecting Distribution Inference. Anshuman Suri, Yifu Lu, Yanjin Chen, David Evans 0001. 150-164 [doi]
- ExPLoit: Extracting Private Labels in Split Learning. Sanjay Kariyappa, Moinuddin K. Qureshi. 165-175 [doi]
- SafeNet: The Unreasonable Effectiveness of Ensembles in Private Collaborative Learning. Harsh Chaudhari, Matthew Jagielski, Alina Oprea. 176-196 [doi]
- Reprogrammable-FL: Improving Utility-Privacy Tradeoff in Federated Learning via Model Reprogramming. Huzaifa Arif, Alex Gittens, Pin-Yu Chen. 197-209 [doi]
- Optimal Data Acquisition with Privacy-Aware Agents. Rachel Cummings, Hadi Elzayn, Emmanouil Pountourakis, Vasilis Gkatzelis, Juba Ziani. 210-224 [doi]
- A Light Recipe to Train Robust Vision Transformers. Edoardo Debenedetti, Vikash Sehwag, Prateek Mittal. 225-253 [doi]
- Less is More: Dimension Reduction Finds On-Manifold Adversarial Examples in Hard-Label Attacks. Washington Garcia, Pin-Yu Chen, Hamilton Scott Clouse, Somesh Jha, Kevin R. B. Butler. 254-270 [doi]
- Publishing Efficient On-device Models Increases Adversarial Vulnerability. Sanghyun Hong 0001, Nicholas Carlini, Alexey Kurakin. 271-290 [doi]
- EDoG: Adversarial Edge Detection For Graph Neural Networks. Xiaojun Xu, Hanzhang Wang, Alok Lal, Carl A. Gunter, Bo Li 0026. 291-305 [doi]
- Counterfactual Sentence Generation with Plug-and-Play Perturbation. Nishtha Madaan, Diptikalyan Saha, Srikanta Bedathur. 306-315 [doi]
- Rethinking the Entropy of Instance in Adversarial Training. Minseon Kim, Jihoon Tack, Jinwoo Shin, Sung Ju Hwang. 316-326 [doi]
- Towards Transferable Unrestricted Adversarial Examples with Minimum Changes. Fangcheng Liu, Chao Zhang 0001, Hongyang Zhang 0001. 327-338 [doi]
- "Real Attackers Don't Compute Gradients": Bridging the Gap Between Adversarial ML Research and Practice. Giovanni Apruzzese, Hyrum S. Anderson, Savino Dambra, David Freeman, Fabio Pierazzi, Kevin A. Roundy. 339-364 [doi]
- What Are Effective Labels for Augmented Data? Improving Calibration and Robustness with AutoLabel. Yao Qin 0001, Xuezhi Wang 0002, Balaji Lakshminarayanan, Ed H. Chi, Alex Beutel. 365-376 [doi]
- Sniper Backdoor: Single Client Targeted Backdoor Attack in Federated Learning. Gorka Abad, Servio Paguada, Oguzhan Ersoy, Stjepan Picek, Víctor Julio Ramírez-Durán, Aitor Urbieta. 377-391 [doi]
- Backdoor Attacks on Time Series: A Generative Approach. Yujing Jiang, Xingjun Ma, Sarah Monazam Erfani, James Bailey 0001. 392-403 [doi]
- Venomave: Targeted Poisoning Against Speech Recognition. Hojjat Aghakhani, Lea Schönherr, Thorsten Eisenhofer, Dorothea Kolossa, Thorsten Holz, Christopher Kruegel, Giovanni Vigna. 404-417 [doi]
- Endogenous Macrodynamics in Algorithmic Recourse. Patrick Altmeyer, Giovan Angela, Aleksander Buszydlik, Karol Dobiczek, Arie van Deursen, Cynthia C. S. Liem. 418-431 [doi]
- ModelPred: A Framework for Predicting Trained Model from Training Data. Yingyan Zeng, Jiachen T. Wang, Si Chen, Hoang Anh Just, Ran Jin, Ruoxi Jia 0001. 432-449 [doi]
- Harnessing Prior Knowledge for Explainable Machine Learning: An Overview. Katharina Beckh, Sebastian Müller, Matthias Jakobs, Vanessa Toborek, Hanxiao Tan, Raphael Fischer 0001, Pascal Welke, Sebastian Houben, Laura von Rüden. 450-463 [doi]
- Toward Transparent AI: A Survey on Interpreting the Inner Structures of Deep Neural Networks. Tilman Räuker, Anson Ho, Stephen Casper, Dylan Hadfield-Menell. 464-483 [doi]
- Reducing Certified Regression to Certified Classification for General Poisoning Attacks. Zayd Hammoudeh, Daniel Lowd. 484-523 [doi]
- Neural Lower Bounds for Verification. Florian Jaeckle, M. Pawan Kumar. 524-536 [doi]
- Toward Certified Robustness Against Real-World Distribution Shifts. Haoze Wu 0001, Teruhiro Tagomori, Alexander Robey, Fengjun Yang, Nikolai Matni, George J. Pappas, Hamed Hassani, Corina S. Pasareanu, Clark W. Barrett. 537-553 [doi]
- CARE: Certifiably Robust Learning with Reasoning via Variational Inference. Jiawei Zhang, Linyi Li, Ce Zhang 0001, Bo Li 0026. 554-574 [doi]
- FaShapley: Fast and Approximated Shapley Based Model Pruning Towards Certifiably Robust DNNs. Mintong Kang, Linyi Li, Bo Li 0026. 575-592 [doi]
- PolyKervNets: Activation-free Neural Networks For Efficient Private Inference. Toluwani Aremu, Karthik Nandakumar. 593-604 [doi]
- Theoretical Limits of Provable Security Against Model Extraction by Efficient Observational Defenses. Ari Karchmer. 605-621 [doi]
- No Matter How You Slice It: Machine Unlearning with SISA Comes at the Expense of Minority Classes. Korbinian Koch, Marcus Soll. 622-637 [doi]
- Data Redaction from Pre-trained GANs. Zhifeng Kong, Kamalika Chaudhuri. 638-677 [doi]
- Tensions Between the Proxies of Human Values in AI. Teresa Datta, Daniel Nissani, Max Cembalest, Akash Khanna, Haley Massa, John Dickerson 0001. 678-689 [doi]
- A Validity Perspective on Evaluating the Justified Use of Data-driven Decision-making Algorithms. Amanda Coston, Anna Kawakami, Haiyi Zhu, Ken Holstein, Hoda Heidari. 690-704 [doi]