- Probabilistic Dataset Reconstruction from Interpretable Models. Julien Ferry, Ulrich Aïvodji, Sébastien Gambs, Marie-José Huguet, Mohamed Siala. 1-17 [doi]
- Shake to Leak: Fine-tuning Diffusion Models Can Amplify the Generative Privacy Risk. Zhangheng Li, Junyuan Hong, Bo Li, Zhangyang Wang. 18-32 [doi]
- Improved Differentially Private Regression via Gradient Boosting. Shuai Tang, Sergül Aydöre, Michael Kearns, Saeyoung Rho, Aaron Roth, Yichen Wang, Yu-Xiang Wang, Zhiwei Steven Wu. 33-56 [doi]
- SoK: A Review of Differentially Private Linear Models For High-Dimensional Data. Amol Khanna, Edward Raff, Nathan Inkawhich. 57-77 [doi]
- Concentrated Differential Privacy for Bandits. Achraf Azize, Debabrota Basu. 78-109 [doi]
- PILLAR: How to make semi-private learning more effective. Francesco Pinto, Yaxi Hu, Fanny Yang, Amartya Sanyal. 110-139 [doi]
- Fair Federated Learning via Bounded Group Loss. Shengyuan Hu, Zhiwei Steven Wu, Virginia Smith. 140-160 [doi]
- Estimating and Implementing Conventional Fairness Metrics With Probabilistic Protected Features. Hadi Elzayn, Emily Black, Patrick Vossler, Nathanael Jo, Jacob Goldin, Daniel E. Ho. 161-193 [doi]
- Evaluating Superhuman Models with Consistency Checks. Lukas Fluri, Daniel Paleka, Florian Tramèr. 194-232 [doi]
- Certifiably Robust Reinforcement Learning through Model-Based Abstract Interpretation. Chenxi Yang, Greg Anderson, Swarat Chaudhuri. 233-251 [doi]
- Fast Certification of Vision-Language Models Using Incremental Randomized Smoothing. Ashutosh Nirala, Ameya Joshi, Soumik Sarkar, Chinmay Hegde. 252-271 [doi]
- Backdoor Attack on Unpaired Medical Image-Text Foundation Models: A Pilot Study on MedCLIP. Ruinan Jin, Chun-Yin Huang, Chenyu You, Xiaoxiao Li. 272-285 [doi]
- REStore: Exploring a Black-Box Defense against DNN Backdoors using Rare Event Simulation. Quentin Le Roux, Kassem Kallas, Teddy Furon. 286-308 [doi]
- EdgePruner: Poisoned Edge Pruning in Graph Contrastive Learning. Hiroya Kato, Kento Hasegawa, Seira Hidano, Kazuhide Fukushima. 309-326 [doi]
- Indiscriminate Data Poisoning Attacks on Pre-trained Feature Extractors. Yiwei Lu, Matthew Y. R. Yang, Gautam Kamath, Yaoliang Yu. 327-343 [doi]
- ImpNet: Imperceptible and blackbox-undetectable backdoors in compiled neural networks. Eleanor Clifford, Ilia Shumailov, Yiren Zhao, Ross J. Anderson, Robert D. Mullins. 344-357 [doi]
- The Devil's Advocate: Shattering the Illusion of Unexploitable Data using Diffusion Models. Hadi M. Dolatabadi, Sarah M. Erfani, Christopher Leckie. 358-386 [doi]
- SoK: Pitfalls in Evaluating Black-Box Attacks. Fnu Suya, Anshuman Suri, Tingwei Zhang, Jingtao Hong, Yuan Tian, David Evans. 387-407 [doi]
- Evading Black-box Classifiers Without Breaking Eggs. Edoardo Debenedetti, Nicholas Carlini, Florian Tramèr. 408-424 [doi]
- Segment (Almost) Nothing: Prompt-Agnostic Adversarial Attacks on Segmentation Models. Francesco Croce, Matthias Hein. 425-442 [doi]
- Improving Privacy-Preserving Vertical Federated Learning by Efficient Communication with ADMM. Chulin Xie, Pin-Yu Chen, Qinbin Li, Arash Nourian, Ce Zhang, Bo Li. 443-471 [doi]
- Differentially Private Multi-Site Treatment Effect Estimation. Tatsuki Koga, Kamalika Chaudhuri, David Page. 472-489 [doi]
- ScionFL: Efficient and Robust Secure Quantized Aggregation. Yaniv Ben-Itzhak, Helen Möllering, Benny Pinkas, Thomas Schneider, Ajith Suresh, Oleksandr Tkachenko, Shay Vargaftik, Christian Weinert, Hossein Yalame, Avishay Yanai. 490-511 [doi]
- Differentially Private Heavy Hitter Detection using Federated Analytics. Karan N. Chadha, Junye Chen, John C. Duchi, Vitaly Feldman, Hanieh Hashemi, Omid Javidbakht, Audra McMillan, Kunal Talwar. 512-533 [doi]
- Olympia: A Simulation Framework for Evaluating the Concrete Scalability of Secure Aggregation Protocols. Ivoline C. Ngong, Nicholas Gibson, Joseph P. Near. 534-551 [doi]
- Model Reprogramming Outperforms Fine-tuning on Out-of-distribution Data in Text-Image Encoders. Andrew Geng, Pin-Yu Chen. 552-568 [doi]
- Data Redaction from Conditional Generative Models. Zhifeng Kong, Kamalika Chaudhuri. 569-591 [doi]
- Towards Scalable and Robust Model Versioning. Wenxin Ding, Arjun Nitin Bhagoji, Ben Y. Zhao, Haitao Zheng. 592-611 [doi]
- AI auditing: The Broken Bus on the Road to AI Accountability. Abeba Birhane, Ryan Steed, Victor Ojewale, Briana Vecchione, Inioluwa Deborah Raji. 612-643 [doi]
- Under manipulations, are some AI models harder to audit? Augustin Godinot, Erwan Le Merrer, Gilles Trédan, Camilla Penzo, François Taïani. 644-664 [doi]
- Unifying Corroborative and Contributive Attributions in Large Language Models. Theodora Worledge, Judy Hanwen Shen, Nicole Meister, Caleb Winston, Carlos Guestrin. 665-683 [doi]
- CodeLMSec Benchmark: Systematically Evaluating and Finding Security Vulnerabilities in Black-Box Code Language Models. Hossein Hajipour, Keno Hassler, Thorsten Holz, Lea Schönherr, Mario Fritz. 684-709 [doi]
- Navigating the Structured What-If Spaces: Counterfactual Generation via Structured Diffusion. Nishtha Madaan, Srikanta Bedathur. 710-722 [doi]
- Understanding, Uncovering, and Mitigating the Causes of Inference Slowdown for Language Models. Kamala Varma, Arda Numanoglu, Yigitcan Kaya, Tudor Dumitras. 723-740 [doi]