- Non-Halting Queries: Exploiting Fixed Points in LLMs. Ghaith Hammouri, Kemal Derya, Berk Sunar. 1-22
- Jailbreaking Black Box Large Language Models in Twenty Queries. Patrick Chao, Alexander Robey, Edgar Dobriban, Hamed Hassani, George J. Pappas, Eric Wong. 23-42
- Get My Drift? Catching LLM Task Drift with Activation Deltas. Sahar Abdelnabi, Aideen Fay, Giovanni Cherubin, Ahmed Salem, Mario Fritz, Andrew Paverd. 43-67
- MarkMyWords: Analyzing and Evaluating Language Model Watermarks. Julien Piet, Chawin Sitawarin, Vivian Fang, Norman Mu, David A. Wagner. 68-91
- SnatchML: Hijacking ML Models Without Training Access. Mahmoud Ghorbel, Halima Bouzidi, Ioan Marius Bilasco, Ihsen Alouani. 92-109
- TS-Inverse: A Gradient Inversion Attack Tailored for Federated Time Series Forecasting Models. Caspar Meijer, Jiyue Huang, Shreshtha Sharma, Elena Lazovik, Lydia Y. Chen. 110-124
- PEEL the Layers and Find Yourself: Revisiting Inference-Time Data Leakage for Residual Neural Networks. Huzaifa Arif, Keerthiram Murugesan, Payel Das, Alex Gittens, Pin-Yu Chen. 125-149
- Attackers Can Do Better: Over- and Understated Factors of Model Stealing Attacks. Daryna Oliynyk, Rudolf Mayer, Andreas Rauber. 150-168
- Backdoor Detection Through Replicated Execution of Outsourced Training. Hengrui Jia, Sierra Calanda Wyllie, Akram Bin Sediq, Ahmed Ibrahim, Nicolas Papernot. 169-188
- Robust Knowledge Distillation in Federated Learning: Counteracting Backdoor Attacks. Ebtisaam Alharbi, Leandro Soriano Marcolino, Qiang Ni, Antonios Gouglidis. 189-202
- Krait: A Backdoor Attack Against Graph Prompt Tuning. Ying Song, Rita Singh, Balaji Palanisamy. 203-221
- The Ultimate Cookbook for Invisible Poison: Crafting Subtle Clean-Label Text Backdoors with Style Attributes. Wencong You, Daniel Lowd. 222-246
- SoK: On the Offensive Potential of AI. Saskia Laura Schröer, Giovanni Apruzzese, Soheil Human, Pavel Laskov, Hyrum S. Anderson, Edward W. N. Bernroider, Aurore Fass, Ben Nassi, Vera Rimmer, Fabio Roli, Samer Salam, Chi En Ashley Shen, Ali Sunyaev, Tim Wadhwa-Brown, Isabel Wagner, Gang Wang. 247-280
- Position: Contextual Confidence and Generative AI. Shrey Jain, Zoë Hitzig, Pamela Mishkin. 281-301
- Locking Machine Learning Models into Hardware. Eleanor Clifford, Adhithya Saravanan, Harry Langford, Cheng Zhang, Yiren Zhao, Robert D. Mullins, Ilia Shumailov, Jamie Hayes. 302-320
- Episodic Memory in AI Agents Poses Risks that Should be Studied and Mitigated. Chad DeChant. 321-332
- Position: Membership Inference Attacks Cannot Prove That a Model was Trained on Your Data. Jie Zhang, Debeshee Das, Gautam Kamath, Florian Tramèr. 333-345
- Range Membership Inference Attacks. Jiashu Tao, Reza Shokri. 346-361
- Hyperparameters in Score-Based Membership Inference Attacks. Gauri Pradhan, Joonas Jälkö, Marlon Tobaben, Antti Honkela. 362-384
- SoK: Membership Inference Attacks on LLMs are Rushing Nowhere (and How to Fix It). Matthieu Meeus, Igor Shilov, Shubham Jain, Manuel Faysse, Marek Rei, Yves-Alexandre de Montjoye. 385-401
- HALO: Robust Out-of-Distribution Detection via Joint Optimisation. Hugo Lyons Keenan, Sarah M. Erfani, Christopher Leckie. 402-426
- Targeted Manifold Manipulation Against Adversarial Attacks. Banibrata Ghosh, Haripriya Harikumar, Svetha Venkatesh, Santu Rana. 427-438
- SEA: Shareable and Explainable Attribution for Query-Based Black-Box Attacks. Yue Gao, Ilia Shumailov, Kassem Fawaz. 439-458
- SpaNN: Detecting Multiple Adversarial Patches on CNNs by Spanning Saliency Thresholds. Mauricio Byrd Victorica, György Dán, Henrik Sandberg. 459-478
- Verifiable and Provably Secure Machine Unlearning. Thorsten Eisenhofer, Doreen Riepel, Varun Chandrasekaran, Esha Ghosh, Olga Ohrimenko, Nicolas Papernot. 479-496
- Inexact Unlearning Needs More Careful Evaluations to Avoid a False Sense of Privacy. Jamie Hayes, Ilia Shumailov, Eleni Triantafillou, Amr Khalifa, Nicolas Papernot. 497-519
- Position: LLM Unlearning Benchmarks are Weak Measures of Progress. Pratiksha Thaker, Shengyuan Hu, Neil Kale, Yash Maurya, Zhiwei Steven Wu, Virginia Smith. 520-533
- On the Reliability of Membership Inference Attacks. Amrita Roy Chowdhury, Zhifeng Kong, Kamalika Chaudhuri. 534-549
- Equilibria of Data Marketplaces with Privacy-Aware Sellers under Endogenous Privacy Costs. Diptangshu Sen, Jingyan Wang, Juba Ziani. 550-574
- Streaming Private Continual Counting via Binning. Joel Daniel Andersson, Rasmus Pagh. 575-589
- Correlated Privacy Mechanisms for Differentially Private Distributed Mean Estimation. Sajani Vithana, Viveck R. Cadambe, Flávio P. Calmon, Haewon Jeong. 590-614
- Private Selection with Heterogeneous Sensitivities. Daniela Antonova, Allegra Laro, Audra McMillan, Lorenz Wolf. 615-635
- Adversarially Robust CLIP Models Can Induce Better (Robust) Perceptual Metrics. Francesco Croce, Christian Schlarmann, Naman Deep Singh, Matthias Hein. 636-660
- Err on the Side of Texture: Texture Bias on Real Data. Blaine Hoak, Ryan Sheatsley, Patrick D. McDaniel. 661-680
- ColorSense: A Study on Color Vision in Machine Visual Recognition. Ming-Chang Chiu, Yingfei Wang, Derrick Eui Gyu Kim, Pin-Yu Chen, Xuezhe Ma. 681-697
- SoK: Fair Clustering: Critique, Caveats, and Future Directions. John Dickerson, Seyed A. Esmaeili, Jamie Morgenstern, Claire Jie Zhang. 698-713
- Fair Decentralized Learning. Sayan Biswas, Anne-Marie Kermarrec, Rishi Sharma, Thibaud Trinca, Martijn de Vos. 714-734
- When Mitigating Bias is Unfair: Multiplicity and Arbitrariness in Algorithmic Group Fairness. Natasa Krco, Thibault Laugel, Vincent Grari, Jean-Michel Loubes, Marcin Detyniecki. 735-752
- Minimax Group Fairness in Strategic Classification. Emily Diana, Saeed Sharifi-Malvajerdi, Ali Vakilian. 753-772
- DART: A Principled Approach to Adversarially Robust Unsupervised Domain Adaptation. Yunjuan Wang, Hussein Hazimeh, Natalia Ponomareva, Alexey Kurakin, Ibrahim Hammoud, Raman Arora. 773-796
- Reliable Evaluation of Adversarial Transferability. Wenqian Yu, Jindong Gu, Zhijiang Li, Philip Torr. 797-810
- Hi-ALPS - An Experimental Robustness Quantification of Six LiDAR-based Object Detection Systems for Autonomous Driving. Alexandra Arzberger, Ramin Tavakoli Kolagari. 811-823
- Timber! Poisoning Decision Trees. Stefano Calzavara, Lorenzo Cazzaro, Massimo Vettori. 824-840
- SoK: What Makes Private Learning Unfair? Kai Yao, Marc Juarez. 841-857
- Differentially Private Active Learning: Balancing Effective Data Selection and Privacy. Kristian Schwethelm, Johannes Kaiser, Jonas Kuntzer, Mehmet Yigitsoy, Daniel Rückert, Georgios Kaissis. 858-878
- Choosing Public Datasets for Private Machine Learning via Gradient Subspace Distance. Xin Gu, Gautam Kamath, Zhiwei Steven Wu. 879-900
- Learning with User-Level Differential Privacy Under Fixed Compute Budgets. Zachary Charles, Arun Ganesh, Ryan McKenna, H. Brendan McMahan, Nicole Mitchell, Krishna Pillutla, Keith Rush. 901-920
- ML-Based Behavioral Malware Detection Is Far From a Solved Problem. Yigitcan Kaya, Yizheng Chen, Marcus Botacin, Shoumik Saha, Fabio Pierazzi, Lorenzo Cavallaro, David A. Wagner, Tudor Dumitras. 921-940
- Provably Secure Covert Messaging Using Image-Based Diffusion Processes. Luke A. Bauer, Wenxuan Bao, Vincent Bindschaedler. 941-955
- FairDP: Achieving Fairness Certification with Differential Privacy. Khang Tran, Ferdinando Fioretto, Issa Khalil, My T. Thai, Linh Thi Xuan Phan, NhatHai Phan. 956-976
- Privacy Vulnerabilities in Marginals-based Synthetic Data. Steven Golob, Sikha Pentyala, Anuar Maratkhan, Martine De Cock. 977-995
- Avoiding Pitfalls for Privacy Accounting of Subsampled Mechanisms Under Composition. Christian Janos Lebeda, Matthew Regehr, Gautam Kamath, Thomas Steinke. 996-1006
- Auditing Differential Privacy Guarantees Using Density Estimation. Antti Koskela, Jafar Mohammadi. 1007-1026