- How (not) to ensemble LVLMs for VQA. Lisa Alazraki, Lluís Castrejón, Mostafa Dehghani, Fantine Huot, Jasper R. R. Uijlings, Thomas Mensink. 1-20
- Can Visual Scratchpads With Diagrammatic Abstractions Augment LLM Reasoning? Joy Hsu, Gabriel Poesia, Jiajun Wu, Noah D. Goodman. 21-28
- Filter bubbles and affective polarization in user-personalized large language model outputs. Tomo Lazovich. 29-37
- Are large language models good annotators? Jay Mohta, Kenan E. Ak, Yan Xu, Mingwei Shen. 38-48
- Self-Evaluation Improves Selective Generation in Large Language Models. Jie Ren, Yao Zhao, Tu Vu, Peter J. Liu, Balaji Lakshminarayanan. 49-64
- Is Scaling Learned Optimizers Worth It? Evaluating The Value of VeLO's 4000 TPU Months. Fady Rezk, Antreas Antoniou, Henry Gouk, Timothy M. Hospedales. 65-83
- Exploring Social Bias in Downstream Applications of Text-to-Image Foundation Models. Adhithya Prakash Saravanan, Rafal Kocielnik, Roy Jiang, Pengrui Han, Anima Anandkumar. 84-102
- Adversarial Attacks and Defenses in Large Language Models: Old and New Threats. Leo Schwinn, David Dobre, Stephan Günnemann, Gauthier Gidel. 103-117
- The Role of Linguistic Priors in Measuring Compositional Generalization of Vision-Language Models. Chenwei Wu, Li Erran Li, Stefano Ermon, Patrick Haffner, Rong Ge, Zaiwei Zhang. 118-126
- Pre-trained Language Models Do Not Help Auto-regressive Text-to-Image Generation. Yuhui Zhang, Brandon McKinzie, Zhe Gan, Vaishaal Shankar, Alexander Toshev. 127-133