Journal: Computational Optimization and Applications

Volume 84, Issue 3

651 -- 702  Brian Irwin, Eldad Haber. Secant penalized BFGS: a noise robust quasi-Newton method via penalizing the secant condition
703 -- 735  Immanuel M. Bomze, Bo Peng. Conic formulation of QPCCs applied to truly sparse QPs
737 -- 760  Jie Zhang, Xinmin Yang, Gaoxi Li, Ke Zhang. ℓ0 regularization problem
761 -- 788  Quan Yu, Xinzhen Zhang. T-product factorization based method for matrix and tensor completion problems
789 -- 831  Renaud Chicoisne. Computational aspects of column generation for nonlinear and conic optimization: classical and linearized schemes
833 -- 874  Naoki Marumo, Takayuki Okuno, Akiko Takeda. Majorization-minimization-based Levenberg-Marquardt method for constrained nonlinear least squares
875 -- 919  Yitian Qian, Shaohua Pan, Shujun Bi. A matrix nonconvex relaxation approach to unconstrained binary polynomial programs
921 -- 972  Claire Boyer, Antoine Godichon-Baggioni. On the asymptotic rate of convergence of Stochastic Newton algorithms and their Weighted Averaged versions
973 -- 1003  Shisen Liu, Xiaojun Chen. Lifted stationary points of sparse optimization with complementarity constraints
1005 -- 1033  Yonggang Pei, Shaofang Song, Detong Zhu. A sequential adaptive regularisation using cubics algorithm for solving nonlinear equality constrained optimization

Volume 84, Issue 2

295 -- 318  José Yunier Bello Cruz, Max L. N. Gonçalves, Nathan Krislock. On FISTA with a relative error rule
319 -- 362  Silvia Bonettini, Peter Ochs, Marco Prato, Simone Rebegoldi. An abstract convergence framework with application to inertial inexact forward-backward methods
363 -- 395  A. A. Aguiar, Orizon Pereira Ferreira, Leandro da Fonseca Prudente. Inexact gradient projection method with relative error tolerance
397 -- 420  Orizon Pereira Ferreira, Geovani Nunes Grapiglia, E. M. Santos, J. C. O. Souza. A subgradient method with non-monotone line search
421 -- 447  Jiawang Nie, Suhan Zhong. Loss functions for finite sets
449 -- 476  Andrew Butler, Roy H. Kwon. Efficient differentiable quadratic programming layers: an ADMM approach
477 -- 508  Zhikai Yang, Le Han. A global exact penalty for rank-constrained optimization problem and applications
509 -- 529  Yuning Yang. On global convergence of alternating least squares for tensor approximation
531 -- 572  Juan Gao, Xinwei Liu, Yu-Hong Dai, Yakui Huang, Junhua Gu. Distributed stochastic gradient tracking methods with momentum acceleration for non-convex optimization
573 -- 607  Serge Gratton, Philippe L. Toint. OFFO minimization algorithms for second-order optimality and their complexity
609 -- 649  Kaizhao Sun, X. Andy Sun. A two-level distributed algorithm for nonconvex constrained optimization

Volume 84, Issue 1

1 -- 4  Valeria Ruggiero, Gerardo Toraldo. Special issue for SIMAI 2020-2021: large-scale optimization and applications
5 -- 26  Laura Antonelli, Valentina de Simone, Marco Viola. Cartoon-texture evolution for two-region image segmentation
27 -- 52  Annamaria Barbagallo, Serena Guarino Lo Bianco. A random time-dependent noncooperative equilibrium problem
53 -- 84  Stefania Bellavia, Natasa Krejic, Benedetta Morini, Simone Rebegoldi. A stochastic first-order trust-region method with inexact restoration for finite-sum minimization
85 -- 123  Silvia Bonettini, Marco Prato, Simone Rebegoldi. A nested primal-dual FISTA-like scheme for composite convex optimization problems
125 -- 149  Pasquale Cascarano, Giorgia Franchini, Erich Kobler, Federica Porta, Andrea Sebastiani. Constrained and unconstrained deep image prior optimization models with automatic regularization
151 -- 189  Serena Crisci, Federica Porta, Valeria Ruggiero, Luca Zanni. Hybrid limited memory gradient projection methods for box-constrained optimization problems
191 -- 223  Dominik Garmatter, Margherita Porcelli, Francesco Rinaldi, Martin Stoll. An improved penalty algorithm using model order reduction for MIPDECO problems with partial observations
225 -- 264  Francesco Rinaldi, Damiano Zeffiro. Avoiding bad steps in Frank-Wolfe variants
265 -- 294  Bernhard Stankewitz, Nicole Mücke, Lorenzo Rosasco. From inexact optimization to learning via gradient concentration