

We study unconstrained optimization problems with a nonsmooth convex objective function given in the form of a mathematical expectation. The proposed method approximates the expected objective function by a sample average function, with the sample size chosen adaptively through an Inexact Restoration scheme. The algorithm uses a line search and assumes descent directions with respect to the current approximate function. We prove almost sure convergence under standard assumptions. Numerical results for two types of problems, machine learning loss functions for training classifiers and stochastic linear complementarity problems, demonstrate the efficiency of the proposed scheme.
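To make the algorithmic ingredients concrete, the following is a minimal Python sketch of a variable-sample-size subgradient scheme of this flavor, applied to a synthetic hinge-loss classification problem. The data, the backtracking parameters, and in particular the sample-size update rule are illustrative assumptions; the actual Inexact Restoration rule for adapting the sample size is not reproduced here.

```python
import numpy as np

# Minimal sketch of a variable-sample-size subgradient scheme with line search,
# in the spirit of the method described above. The sample-size update below is
# a simple placeholder heuristic, NOT the Inexact Restoration rule of the paper.

rng = np.random.default_rng(0)

# Synthetic binary classification data (hypothetical example problem):
# nonsmooth convex objective = averaged hinge loss.
n_total, dim = 5000, 20
A = rng.normal(size=(n_total, dim))
x_true = rng.normal(size=dim)
b = np.sign(A @ x_true + 0.1 * rng.normal(size=n_total))

def f_N(x, idx):
    """Sample average approximation of the expected hinge loss over sample idx."""
    return np.mean(np.maximum(0.0, 1.0 - b[idx] * (A[idx] @ x)))

def subgrad_N(x, idx):
    """A subgradient of the sample average function at x."""
    margins = b[idx] * (A[idx] @ x)
    active = margins < 1.0
    return -(b[idx][active, None] * A[idx][active]).sum(axis=0) / len(idx)

x = np.zeros(dim)
N = 100                        # initial sample size
max_iter, eta, beta = 200, 1e-4, 0.5

for k in range(max_iter):
    idx = rng.choice(n_total, size=N, replace=False)
    g = subgrad_N(x, idx)
    d = -g                     # direction w.r.t. the current approximate function
    fx = f_N(x, idx)

    # Armijo-type backtracking line search on the current sample average function.
    alpha = 1.0
    while f_N(x + alpha * d, idx) > fx - eta * alpha * np.dot(g, g):
        alpha *= beta
        if alpha < 1e-10:
            break
    x = x + alpha * d

    # Placeholder sample-size update (the paper adapts N via Inexact Restoration).
    if alpha < 1e-3 and N < n_total:
        N = min(2 * N, n_total)

print("final sample size:", N,
      "objective on full sample:", f_N(x, np.arange(n_total)))
```

The sketch only illustrates the interplay of the three ingredients named in the abstract: sample average approximation, a descent direction for the current approximate function, and a line search, with the sample size growing as the iteration progresses.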
| Engineering controlled terms: | Approximation algorithms; Convex optimization; Restoration; Sampling; Stochastic systems |
|---|---|
| Engineering uncontrolled terms: | Inexact restoration; Machine-learning; Nonsmooth optimization; Objective functions; Sample average approximation; Sample sizes; Stochastic linear complementarity problem; Stochastics; Subgradient; Variable sample size |
| Engineering main heading: | Machine learning |
| Funding sponsor | Funding number | Acronym |
|---|---|---|
| Ministarstvo Prosvete, Nauke i Tehnološkog Razvoja | | MPNTR |
| Provincial Secretariat for Higher Education and Scientific Research, Autonomous Province of Vojvodina | 142-451-2593/2021-01/2 | |
The work of Krejić and Krklec Jerinkić is supported by the Provincial Secretariat for Higher Education and Scientific Research of Vojvodina, grant no. 142-451-2593/2021-01/2. The work of Ostojić is supported by the Ministry of Education, Science and Technological Development, Republic of Serbia.
Ostojić, T.; Department of Fundamental Sciences, Faculty of Technical Sciences, University of Novi Sad, Trg Dositeja Obradovića 6, Novi Sad, Serbia;
© Copyright 2022 Elsevier B.V., All rights reserved.