STOCHASTIC TRUST-REGION AND DIRECT-SEARCH METHODS: A WEAK TAIL BOUND CONDITION AND REDUCED SAMPLE SIZING

Rinaldi F.; Zeffiro D.
2024

Abstract

Using tail bounds, we introduce a new probabilistic condition for function estimation in stochastic derivative-free optimization (SDFO) which leads to a reduction in the number of samples and eases algorithmic analyses. Moreover, we develop simple stochastic direct-search and trust-region methods for the optimization of a potentially nonsmooth function whose values can only be estimated via stochastic observations. For trial points to be accepted, these algorithms require the estimated function values to yield a sufficient decrease measured in terms of a power larger than 1 of the algorithmic stepsize. Our new tail bound condition is imposed precisely on the reduction estimate used to achieve such a sufficient decrease. This condition allows us to select the stepsize power used for sufficient decrease in such a way that the number of samples needed per iteration is reduced. In previous works, the number of samples necessary for global convergence at every iteration k of this type of algorithm was O(Δ_k^{-4}), where Δ_k is the stepsize or trust-region radius. However, using the new tail bound condition, and under mild assumptions on the noise, one can prove that such a number of samples is only O(Δ_k^{-2-ε}), where ε > 0 can be made arbitrarily small by selecting the power of the stepsize in the sufficient decrease test arbitrarily close to 1. In the common random number generator setting, a further improvement by a factor of Δ_k^2 can be obtained. The global convergence properties of the stochastic direct-search and trust-region algorithms are established under the new tail bound condition.
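To make the sampling scheme concrete, here is a minimal sketch of the direct-search side of the approach, reconstructed only from the abstract: a trial point is accepted when the estimated reduction exceeds c·Δ_k^q for some power q > 1, and each function estimate averages on the order of Δ_k^{-(2+ε)} noisy observations. Every name, parameter, and default below (stochastic_direct_search, f_sample, c, gamma, the coordinate poll set, the stopping rule) is an illustrative assumption, not the authors' implementation.

```python
import math

import numpy as np


def stochastic_direct_search(f_sample, x0, delta0=1.0, q=1.1, eps=0.1,
                             c=1e-4, gamma=0.5, delta_min=0.05,
                             max_iter=100, rng=None):
    """Sketch of a stochastic direct-search loop with reduced sampling.

    f_sample(x, rng) returns one noisy observation of the objective at x;
    the true objective is never evaluated. All parameters are hypothetical.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    delta, n = delta0, len(x)

    for _ in range(max_iter):
        if delta <= delta_min:
            break
        # Per-estimate sample size O(delta^{-(2+eps)}); eps > 0 can be taken
        # small when the sufficient-decrease power q is close to 1.
        n_k = math.ceil(delta ** -(2.0 + eps))

        def estimate(y):
            return float(np.mean([f_sample(y, rng) for _ in range(n_k)]))

        fx = estimate(x)
        accepted = False
        # Poll along the positive spanning set {+/- e_i} of coordinate axes.
        for d in np.vstack([np.eye(n), -np.eye(n)]):
            # Sufficient decrease measured by a power q > 1 of the stepsize.
            if fx - estimate(x + delta * d) >= c * delta ** q:
                x, accepted = x + delta * d, True
                break
        # Expand the stepsize on success, shrink it otherwise.
        delta = delta / gamma if accepted else gamma * delta
    return x


if __name__ == "__main__":
    # Toy usage: minimize a quadratic observed under additive Gaussian noise.
    def noisy_quadratic(x, rng):
        return float(np.sum(x ** 2) + 0.01 * rng.standard_normal())

    x_best = stochastic_direct_search(noisy_quadratic, x0=[1.0, -2.0],
                                      rng=np.random.default_rng(0))
    print(x_best)
```

In the common random number setting mentioned in the abstract, one would additionally reuse the same random draws when estimating the objective at the incumbent and at the trial point; this reduces the variance of the estimated reduction, which is what yields the further Δ_k^2 saving in the sample size.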
Use this identifier to cite or link to this document: https://hdl.handle.net/11577/3530801