Bergamaschi, M., Cristofari, A., Kungurtsev, V., Rinaldi, F. (2026). Probabilistic iterative hard thresholding for sparse learning. Computational Optimization and Applications, 93(1), 57-83. https://doi.org/10.1007/s10589-025-00724-6
Probabilistic iterative hard thresholding for sparse learning
Bergamaschi, M.; Cristofari, Andrea; Kungurtsev, V.; Rinaldi, F.
2026-01-01
Abstract
For statistical modeling in which the data regime is unfavorable in terms of dimensionality relative to the sample size, finding hidden sparsity in the relationship structure between variables can be critical to formulating an accurate model. The so-called “ℓ₀ norm”, which counts the number of non-zero components of a vector, is a strong and reliable mechanism for enforcing sparsity when incorporated into an optimization problem that minimizes the misfit of a given model to a set of observations. However, in big-data settings, where noisy estimates of the gradient must be evaluated out of computational necessity, the literature is scant on methods that reliably converge. In this paper, we present an approach to solving expectation-objective optimization problems with cardinality constraints. We prove convergence of the underlying stochastic process and demonstrate its performance on two machine learning problems.

| File | Size | Format | |
|---|---|---|---|
| unpaywall-bitstream--1884260321.pdf (open access; type: publisher's version (PDF); license: Creative Commons) | 2.27 MB | Adobe PDF | View/Open |
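In symbols, the problem class described in the abstract is the cardinality-constrained stochastic program below; the notation (sparsity level s, random sample ξ) is ours, not taken from the record:

```latex
\min_{x \in \mathbb{R}^n} \; \mathbb{E}_{\xi}\bigl[f(x,\xi)\bigr]
\quad \text{subject to} \quad \|x\|_0 \le s,
\qquad \text{where } \|x\|_0 := \bigl|\{\, i : x_i \neq 0 \,\}\bigr|.
```

For orientation, here is a minimal sketch of the classical stochastic iterative-hard-thresholding template that the title alludes to: a gradient step on a noisy estimate of the expectation objective, followed by hard thresholding onto the cardinality constraint. The function names, the fixed step size, and the minibatch oracle are illustrative assumptions; the paper's probabilistic variant and its convergence guarantees are in the linked PDF and are not reproduced here.

```python
import numpy as np

def hard_threshold(x, s):
    """Project onto {x : ||x||_0 <= s}: keep the s largest-magnitude
    entries of x and zero out the rest."""
    out = np.zeros_like(x)
    keep = np.argsort(np.abs(x))[-s:]  # indices of the s largest |x_i|
    out[keep] = x[keep]
    return out

def stochastic_iht(stoch_grad, x0, s, step=0.01, iters=1000):
    """Generic stochastic IHT loop (an illustrative sketch, not the
    paper's algorithm): stochastic gradient step + hard thresholding."""
    x = hard_threshold(x0, s)
    for _ in range(iters):
        g = stoch_grad(x)                    # noisy gradient estimate
        x = hard_threshold(x - step * g, s)  # descent step + projection
    return x

# Toy usage (hypothetical data): sparse least squares with minibatches.
rng = np.random.default_rng(0)
A = rng.standard_normal((500, 100))
x_true = np.zeros(100)
x_true[:5] = 1.0
b = A @ x_true + 0.01 * rng.standard_normal(500)

def minibatch_grad(x, batch=32):
    i = rng.integers(0, A.shape[0], size=batch)
    return A[i].T @ (A[i] @ x - b[i]) / batch

x_hat = stochastic_iht(minibatch_grad, np.zeros(100), s=5)
```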
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.


