A Preconditioned Version of a Nested Primal-Dual Algorithm for Image Deblurring

Aleotti S.;Donatelli M.;Scarlato G.
2025-01-01

Abstract

Variational models for image deblurring problems typically consist of a smooth term and a potentially non-smooth convex term. A common approach to solving these problems is to use proximal gradient methods. To accelerate the convergence of these first-order iterative algorithms, strategies such as variable metric methods have been introduced in the literature. In this paper, we prove that, for image deblurring problems, the variable metric strategy proposed in Aleotti et al. (Comput. Optim. Appl., 2024) can be reinterpreted as a right preconditioning method. Consequently, we explore an inexact left-preconditioned version of the same proximal gradient method. We prove the convergence of the new iteration to the minimum of a variational model where the norm of the data fidelity term depends on the preconditioner. The numerical results show that left and right preconditioning are comparable in terms of the number of iterations required to reach a prescribed tolerance, but left preconditioning needs much less CPU time, as it involves fewer evaluations of the preconditioner matrix compared to right preconditioning. The quality of the computed solutions with left and right preconditioning is comparable. Finally, we propose some non-stationary sequences of preconditioners that allow for fast and stable convergence to the solution of the variational problem with the classical ℓ2–norm on the fidelity term.
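The abstract contrasts left and right preconditioning of a proximal gradient iteration. A minimal sketch of the left-preconditioned variant it describes — where the preconditioner P enters the fidelity norm, so the smooth gradient becomes Aᵀ P (Ax − b) — is given below. This is an illustrative toy, not the authors' algorithm: the problem min ½‖Ax − b‖²_P + λ‖x‖₁, the names A, b, P, lam, and the small random instance are all assumptions for demonstration.

```python
# Hedged sketch: left-preconditioned proximal gradient for
#   min_x 0.5 * (Ax - b)^T P (Ax - b) + lam * ||x||_1,
# with P symmetric positive definite. Not the paper's implementation.
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t*||.||_1 (componentwise soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def left_precond_prox_grad(A, b, P, lam, step, n_iter=500):
    # Gradient of the P-weighted fidelity term is A^T P (A x - b);
    # the prox step handles the non-smooth l1 penalty.
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (P @ (A @ x - b))
        x = soft_threshold(x - step * grad, step * lam)
    return x

# Tiny noiseless example; with P = I the iteration reduces to the
# classical (unpreconditioned) proximal gradient / ISTA method.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10))
x_true = np.zeros(10)
x_true[:3] = 1.0
b = A @ x_true
P = np.eye(20)
step = 1.0 / np.linalg.norm(A.T @ P @ A, 2)  # 1/L for the smooth part
x_hat = left_precond_prox_grad(A, b, P, lam=0.1, step=step)
```

In the paper's setting P would instead approximate the inverse of the blurring operator's normal matrix, which is what accelerates convergence; the identity is used here only so the toy runs out of the box.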
2025
Convex optimization; Ill-posed problems; Image deblurring; Preconditioning
Aleotti, S.; Donatelli, M.; Krause, R.; Scarlato, G.
Files in this item:
There are no files associated with this item.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this item: https://hdl.handle.net/11383/2200753

Citations
  • PMC: ND
  • Scopus: 0
  • Web of Science: 0