
    Projected Landweber method and preconditioning


    1997 Inverse Problems 13 441

    (http://iopscience.iop.org/0266-5611/13/2/016)


Inverse Problems 13 (1997) 441–463. Printed in the UK. PII: S0266-5611(97)76476-1

    Projected Landweber method and preconditioning

    M Piana and M Bertero

INFM and Dipartimento di Fisica dell'Università di Genova, via Dodecaneso 33, I-16146

    Genova, Italy

    Received 12 July 1996

    Abstract. The projected Landweber method is an iterative method for solving constrained

    least-squares problems when the constraints are expressed in terms of a convex and closed set

C. The convergence properties of the method have been recently investigated. Moreover, it has important applications to many problems of signal processing and image restoration. The

    practical difficulty is that the convergence is too slow. In this paper we apply to this method the

    so-called preconditioning which is frequently used for increasing the efficiency of the conjugate

    gradient method. We discuss the significance of preconditioning in this case and we show that

    it implies a modification of the original constrained least-squares problem. However, when the

    original problem is ill-posed, the approximate solutions provided by the preconditioned method

    are similar to those provided by the standard method if the preconditioning is suitably chosen.

    Moreover, the number of iterations can be reduced by a factor of 10 and even more. A few

    applications to problems of image restoration are also discussed.

    1. Introduction

    In the theory of linear inverse problems, the use of a priori information is basic for

    defining approximate solutions which are stable and physically sound. In many instances

    this information can be exploited by requiring that the solution belongs to a closed and

convex set C. Assume that the linear inverse problem consists in solving the equation

$Af = g$ (1.1)

where $A : X \to Y$ is a linear and continuous operator, $X$ and $Y$ are Hilbert spaces and $g \in Y$ is the datum of the problem, corrupted by noise or experimental errors. In other words $g$ has the following structure:

$g = Af^{(0)} + w$ (1.2)

where $f^{(0)}$ is the function to be estimated and $w$ is a term describing the noise. If the

range of A, R(A), is not closed, then problem (1.1), in general, has no solution when $g$ has the structure (1.2), because $w$ does not belong to R(A). Moreover, if $f^{(0)}$ belongs to some closed and convex set C, the use of linear regularization methods does not provide

approximate solutions which belong to C. In such a case one can reformulate the original problem (1.1) as a constrained least-

    squares problem:

$\|Af - g\|_Y = \mathrm{minimum} \qquad f \in C.$ (1.3)
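A minimal finite-dimensional sketch of the situation behind (1.1)–(1.3): when the datum is contaminated as in (1.2), it generically falls outside the range of A, so the equation has no exact solution and a least-squares formulation is needed. All matrices and values below are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# A "tall" matrix, so its range R(A) is a proper subspace of Y = R^4
# and a generic noisy datum g = A f0 + w falls outside R(A).
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0],
              [1.0, -1.0]])
f0 = np.array([2.0, 3.0])          # the object to be estimated, f^(0)
w = 0.05 * rng.standard_normal(4)  # noise term w, as in (1.2)
g = A @ f0 + w

# Exact solvability of A f = g requires g in R(A); the least-squares
# residual measures the component of g orthogonal to R(A).
f_ls, *_ = np.linalg.lstsq(A, g, rcond=None)
residual = np.linalg.norm(A @ f_ls - g)
print(residual > 0)   # noise pushes g off R(A), so the residual is nonzero
```

In finite dimensions R(A) is always closed, so the example only illustrates the unsolvability of the exact equation, not the ill-posedness discussed in the text.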

0266-5611/97/020441+23$19.50 © 1997 IOP Publishing Ltd


When R(A) is not closed this problem can still be ill-posed. The existence of solutions depends on the set C and we have the following cases. Let A(C) be the image of C (A(C) is also a convex set).

Problem (1.3) has a solution for any $g \in Y$ if and only if the set A(C) is closed. This condition is satisfied, for instance, when the set C is bounded, a case considered in one of the pioneering papers on ill-posed problems [1].

If A(C) is not closed, then problem (1.3) has a solution if and only if the convex projection of $g$ onto $\overline{A(C)}$ belongs to A(C). If C is, for instance, the set of non-negative functions in $L^2(\mathbb{R})$, then A(C), in general, is not closed.

    It may be interesting to note that the problem of constrained regularized solutions, i.e.

$\|Af - g\|_Y^2 + \mu \|f\|_X^2 = \mathrm{minimum} \qquad f \in C$ (1.4)

can also be formulated as a constrained least-squares problem. In fact, if $Z = Y \times X$ and $\tilde A : X \to Z$ is defined by $\tilde A f = \{Af, \sqrt{\mu}\, f\}$, then (1.4) can be rewritten as follows:

$\|\tilde A f - \tilde g\|_Z = \mathrm{minimum} \qquad f \in C$ (1.5)

where $\tilde g = \{g, 0\} \in Z$. This problem, which has been investigated by Neubauer [2], has interesting applications in image restoration, as proved by Lagendijk et al [3], if one wishes

to reduce the ringing effects, typical of linear methods, by the use of the positivity constraint. Since the operator $\tilde A$ has a bounded inverse, problem (1.5) is well-posed.
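The stacking identity behind (1.4)–(1.5) can be checked numerically: with $Z = Y \times X$, $\tilde A f = \{Af, \sqrt{\mu}\,f\}$ and $\tilde g = \{g, 0\}$, the regularized functional equals the squared residual in Z. The particular matrices and the value of the parameter below are arbitrary test values, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 3))
f = rng.standard_normal(3)
g = rng.standard_normal(5)
mu = 0.1   # illustrative regularization parameter

# Left-hand side: the constrained regularized functional of (1.4).
lhs = np.linalg.norm(A @ f - g)**2 + mu * np.linalg.norm(f)**2

# Right-hand side: the stacked least-squares functional of (1.5).
A_tilde = np.vstack([A, np.sqrt(mu) * np.eye(3)])   # maps X into Z = Y x X
g_tilde = np.concatenate([g, np.zeros(3)])
rhs = np.linalg.norm(A_tilde @ f - g_tilde)**2

print(np.isclose(lhs, rhs))   # the two functionals agree
```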

    The projected Landweber method is an iterative method which can be used for

    approximating the solutions of the above-mentioned problems. Its convergence and

    regularization properties have been investigated by Eicke [4]. It can also be easily

implemented if the projection operator $P_C$ onto the convex and closed set C can be easily computed. A list of computable convex projections which occur frequently in many signal-

    processing applications and image restoration problems is provided by Youla and Webb

    [5].

    It follows from many numerical simulations and practical applications (we will also

    give examples in this paper) that this method can provide much better estimates than the

    usual linear regularization methods. The practical difficulty is that the convergence is too

    slow, i.e. too many iterations are required to obtain the best approximation. For this reason

we investigate the possibility of applying to the projected Landweber method the so-called preconditioning which is frequently used for increasing the efficiency of the conjugate

    gradient method [6]. We will find that, in fact, the use of preconditioning increases the

    convergence speed even if it implies a modification of the original least-squares problem.

    The plan of the paper is as follows. In section 2 we summarize the main results and open

    problems concerning the projected Landweber method. In section 3 we briefly describe a

    method introduced by Strand [7] for increasing the speed of convergence of the Landweber

    method, we show that this is one of the various possible forms of preconditioning and

    we prove that this can also be considered as the application of the Landweber method to

    a modified least-squares problem. However, in the absence of constraints, this modified

    problem is equivalent to the original one. This result is no longer true in the case of

    constraints, so that the projected Landweber method when applied to the modified least-

    squares problem can provide different results. This point is discussed in section 4 where

    we also give two rather natural forms of preconditioning. We show by means of numerical

    examples that, in such a way, one gets results similar to those provided by the standard

    projected Landweber method but with a much smaller number of iterations. Finally, in

    section 5 we discuss a few applications to problems in image restoration.


    2. The projected Landweber method: state of the art and open problems

    Let us consider first the non-constrained least-squares problem associated with the first-kind

    equation (1.1)

$\|Af - g\|_Y = \mathrm{minimum}.$ (2.1)

Let us denote by N(A) the null space of A. If R(A) is not closed, then the problem (2.1) is ill-posed. Least-squares solutions exist if and only if $g \in R(A) \oplus R(A)^{\perp}$. In such a case the set of least-squares solutions coincides with the set of the solutions of the Euler

    equation

$A^{*}Af = A^{*}g$ (2.2)

where $A^{*}$ denotes the adjoint operator of A. The minimal norm solution is the so-called generalized solution $f^{\dagger}$, which is also the unique least-squares solution belonging to $N(A)^{\perp}$. The generalized inverse $A^{\dagger}$ is the linear operator with domain $D(A^{\dagger}) = R(A) \oplus R(A)^{\perp}$ and range $R(A^{\dagger}) = N(A)^{\perp}$, defined by [8]

$f^{\dagger} = A^{\dagger} g.$ (2.3)
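In finite dimensions the generalized solution of (2.3) is the Moore–Penrose pseudoinverse applied to the datum; a minimal sketch with a rank-deficient matrix (chosen here for illustration) confirms that it satisfies the Euler equation (2.2) and is orthogonal to the null space N(A).

```python
import numpy as np

A = np.array([[1.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])   # rank 2, with nontrivial null space
g = np.array([2.0, 5.0])

f_dag = np.linalg.pinv(A) @ g     # generalized solution f_dagger

# The Euler equation A* A f = A* g holds:
print(np.allclose(A.T @ A @ f_dag, A.T @ g))   # True

# f_dag is orthogonal to N(A) = span{(1, -1, 0)}:
null_vec = np.array([1.0, -1.0, 0.0])
print(np.isclose(f_dag @ null_vec, 0.0))       # True
```

Among all least-squares solutions (here a whole line, since N(A) is nontrivial), `f_dag` is the one of minimal norm, matching the characterization in the text.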

If we introduce the nonlinear operator $G : X \to X$

$G(f) = f + \tau\,(A^{*}g - A^{*}Af)$ (2.4)

    then it is obvious that the set of solutions of problem (2.1) coincides with the set of the fixed
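The fixed-point structure above suggests the obvious iterative scheme, i.e. repeatedly applying the map of (2.4). A minimal sketch, with an illustrative well-conditioned matrix and a relaxation parameter set to $1/\|A\|^2$ (a conventional admissible choice, not a value from the paper):

```python
import numpy as np

A = np.array([[2.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
g = np.array([4.0, 1.0, 3.0])

tau = 1.0 / np.linalg.norm(A, 2)**2   # relaxation parameter
f = np.zeros(2)                       # starting point f_0 = 0
for _ in range(500):
    f = f + tau * (A.T @ g - A.T @ A @ f)   # f_{k+1} = G(f_k)

# The iterates approach the least-squares solution of (2.1).
f_ls, *_ = np.linalg.lstsq(A, g, rcond=None)
print(np.allclose(f, f_ls, atol=1e-6))   # True
```

With the zero starting point the iterates stay in $N(A)^{\perp}$, so in the ill-posed infinite-dimensional setting this scheme would converge to the generalized solution $f^{\dagger}$ of (2.3).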

points of the operator G. Moreover, if the relaxation parameter $\tau$ satisfies the conditions

    0