Electronic Transactions on Numerical Analysis. Volume 18, pp. 153-173, 2004.

Copyright © 2004, Kent State University. ISSN 1068-9613.

ETNA

Kent State University etna@mcs.kent.edu

TIKHONOV REGULARIZATION WITH NONNEGATIVITY CONSTRAINT

D. CALVETTI, B. LEWIS, L. REICHEL, AND F. SGALLARI

Abstract. Many numerical methods for the solution of ill-posed problems are based on Tikhonov regularization. Recently, Rojas and Steihaug [15] described a barrier method for computing nonnegative Tikhonov-regularized approximate solutions of linear discrete ill-posed problems. Their method is based on solving a sequence of parameterized eigenvalue problems. This paper describes how the solution of parameterized eigenvalue problems can be avoided by computing bounds that follow from the connection between the Lanczos process, orthogonal polynomials and Gauss quadrature.

Key words. ill-posed problem, inverse problem, solution constraint, Lanczos methods, Gauss quadrature.

AMS subject classifications. 65F22, 65F10, 65R30, 65R32, 65R20.

1. Introduction. The solution of large-scale linear discrete ill-posed problems continues to receive considerable attention. Linear discrete ill-posed problems are linear systems of equations

(1.1)  Ax = b,   A ∈ R^{m×n},   x ∈ R^n,   b ∈ R^m,

with a matrix of ill-determined rank. In particular, A has singular values that "cluster" at the origin. Thus, A is severely ill-conditioned and may be singular. We allow the matrix to be rectangular. The right-hand side vector b of linear discrete ill-posed problems that arise in the applied sciences and engineering typically is contaminated by an error e, which, e.g., may stem from measurement errors. Thus, b = b̂ + e, where b̂ is the unknown error-free right-hand side vector associated with the desired solution.

We would like to compute a solution of the linear discrete ill-posed problem with error-free right-hand side,

(1.2)  Ax = b̂.

If A is singular, then we may be interested in computing the solution of minimal Euclidean norm. Let x̂ denote the desired solution of (1.2). We will refer to x̂ as the exact solution.

Let A† denote the Moore-Penrose pseudo-inverse of A. Then A†b is the least-squares solution of minimal Euclidean norm of (1.1). Due to the error e in b and the ill-conditioning of A, the vector A†b generally satisfies

(1.3)  ||A†b|| >> ||x̂||,

and then A†b is not a meaningful approximation of x̂. Throughout this paper ||·|| denotes the Euclidean vector norm or the associated induced matrix norm. We assume that an estimate Δ of ||x̂|| is available and that the components of x̂ are known to be nonnegative. We say that the vector x̂ is nonnegative, and write x̂ ≥ 0. For instance, we may be able to determine Δ from knowledge of the norm of the solution of a related problem already solved, or from physical properties of the inverse problem to be solved. Recently, Ahmad et al. [1] considered the solution of inverse electrocardiography problems and advocated that known constraints on the solution, among them a bound on the solution norm, be imposed, instead of regularizing by Tikhonov's method.

The matrix A is assumed to be so large that its factorization is infeasible or undesirable. The numerical methods for computing an approximation of x̂ discussed in this paper only require the evaluation of matrix-vector products with A and its transpose.

Rojas and Steihaug [15] recently proposed that an approximation of x̂ be determined by solving the constrained minimization problem

(1.4)  min ||Ax − b||   subject to   ||x|| ≤ Δ,  x ≥ 0,

and they presented a barrier function method for the solution of (1.4).

Let P(z) denote the orthogonal projection of a vector z onto the set

(1.5)  {x ∈ R^n : x ≥ 0},

i.e., we obtain P(z) by setting all negative entries of z to zero. In view of (1.3), it is reasonable to assume that the inequality

(1.6)  ||P(A†b)|| ≥ Δ

holds. Then the minimization problems (1.4) and

(1.7)  min ||Ax − b||   subject to   ||x|| = Δ,  x ≥ 0,

have the same solution. Thus, for almost all linear discrete ill-posed problems of interest, the minimization problems (1.4) and (1.7) are equivalent. Indeed, the numerical method described by Rojas and Steihaug [15, Section 3] solves the problem (1.7).
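The projection onto the set (1.5) amounts to zeroing the negative entries of a vector; a minimal numpy sketch of this operation (the function name is ours, not the paper's):

```python
import numpy as np

def project_nonneg(z):
    """Orthogonal projection onto {x : x >= 0}: set negative entries to zero."""
    return np.maximum(z, 0.0)

z = np.array([1.5, -0.3, 0.0, -2.0, 4.0])
p = project_nonneg(z)
# p agrees with z in the nonnegative entries and vanishes elsewhere
```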

The present paper describes a new approach to the solution of (1.7). Our method makes use of the connection between the Lanczos process, orthogonal polynomials, and quadrature rules of Gauss-type to compute upper and lower bounds for certain functionals. This connection makes it possible to avoid the solution of large parameterized eigenvalue problems. A nice survey of how the connection between the Lanczos process, orthogonal polynomials, and Gauss quadrature can be exploited to bound functionals is provided by Golub and Meurant [6].

Recently, Rojas and Sorensen [14] proposed a method for solving the minimization problem

(1.8)  min ||Ax − b||   subject to   ||x|| ≤ Δ,

without nonnegativity constraint, based on the LSTRS method. LSTRS is a scheme for the solution of large-scale quadratic minimization problems that arise in trust-region methods for optimization. The LSTRS method expresses the quadratic minimization problem as a parameterized eigenvalue problem, whose solution is determined by an implicitly restarted Arnoldi method. Matlab code for the LSTRS method has been made available by Rojas [13].

The solution method proposed by Rojas and Steihaug [15] for minimization problems of the form (1.7) with nonnegativity constraint is an extension of the scheme used for the solution of minimization problems of the form (1.8) without nonnegativity constraint, in the sense that the solution of (1.7) is computed by solving a sequence of minimization problems of the form (1.8). Rojas and Steihaug [15] solve each one of the latter minimization problems by applying the LSTRS method.

Similarly, our solution method for (1.7) is an extension of the scheme for the solution of

(1.9)  min ||Ax − b||   subject to   ||x|| = Δ,

described in [5], because an initial approximate solution of (1.7) is determined by first solving (1.9), using the method proposed in [5], and then setting negative entries in the computed solution to zero. Subsequently, we determine improved approximate solutions of (1.7) by solving a sequence of minimization problems without nonnegativity constraint of a form closely related to (1.9). The methods used for solving the minimization problems without nonnegativity constraint are modifications of a method presented by Golub and von Matt [7]. We remark that (1.6) yields ||A†b|| ≥ Δ, and the latter inequality implies that the minimization problems (1.8) and (1.9) have the same solution.

This paper is organized as follows. Section 2 reviews the numerical scheme described in [5] for the solution of (1.9), and Section 3 presents an extension of this scheme, which is applicable to the solution of the nonnegatively constrained problem (1.7). A few numerical examples with the latter scheme are described in Section 4, where also a comparison with the method of Rojas and Steihaug [15] is presented. Section 5 contains concluding remarks.

Ill-posed problems with nonnegativity constraints arise naturally in many applications, e.g., when the components of the solution represent energy, concentrations of chemicals, or pixel values. The importance of these problems is seen by the many numerical methods that recently have been proposed for their solution; besides the method by Rojas and Steihaug [15], see also Bertero and Boccacci [2, Section 6.3], Hanke et al. [8], Nagy and Strakos [11], and references therein. Code for some methods for the solution of nonnegatively constrained least-squares problems has been made available by Nagy [10]. There probably is not one best method for all large-scale nonnegatively constrained ill-posed problems. It is the purpose of this paper to describe a variation of the method by Rojas and Steihaug [15] which can reduce the computational effort for some problems.

2. Minimization without nonnegativity constraint. In order to be able to compute a meaningful approximation of the minimal-norm least-squares solution x̂ of (1.2), the linear system (1.1) has to be modified to be less sensitive to the error e in b. Such a modification is commonly referred to as regularization, and one of the most popular regularization methods is due to Tikhonov. In its simplest form, Tikhonov regularization replaces the solution of the linear system of equations (1.1) by the solution of the Tikhonov equations

(2.1)  (A^T A + μI)x = A^T b.

For each positive value of the regularization parameter μ, equation (2.1) has the unique solution

(2.2)  x_μ = (A^T A + μI)^{-1} A^T b.

It is easy to see that

lim_{μ→0} x_μ = A†b,   lim_{μ→∞} x_μ = 0.

These limits generally do not provide meaningful approximations of x̂. Therefore the choice of a suitable bounded positive value of the regularization parameter μ is essential. The value of μ determines how sensitive the solution x_μ of (2.1) is to the error e, how large the discrepancy ||Ax_μ − b|| is, and how close x_μ is to the desired solution x̂ of (1.2). For instance, the matrix A^T A + μI is more ill-conditioned, i.e., its condition number is larger, the smaller μ is. Hence, the solution x_μ is more sensitive to the error e, the smaller μ is.
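The trade-off governed by the regularization parameter can be observed numerically. The sketch below (our own illustration on a small random matrix standing in for A; it is not code from the paper) solves the Tikhonov equations in the form (A^T A + μI)x = A^T b for increasing μ and checks that the solution norm decreases monotonically:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10))   # stand-in for a discretized operator
b = rng.standard_normal(20)

def tikhonov(A, b, mu):
    """Solve the Tikhonov equations (A^T A + mu*I) x = A^T b."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + mu * np.eye(n), A.T @ b)

mus = [1e-3, 1e-1, 1e1, 1e3]
norms = [np.linalg.norm(tikhonov(A, b, mu)) for mu in mus]
# ||x_mu|| decreases strictly as mu grows, in agreement with the
# monotonicity of the function phi introduced below
assert all(a > c for a, c in zip(norms, norms[1:]))
```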

The following proposition establishes the connection between the minimization problem (1.9) and the Tikhonov equations (2.1).

PROPOSITION 2.1. ([7]) Assume that 0 < Δ < ||A†b||. Then the constrained minimization problem (1.9) has a unique solution x_μ of the form (2.2) with μ > 0. In particular,

(2.3)  ||x_μ|| = Δ.

Introduce the function

(2.4)  φ(μ) := ||x_μ||²,   μ > 0.

PROPOSITION 2.2. ([5]) Assume that A^T b ≠ 0. The function (2.4) can be expressed as

(2.5)  φ(μ) = b^T A (A^T A + μI)^{-2} A^T b,

which shows that φ is strictly decreasing and convex for μ ≥ 0. Moreover, the equation

(2.6)  φ(μ) = δ²

has a unique solution μ_δ, such that 0 < μ_δ < ∞, for any δ that satisfies 0 < δ < ||A†b||.

We would like to determine the solution μ_Δ of the equation

(2.7)  φ(μ) = Δ².

Since the value of Δ, in general, is only an available estimate of ||x̂||, it is typically not meaningful to compute a very accurate approximation of μ_Δ. We outline how a few steps of Lanczos bidiagonalization applied to A yield inexpensively computable upper and lower bounds for φ. These bounds are used to determine an approximation of μ_Δ.

Application of ℓ steps of Lanczos bidiagonalization to the matrix A with initial vector b yields the decompositions

(2.8)  A V_ℓ = U_{ℓ+1} C̄_ℓ,   A^T U_ℓ = V_ℓ C_ℓ^T,

where U_{ℓ+1} ∈ R^{m×(ℓ+1)} and V_ℓ ∈ R^{n×ℓ} satisfy U_{ℓ+1}^T U_{ℓ+1} = I_{ℓ+1} and V_ℓ^T V_ℓ = I_ℓ, and U_{ℓ+1} e_1 = b/||b||. Further, U_ℓ consists of the first ℓ columns of U_{ℓ+1}. Throughout this paper I_j denotes the j×j identity matrix and e_j is the jth axis vector. The matrix C̄_ℓ ∈ R^{(ℓ+1)×ℓ} is bidiagonal,

(2.9)  C̄_ℓ = [c_{jk}],   c_{jk} = 0 unless j = k or j = k+1,

with positive subdiagonal entries c_{k+1,k}; C_ℓ denotes the leading ℓ×ℓ submatrix of C̄_ℓ. The evaluation of the partial Lanczos bidiagonalization (2.8) requires matrix-vector product evaluations with both the matrices A and A^T. We tacitly assume that the number ℓ of Lanczos bidiagonalization steps is small enough so that the decompositions (2.8) with the stated properties exist with c_{ℓ+1,ℓ} > 0. If c_{ℓ+1,ℓ} vanishes, then the development simplifies; see [5] for details.
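A bare-bones sketch of partial Lanczos (Golub-Kahan) bidiagonalization, without the reorthogonalization discussed later in the paper (our own illustration; the variable names are not the paper's):

```python
import numpy as np

def lanczos_bidiag(A, b, ell):
    """ell steps of Lanczos bidiagonalization of A with starting vector b:
    returns U (m x (ell+1)) and V (n x ell) with orthonormal columns and
    the (ell+1) x ell lower bidiagonal C with A @ V = U @ C and
    A.T @ U[:, :ell] = V @ C[:ell, :].T."""
    m, n = A.shape
    U = np.zeros((m, ell + 1))
    V = np.zeros((n, ell))
    C = np.zeros((ell + 1, ell))
    U[:, 0] = b / np.linalg.norm(b)
    beta = 0.0
    for j in range(ell):
        # new right Lanczos vector
        r = A.T @ U[:, j] - (beta * V[:, j - 1] if j > 0 else 0.0)
        alpha = np.linalg.norm(r)
        V[:, j] = r / alpha
        # new left Lanczos vector
        p = A @ V[:, j] - alpha * U[:, j]
        beta = np.linalg.norm(p)
        U[:, j + 1] = p / beta
        C[j, j], C[j + 1, j] = alpha, beta
    return U, V, C

rng = np.random.default_rng(1)
A = rng.standard_normal((30, 12))
b = rng.standard_normal(30)
U, V, C = lanczos_bidiag(A, b, 5)
# both decompositions hold to machine precision
assert np.allclose(A @ V, U @ C)
assert np.allclose(A.T @ U[:, :5], V @ C[:5, :].T)
```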

Let C̄_ℓ = Q_ℓ R_ℓ denote the QR-factorization of C̄_ℓ, i.e., Q_ℓ ∈ R^{(ℓ+1)×ℓ} has orthonormal columns and R_ℓ ∈ R^{ℓ×ℓ} is upper bidiagonal. Let R_{ℓ-1} denote the leading submatrix of R_ℓ of order ℓ−1, and introduce the functions

(2.10)  φ⁻_ℓ(μ),

(2.11)  φ⁺_ℓ(μ),

defined for μ > 0; their explicit formulas, in terms of the quantities computed by the Lanczos process, are given in [5]. Using the connection between the Lanczos process and orthogonal polynomials, the functions φ∓_ℓ can be interpreted as Gauss-type quadrature rules associated with an integral representation of φ. This interpretation yields

(2.12)  φ⁻_ℓ(μ) ≤ φ(μ) ≤ φ⁺_ℓ(μ),   μ > 0;

details are presented in [5]. Here we just remark that the scalar factor common to (2.10) and (2.11) can be computed from the quantities defined by (2.8) and (2.9).

We now turn to the zero-finder used to determine an approximate solution of (2.7). Evaluation of the function φ for several values of μ can be very expensive when the matrix A is large. Our zero-finder only requires evaluation of the functions φ∓_ℓ and of the derivative of φ⁺_ℓ.

PROPOSITION 2.3. ([5]) The function φ⁺_ℓ, defined by (2.11), is strictly decreasing and convex for μ ≥ 0. The equation

(2.16)  φ⁺_ℓ(μ) = Δ²

has a unique solution μ_ℓ, such that 0 < μ_ℓ < ∞, for any Δ that satisfies 0 < Δ < ||A†b||.

The proposition shows that equation (2.16) has a unique solution whenever equation (2.7) has one. Let the initial approximation μ_0 of μ_ℓ satisfy μ_0 > μ_ℓ. We use the quadratically convergent zero-finder by Golub and von Matt [7, equations (75)-(78)] to determine a monotonically decreasing sequence of approximations μ_1, μ_2, ..., of μ_ℓ. The iterations with the zero-finder are terminated as soon as an approximation satisfies a prescribed accuracy requirement.

The computed bounds also determine how many steps of the Lanczos process to carry out.

3. Minimization with nonnegativity constraint. Analogously to Rojas and Steihaug [15], we introduce a barrier function for the nonnegativity constraint. For fixed values of the barrier parameter, we solve a quadratic minimization problem with respect to the solution vector. This is a so-called trust-region subproblem associated with the minimization problem (3.2). A change of variables yields an equivalent quadratic minimization problem. For each value of the regularization parameter and each diagonal matrix with positive diagonal entries, the linear system of equations (3.4)(i) is of a similar type as (2.1). The solution depends on the regularization parameter, and we therefore sometimes indicate this dependence explicitly. Introduce the function

Equation (3.4)(i) yields

(3.5)

and we determine a value of the regularization parameter, such that equation (3.4)(ii) is approximately satisfied, by computing upper and lower bounds for the right-hand side of (3.5) using the Lanczos process, analogously to the approach of Section 2. Application of ℓ steps of Lanczos tridiagonalization to the system matrix of (3.4)(i) with a suitable initial vector yields the decomposition

(3.6)

where the matrix of Lanczos vectors has orthonormal columns. The Lanczos matrix so determined is symmetric and tridiagonal, and since the system matrix of (3.4)(i) is positive definite for positive values of the regularization parameter, the tridiagonal matrix is positive definite as well.
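The symmetric Lanczos process used in (3.6) can be sketched as follows; this is a generic illustration on a random symmetric positive definite matrix, not the authors' code:

```python
import numpy as np

def lanczos_tridiag(M, v, ell):
    """ell steps of the symmetric Lanczos process applied to M with
    initial vector v: returns W (n x ell) with orthonormal columns and
    the ell x ell symmetric tridiagonal T with W.T @ M @ W = T."""
    n = M.shape[0]
    W = np.zeros((n, ell))
    alpha = np.zeros(ell)
    beta = np.zeros(ell - 1)
    W[:, 0] = v / np.linalg.norm(v)
    for j in range(ell):
        w = M @ W[:, j]
        alpha[j] = W[:, j] @ w
        w -= alpha[j] * W[:, j]
        if j > 0:
            w -= beta[j - 1] * W[:, j - 1]
        if j < ell - 1:
            beta[j] = np.linalg.norm(w)
            W[:, j + 1] = w / beta[j]
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    return W, T

rng = np.random.default_rng(2)
B = rng.standard_normal((15, 15))
M = B @ B.T + 0.1 * np.eye(15)   # symmetric positive definite
v = rng.standard_normal(15)
W, T = lanczos_tridiag(M, v, 6)
# T is the orthogonal section of M; it inherits positive definiteness
assert np.allclose(W.T @ M @ W, T)
```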

Assume that the last subdiagonal entry determined by the Lanczos process does not vanish, otherwise the formulas simplify, and introduce the symmetric tridiagonal matrix of order ℓ+1 whose leading principal submatrix is the Lanczos tridiagonal matrix, whose last sub- and super-diagonal entries are furnished by the Lanczos process, and whose last diagonal entry is chosen so that the extended matrix is positive semidefinite with one zero eigenvalue. The last diagonal entry can be computed in O(ℓ) arithmetic floating point operations.

Introduce the functions, analogous to (2.10) and (2.11),

(3.7)

(3.8)

Similarly as in Section 2, the connection between the Lanczos process and orthogonal polynomials makes it possible to interpret these functions as Gauss-type quadrature rules associated with an integral representation of the right-hand side of (3.5). This interpretation yields, analogously to (2.12), the bracketing inequalities

(3.9)

see [4] for proofs of these inequalities and for further properties of the functions.

Let the constant be the same as in equation (2.14). Similarly as in Section 2, we seek to determine a value of the regularization parameter, such that

(3.10)

holds. This condition replaces equation (3.4)(ii) in our numerical method. We determine a pair of parameter value and number of Lanczos steps, such that

(3.11)

The value of the regularization parameter so obtained satisfies (3.10), because it satisfies both (3.9) and (3.11). For many linear systems of equations (3.4)(i), the inequalities (3.11) can be satisfied already for fairly small values of ℓ. Until a sufficiently accurate approximation of the solution of (1.7) has been found, we determine the largest zero of the function

(3.13)

cf. (2.17). The purpose of the term

(3.17)

Let a positive user-specified constant be given, and introduce the vector

(3.18)

and the associated diagonal matrix. The purpose of this constant is to avoid that the vector (3.18) has components very close to zero, since such components would make the corresponding terms very large. Following Rojas and Steihaug [15], we compute

(3.19)

and update the value of the barrier parameter according to

(3.20)

The computations are terminated and the vector given by (3.16) is accepted as an approximate solution of (1.7), as soon as we find a vector, defined by (3.18), which satisfies at least one of the conditions

(3.21)

Here the quantity on the left-hand side is defined by (3.19), and the remaining constants are user-supplied.

We briefly comment on the determination of the diagonal matrix and the regularization parameter in the first system of equations (3.4)(i) that we solve. Before solving (3.4)(i), we compute an approximate solution, given by (2.20), of the minimization problem (1.9) as described in Section 2, and retain the corresponding value of the regularization parameter. Let the initial vector be the orthogonal projection of this approximate solution onto the set (1.5). If it is a sufficiently accurate approximate solution of (1.7), then we are done; otherwise we improve it by the method described in this section. Define the vector (3.18) in terms of this projection, and determine the barrier parameter by (3.20). We now can determine and solve (3.4). The following algorithm summarizes how the computations are organized.

ALGORITHM 3.1. Constrained Tikhonov Regularization

1. Input: the data, the norm estimate Δ, and the user-supplied constants of the method.
Output: approximate solution of (1.7).

2. Apply the method of Section 2 to determine an approximate solution of (1.9). Compute the associated orthogonal projection onto the set (1.5). If this projection is a sufficiently accurate approximate solution of (1.7), then exit.

3. Determine the initial positive approximate solution by (3.18), and compute the initial barrier parameter as described above. Define the linear system (3.4)(i).

4. Compute the approximate solution, given by (3.15), of the linear system (3.4)(i), with the regularization parameter and the number of Lanczos tridiagonalization steps chosen so that the inequalities (3.13) and (3.14) hold.

5. Determine the new approximate solution according to (3.16) with the term given by (3.17), and using (3.18). Compute the quantity (3.19). If the current pair satisfies one of the inequalities (3.21), then accept the vector as an approximate solution of (1.7) and exit.

6. Update the barrier parameter by (3.20). Define a new linear system of equations (3.4)(i). Go to 4.

4. Computed examples. We illustrate the performance of Algorithm 3.1 when applied

to a few typical linear discrete ill-posed problems, such that the desired solution of the associated linear system of equations with error-free right-hand side (1.2) is known to be nonnegative. All computations were carried out using Matlab with approximately 16 significant decimal digits. In all examples, the initial values were chosen so that the initial approximation of the regularization parameter is larger than the largest zero of the function used by the zero-finder. The error vectors used in the examples have normally distributed random entries with zero mean and are scaled so that the error is of desired norm.

FIG. 4.1. Example 4.1: Solution of the error-free linear system (1.2) (blue curve), approximate solution determined without imposing nonnegativity in Step 2 of Algorithm 3.1 (black curve), projected approximate solution determined in Step 2 of Algorithm 3.1 (magenta curve), and approximate solution determined by Steps 4-6 of Algorithm 3.1 (red curve).

Example 4.1. Consider the Fredholm integral equation of the first kind

(4.1)

discussed by Phillips [12]. Its solution, kernel and right-hand side are given in [12].

FIG. 4.2. Example 4.1: Blow-up of Figure 4.1.

The Matlab code phillips from the Regularization Tools package [9] determines a discretization of the solution (4.2). We consider this discretization the exact solution. An error vector is added to the error-free right-hand side to give the right-hand side of (1.1); the relative error so introduced is fairly large in this example.

Step 2 of Algorithm 3.1 yields an approximate solution, defined by (2.20), using only a small number of Lanczos bidiagonalization steps; thus only a few matrix-vector product evaluations are required with each of the matrices A and A^T. Since this vector is not required to be nonnegative, it represents an oscillatory approximate solution of (1.2); see the black curves in Figures 4.1 and 4.2.

The orthogonal projection of this vector onto the set (1.5) is displayed by the magenta curves in Figures 4.1 and 4.2. Note that it agrees with the unprojected vector wherever the latter is nonnegative. Clearly, the projection is a better approximation of the exact solution.

Let the coefficients in the stopping criteria (3.21) for Algorithm 3.1 be given by the values used in [15], and fix the constant in (3.18). These parameters are required in Step 5 of Algorithm 3.1. The red curves of Figures 4.1 and 4.2 show the approximate solution determined by Steps 4-6 of the algorithm. Its computation required the execution of each of these steps twice, with a modest number of Lanczos tridiagonalization steps each time. In total, only a moderate number of matrix-vector product evaluations were required, including the work for computing the initial approximation. The relative error is smaller than for the projected vector; this is also obvious from Figure 4.2. Specifically, the method of Section 3 gives a nonnegative approximate solution whose error is a fraction of the error in the projected vector.

Example 4.2. This example differs from Example 4.1 only in that no error vector is added to the right-hand side vector determined by the code phillips; thus the right-hand side is error-free up to round-off. Rojas and Steihaug [15] have also considered this example. In the absence of an error, other than round-off errors, fairly stringent stopping criteria should be used. Step 2 of Algorithm 3.1 then yields,

FIG. 4.3. Example 4.2: Solution of the error-free linear system (1.2) (blue curve), approximate solution determined without imposing nonnegativity in Step 2 of Algorithm 3.1 (black curve), projected approximate solution determined in Step 2 of Algorithm 3.1 (magenta curve), and approximate solution determined by Steps 4-6 of Algorithm 3.1 (red curve).

after a number of Lanczos bidiagonalization steps, the approximate solution of (1.9). The associated orthogonal projection onto (1.5) has a smaller relative error. Rojas and Steihaug [15] report the relative error in the approximate solution determined by their method and the number of matrix-vector product evaluations its computation requires.

The approximate solution can be improved by the method of Section 3, however, at a fairly high price: a further reduction of the relative error demands many additional matrix-vector product evaluations.

We conclude that when the relative error in the right-hand side is fairly large, such as in Example 4.1, the method of the present paper can determine an approximate solution with significantly smaller error than the projected approximate solution. However, when the relative error in the right-hand side is small, then the method of the present paper might only give minor improvements at high cost.

Example 4.3. Consider the blur- and noise-free image shown in Figure 4.5. The figure depicts three hemispheres, but because of the scaling of the axes, they look like hemi-ellipsoids. The pixel values of the image are stored row-wise in a vector, which we subsequently scale to have unit length. The image in Figure 4.5 is assumed not to be available; only the blurring operator and a contaminated version of the image, displayed in Figure 4.6, are known. We would like to restore the available image in Figure 4.6 to obtain

FIG. 4.4. Example 4.2: Blow-up of Figure 4.3.

FIG. 4.5. Example 4.3: Blur- and noise-free image.

(an approximation of) the image in Figure 4.5.

The image in Figure 4.6 is contaminated by noise and blur, with the blurring operator represented by a nonsymmetric Toeplitz matrix of ill-determined rank; thus, the matrix is numerically singular. Due to the special structure of the matrix, only its first row and column have to be stored. Matrix-vector products with the matrix and its transpose are computed by using the fast

FIG. 4.6. Example 4.3: Blurred and noisy image.

FIG. 4.7. Example 4.3: Computed approximate solution without positivity constraint (2.20).

Fourier transform. The blurred but noise-free image is represented by a vector. The error vector represents noise and has normally distributed entries with zero mean; it is scaled to yield the desired relative error in the available right-hand side vector. The latter represents the blurred and noisy image shown in Figure 4.6.
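Matrix-vector products with a Toeplitz matrix, as required here, are commonly evaluated in O(n log n) operations by embedding the Toeplitz matrix in a circulant matrix and applying the fast Fourier transform. A sketch of the one-dimensional case (our own illustration; the two-dimensional blurring operators of these examples are handled analogously, block-wise):

```python
import numpy as np

def toeplitz_matvec(c, r, x):
    """Product T @ x, where T is Toeplitz with first column c and first
    row r (with c[0] == r[0]), via circulant embedding and the FFT."""
    n = len(c)
    # first column of the 2n x 2n circulant embedding: c, a free entry, r reversed
    col = np.concatenate([c, [0.0], r[-1:0:-1]])
    xx = np.concatenate([x, np.zeros(n)])
    y = np.fft.ifft(np.fft.fft(col) * np.fft.fft(xx)).real
    return y[:n]

# compare against a dense Toeplitz product
rng = np.random.default_rng(3)
n = 8
c = rng.standard_normal(n)                                # first column
r = np.concatenate([[c[0]], rng.standard_normal(n - 1)])  # first row
x = rng.standard_normal(n)
T = np.array([[c[i - j] if i >= j else r[j - i] for j in range(n)]
              for i in range(n)])
assert np.allclose(toeplitz_matvec(c, r, x), T @ x)
```

Only the first row and column of the Toeplitz matrix are stored, in agreement with the remark above.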

The method of Section 2 requires a modest number of Lanczos bidiagonalization steps

FIG. 4.8. Example 4.3: Projected computed approximate solution without nonnegativity constraint.

FIG. 4.9. Example 4.3: Computed approximate solution with nonnegativity constraint (3.16).

to determine the vector given by (2.20) and displayed in Figure 4.7. Note the oscillations around the bases of the hemispheres. The nonnegative vector obtained by projecting this vector onto (1.5) is shown in Figure 4.8; its relative error is smaller.

The oscillations around the hemispheres can be reduced by the method of Section 3. With the parameter values used for this example, we obtain the vector given by (3.16), with a further reduced relative error. Figure 4.9 depicts this vector. Note that the oscillations around the bases of the hemispheres are essentially gone. The computation required the completion of Steps 4 and 5 of Algorithm 3.1 once, and demanded additional Lanczos tridiagonalization steps. The work is dominated by matrix-vector products with the blurring matrix or its transpose, including the matrix-vector product evaluations needed to compute the initial approximation.

FIG. 4.10. Example 4.4: Blur- and noise-free image.

50

100

150

200

250

50100150200250

F IG.4.11.Example4.4:Blurred and noisy image.50

FIG. 4.12. Example 4.4: Computed approximate solution without positivity constraint (2.20).

FIG. 4.13. Example 4.4: Projected computed approximate solution without nonnegativity constraint.

Example 4.4. The data for this example was developed at the US Air Force Phillips Laboratory and has been used to test the performance of several available algorithms for computing regularized nonnegative solutions. The data consists of the noise- and blur-free image of the satellite shown in Figure 4.10, the point spread function which defines the blurring operator, and the blurred and noisy image of the satellite displayed in Figure 4.11.

FIG. 4.14. Example 4.4: Computed approximate solution with nonnegativity constraint (3.16).

The pixel values for the noise- and blur-free image and for the contaminated image are stored in two vectors, the latter being the right-hand side of (1.1). The matrix in (1.1) represents the blurring operator and is determined by the point spread function; it is a block-Toeplitz matrix with Toeplitz blocks. The matrix is not explicitly stored; matrix-vector products with it and its transpose are evaluated by the fast Fourier transform. Similarly as in Example 4.3, the blurred noise-free image is represented by a vector, and we consider the difference between the contaminated and the blurred noise-free right-hand sides to be noise. We found this difference to be of significant relative norm; thus the right-hand side is contaminated by a significant amount of noise.

Assume that the blur- and noise-free image of Figure 4.10 is not available. Given the contaminated image, the blurring operator, and an estimate of the solution norm, we would like to determine an approximation of this image.

The method of Section 2 requires a number of Lanczos bidiagonalization steps to determine the vector given by (2.20) and shown in Figure 4.12. The projection of this vector onto the set (1.5) has a smaller relative error, as expected. Figure 4.13 displays the projection. We remark that the gray background in Figure 4.12 is of no practical significance. It is caused by negative entries in the computed vector, which is rescaled by Matlab to have entries in the interval [0,255] before plotting. In particular, zero entries are mapped to a positive pixel value and are displayed in gray.

The accuracy of the approximate solution can be improved by the method of Section 3. We use the same values of the method parameters as in Example 4.3. After execution of Steps 4-6 of Algorithm 3.1 followed by Steps 4-5, a termination condition is satisfied, and the algorithm yields the approximate solution shown in Figure 4.14. Step 4 requires Lanczos tridiagonalization steps both times it is executed; the work is dominated by matrix-vector product evaluations with the blurring matrix or its transpose, including the matrix-vector products required to determine the initial approximations.

The relative error in the computed solution is smaller than the relative errors reported for several methods recently considered in the literature, for instance, the methods presented in [8] and the method of [15].

We conclude this section with some comments on the storage requirement of the method of the present paper. The implementation used for the numerical examples reorthogonalizes the columns of the matrices computed in the decompositions (2.8) and (3.6). This secures numerical orthonormality of the columns and may somewhat reduce the number of matrix-vector products required to solve the problems (compared with no reorthogonalization). However, reorthogonalization requires storage of all the generated columns. For instance, in Example 4.3, Lanczos bidiagonalization with reorthogonalization requires storage of the computed orthonormal bases, and Lanczos tridiagonalization with reorthogonalization requires storage of the corresponding basis; the latter matrix may overwrite the former. We generally apply reorthogonalization when it is important to keep the number of matrix-vector product evaluations as small as possible, and when sufficient computer storage is available for the basis matrices. The effect of the loss of numerical orthogonality that may arise when no reorthogonalization is carried out requires further study. Without reorthogonalization, the method of the present paper can be implemented to require storage of only a few basis columns simultaneously, at the expense of having to compute the decomposition matrices twice. The scheme by Rojas and Steihaug [15] is based on the implicitly restarted Arnoldi method, and therefore its storage requirement can be kept below a predetermined bound.
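The reorthogonalization mentioned above amounts to projecting each new Lanczos vector against all previously generated columns; repeating the projection once is a common safeguard against cancellation. A sketch under these assumptions (our own illustration, not the authors' implementation):

```python
import numpy as np

def reorthogonalize(Q, w):
    """Remove from w its components along the orthonormal columns of Q.
    The projection is applied twice to guard against cancellation."""
    for _ in range(2):
        w = w - Q @ (Q.T @ w)
    return w

rng = np.random.default_rng(4)
Q, _ = np.linalg.qr(rng.standard_normal((50, 10)))  # orthonormal columns
w = rng.standard_normal(50)
w = reorthogonalize(Q, w)
# w is now numerically orthogonal to every column of Q
assert np.max(np.abs(Q.T @ w)) < 1e-12
```

Storing Q in full is exactly the memory cost discussed above; without it, only the most recent Lanczos vectors need to be kept.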

5. Conclusion. The computed examples illustrate that our numerical method for Tikhonov regularization with nonnegativity constraint can give a more pleasing approximation of the exact solution than the scheme of Section 2, when the latter gives an oscillatory solution. Our method is closely related to a scheme recently proposed by Rojas and Steihaug [15]. A careful comparison between their method and ours is difficult, because few details on the numerical experiments are provided in [15]. Nevertheless, we feel that our approach to Tikhonov regularization with nonnegativity constraint, based on the connection between orthogonal polynomials, Gauss quadrature and the Lanczos process, is of independent interest. Moreover, Examples 4.2 and 4.4 indicate that our method may be competitive.

REFERENCES

[1] G. F. Ahmad, D. H. Brooks, and R. S. MacLeod, An admissible solution approach to inverse electrocardiography, Ann. Biomed. Engrg., 26 (1998), pp. 278-292.

[2] M. Bertero and P. Boccacci, Introduction to Inverse Problems in Imaging, Institute of Physics Publishing, Bristol, 1998.

[3] D. Calvetti, G. H. Golub, and L. Reichel, Estimation of the L-curve via Lanczos bidiagonalization, BIT, 39 (1999), pp. 603-619.

[4] D. Calvetti and L. Reichel, Gauss quadrature applied to trust region computations, Numer. Algorithms, 34 (2003), pp. 85-102.

[5] D. Calvetti and L. Reichel, Tikhonov regularization with a solution constraint, SIAM J. Sci. Comput., 26 (2004), pp. 224-239.

[6] G. H. Golub and G. Meurant, Matrices, moments and quadrature, in Numerical Analysis 1993, D. F. Griffiths and G. A. Watson, eds., Longman, Essex, England, 1994, pp. 105-156.

[7] G. H. Golub and U. von Matt, Quadratically constrained least squares and quadratic problems, Numer. Math., 59 (1991), pp. 561-580.

[8] M. Hanke, J. Nagy, and C. Vogel, Quasi-Newton approach to nonnegative image restorations, Linear Algebra Appl., 316 (2000), pp. 223-236.

[9] P. C. Hansen, Regularization tools: A Matlab package for analysis and solution of discrete ill-posed problems, Numer. Algorithms, 6 (1994), pp. 1-35. Software is available in Netlib at http://www.netlib.org.

[10] J. G. Nagy, RestoreTools: An object oriented Matlab package for image restoration. Available at http://www.mathcs.emory.edu/~nagy/.

[11] J. Nagy and Z. Strakos, Enforcing nonnegativity in image reconstruction algorithms, in Mathematical Modeling, Estimation and Imaging, D. C. Wilson et al., eds., SPIE 4121, The International Society for Optical Engineering, Bellingham, WA, 2000, pp. 182-190.

[12] D. L. Phillips, A technique for the numerical solution of certain integral equations of the first kind, J. ACM, 9 (1962), pp. 84-97.

[13] M. Rojas, LSTRS Matlab software. Available at http://www.math.wfu.edu/~mrojas/software.html.

[14] M. Rojas and D. C. Sorensen, A trust-region approach to regularization of large-scale discrete forms of ill-posed problems, SIAM J. Sci. Comput., 23 (2002), pp. 1842-1860.

[15] M. Rojas and T. Steihaug, An interior-point trust-region-based method for large-scale non-negative regularization, Inverse Problems, 18 (2002), pp. 1291-1307.
