Title of article :
Numerical optimization for the calculus of variations by gradients on non-Hilbert Sobolev spaces using conjugate gradients and normalized differential equations of steepest descent
Author/Authors :
Ivie Stein Jr.
Issue Information :
Journal with serial number, year 2009
Pages :
7
From page :
665
To page :
671
Abstract :
The purpose of this paper is to illustrate the application of numerical optimization methods to nonquadratic functionals defined on non-Hilbert Sobolev spaces. These methods use a gradient defined on a norm-reflexive, and hence strictly convex, normed linear space. This gradient was defined by Michael Golomb and Richard A. Tapia in [M. Golomb, R.A. Tapia, The metric gradient in normed linear spaces, Numer. Math. 20 (1972) 115–124]; it is the same gradient described by Jean-Paul Penot in [J.P. Penot, On the convergence of descent algorithms, Comput. Optim. Appl. 23 (3) (2002) 279–284]. In this paper we restrict our attention to variational problems with zero boundary values; nonzero boundary value problems can be converted to zero boundary value problems by an appropriate transformation of the dependent variables, although the original functional changes under such a transformation.

The connection to the calculus of variations is the following: the notion of a relative minimum in the Sobolev norm, for p positive and large and with only function values and first derivatives, is related to the classical weak relative minimum in the calculus of variations. The motivation for minimizing nonquadratic functionals on these non-Hilbert Sobolev spaces is twofold. First, a norm equivalent to this Sobolev norm approaches the norm used for weak relative minima in the calculus of variations as p approaches infinity. Second, the Sobolev norm is both norm-reflexive and strictly convex, so that the gradient set in such a non-Hilbert Sobolev space is a singleton; hence the gradient exists and is unique in this non-Hilbert normed linear space.
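To fix notation, the following is a sketch, not quoted from the paper: the norm below is the standard W^{1,p} norm on an interval with function values and first derivatives only, and the metric-gradient formula is one common steepest-ascent formulation in the spirit of Golomb and Tapia.

```latex
% Sobolev norm with function values and first derivatives only:
\|u\|_{1,p} \;=\; \Bigl( \int_a^b |u(x)|^p + |u'(x)|^p \, dx \Bigr)^{1/p},
\qquad 1 < p < \infty .
% The equivalent norm \max(\|u\|_p, \|u'\|_p) tends, as p \to \infty, to
\max\bigl( \|u\|_\infty ,\, \|u'\|_\infty \bigr),
% the C^1-type norm that defines weak relative minima.
% Metric gradient of a functional J at u (steepest-ascent formulation):
\nabla J(u) \;=\; \|J'(u)\|_* \, v ,
\qquad v \in \operatorname*{arg\,max}_{\|w\|_{1,p} = 1} \langle J'(u), w \rangle ,
% a singleton when the space is reflexive and the norm strictly convex.
```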
Two gradient minimization methods are presented here: conjugate gradient methods, and an approach that uses differential equations of steepest descent. The Hilbert space conjugate gradient method of James Daniel in [J. Daniel, The Approximate Minimization of Functionals, Prentice-Hall, Englewood Cliffs, New Jersey, 1971] is extended to a conjugate gradient procedure for a non-Hilbert normed linear space; see [I. Stein Jr., Conjugate gradient methods in Banach spaces, Nonlinear Anal. 63 (2005) e2621–e2628], where local convergence theorems are given. The approach using a differential equation of steepest descent is motivated and described by James Eells Jr. in [J. Eells Jr., A setting for global analysis, Bull. Amer. Math. Soc. 72 (1966) 751–807]. A normalized differential equation of steepest descent is also used as a numerical minimization procedure, in connection with starting methods such as the higher-order Runge–Kutta methods described by E. Baylis Shanks in [E.B. Shanks, Solutions of differential equations by evaluations of functions, Math. Comput. 20 (1966) 21–38] and higher-order multistep methods such as the Adams–Bashforth methods described by Fred T. Krogh in [F.T. Krogh, Predictor-corrector methods of high order with improved stability characteristics, J. Assoc. Comput. Mach. 13 (1966) 374–385]. Efficiency in steepest descent is the goal here: by taking a larger step size with a higher-order numerical method such as Adams–Bashforth, the differential equation of steepest descent approach turns out to be more efficient and accurate than iterative steepest descent of the type used by Cauchy in 1847, by Haskell B. Curry in [H.B. Curry, The method of steepest descent for non-linear minimization problems, Quart. Appl. Math. 2 (1944) 258–261], and by Richard H. Byrd and Richard A. Tapia in [R.H. Byrd, R.A. Tapia, An extension of Curry's theorem to steepest descent in normed linear spaces, Math. Programming 9 (2) (1975) 247–254]. S.I. Al'ber and Ja.I. Al'ber in [S.I. Al'ber, Ja.I. Al'ber, Application of the method of differential descent to the solution of non-linear systems, Ž. Vyčisl. Mat. i Mat. Fiz. 7 (1967) 14–32 (in Russian)], among others, have also used the differential equation of steepest descent approach. Our numerical methods for solving the initial value problems in differential equations are carried out in non-Hilbert function spaces.

Examples are described for minimizing the arc length functional, minimizing surface area functionals in non-parametric form, and solving pendent and sessile drop problems, including boundary conditions that are not rotationally symmetric. The pendent and sessile drop problems are similar to those considered by Henry C. Wente in [H.C. Wente, The symmetry of sessile and pendent drops, Pacific J. Math. 88 (2) (1980) 387–397] and [H.C. Wente, The stability of the axially symmetric pendent drop, Pacific J. Math. 88 (2) (1980) 421–470], and by Robert Finn in [R. Finn, Equilibrium Capillary Surfaces, Springer-Verlag, New York, 1986]. In minimizing locally the sum of surface tension energy and potential energy due to gravity, subject to a fixed volume constraint, one can apply Courant's penalty method, described in the appendix of the lecture notes [R. Courant, Calculus of Variations and Supplementary Notes and Exercises, 1945–1946, revised and amended by Jürgen Moser, supplementary notes by Martin Kruskal and Hanan Rubin, Mathematics, New York University, New York, 1956–1957]. The numerical minimization is carried out in non-Hilbert function spaces for the penalty, or augmented, function. The Lagrange multiplier can then be computed from Courant's penalty method as described by Magnus R. Hestenes in [M.R. Hestenes, Optimization Theory: The Finite Dimensional Case, John Wiley & Sons, New York, 1975], p. 307. The more numerically stable method of multipliers of Hestenes and Powell can also be used to convert the constrained problem to an unconstrained one; it is described on pp. 307–308 of the same reference.
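As an illustration of the conjugate gradient procedure discussed above, here is a minimal finite-dimensional sketch: plain Fletcher–Reeves conjugate gradients with a backtracking line search, applied to a discretized arc length functional with zero boundary values. The ordinary Euclidean gradient stands in for the paper's W^{1,p} metric gradient, and the grid size, tolerances, and starting curve are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def J(u, h):
    # Discretized arc length of the graph of u on a uniform grid of spacing h
    du = np.diff(u) / h
    return np.sum(np.sqrt(1.0 + du**2)) * h

def gradJ(u, h):
    # Euclidean gradient of J with respect to the nodal values;
    # a stand-in here for the W^{1,p} metric gradient of the paper
    du = np.diff(u) / h
    s = du / np.sqrt(1.0 + du**2)
    g = np.zeros_like(u)
    g[1:] += s
    g[:-1] -= s
    g[0] = g[-1] = 0.0          # zero boundary values are held fixed
    return g

def fletcher_reeves(u, h, iters=200):
    g = gradJ(u, h)
    d = -g
    for _ in range(iters):
        if np.linalg.norm(g) < 1e-10:
            break
        if g.dot(d) >= 0.0:
            d = -g              # restart if d is not a descent direction
        t = 1.0                 # backtracking (Armijo) line search
        while J(u + t * d, h) > J(u, h) + 1e-4 * t * g.dot(d):
            t *= 0.5
        u = u + t * d
        g_new = gradJ(u, h)
        d = -g_new + (g_new.dot(g_new) / g.dot(g)) * d   # Fletcher-Reeves beta
        g = g_new
    return u

x = np.linspace(0.0, 1.0, 51)
h = x[1] - x[0]
u = fletcher_reeves(0.5 * np.sin(np.pi * x), h)
print(J(u, h))                  # approaches 1.0, the straight-line minimum
```

Roughly speaking, the extension to a non-Hilbert normed linear space replaces the Euclidean gradient and inner products above by the metric gradient and duality pairings; the precise procedure and its local convergence theorems are in the Stein reference cited above.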
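Similarly, a minimal sketch of the normalized differential equation of steepest descent, u'(t) = -grad J(u) / ||grad J(u)||, for the same discretized functional: a classical fourth-order Runge–Kutta step serves as the starter and a two-step Adams–Bashforth formula carries the integration, in the spirit of (but far simpler than) the Shanks and Krogh methods cited above. The fixed time step and iteration count are illustrative assumptions.

```python
import numpy as np

def J(u, h):
    du = np.diff(u) / h
    return np.sum(np.sqrt(1.0 + du**2)) * h

def gradJ(u, h):
    du = np.diff(u) / h
    s = du / np.sqrt(1.0 + du**2)
    g = np.zeros_like(u)
    g[1:] += s
    g[:-1] -= s
    g[0] = g[-1] = 0.0
    return g

def F(u, h):
    # Right-hand side of the normalized steepest descent equation
    g = gradJ(u, h)
    n = np.linalg.norm(g)
    return -g / n if n > 1e-12 else np.zeros_like(g)

def rk4_step(u, h, dt):
    # One fourth-order Runge-Kutta step, used as the starter
    k1 = F(u, h)
    k2 = F(u + 0.5 * dt * k1, h)
    k3 = F(u + 0.5 * dt * k2, h)
    k4 = F(u + dt * k3, h)
    return u + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

def descend(u, h, dt=0.02, steps=500):
    # Two-step Adams-Bashforth: u_{n+1} = u_n + dt*(3/2 f_n - 1/2 f_{n-1})
    f_prev = F(u, h)
    u = rk4_step(u, h, dt)
    for _ in range(steps - 1):
        f = F(u, h)
        u = u + dt * (1.5 * f - 0.5 * f_prev)
        f_prev = f
    return u

x = np.linspace(0.0, 1.0, 51)
h = x[1] - x[0]
u0 = 0.5 * np.sin(np.pi * x)
print(J(u0, h), "->", J(descend(u0, h), h))   # arc length decreases toward 1.0
```

Because the right-hand side is normalized, the flow moves at unit speed and will chatter within O(dt) of the minimizer; a practical implementation would shrink dt or stop once the gradient norm is small.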
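Finally, a minimal sketch of the penalty and multiplier machinery for the volume-constrained problem, in a one-dimensional finite-dimensional stand-in: the "energy" is the discretized arc length (a surface tension analog) and the "volume" is the area under the curve. The outer loop is the Hestenes–Powell multiplier update; the inner loop is plain gradient descent on the augmented function. All constants (penalty weight, target volume, step sizes) are illustrative assumptions.

```python
import numpy as np

def E(u, h):
    # Surface-tension stand-in: arc length of the graph of u
    du = np.diff(u) / h
    return np.sum(np.sqrt(1.0 + du**2)) * h

def gradE(u, h):
    du = np.diff(u) / h
    s = du / np.sqrt(1.0 + du**2)
    g = np.zeros_like(u)
    g[1:] += s
    g[:-1] -= s
    g[0] = g[-1] = 0.0
    return g

def V(u, h):
    return np.sum(u[1:-1]) * h      # trapezoid rule; boundary values are zero

def gradV(u, h):
    g = np.full_like(u, h)
    g[0] = g[-1] = 0.0
    return g

def min_augmented(u, h, V0, lam, c, iters=5000, step=0.002):
    # Gradient descent on L(u) = E(u) + lam*(V(u)-V0) + (c/2)*(V(u)-V0)**2
    for _ in range(iters):
        r = V(u, h) - V0
        u = u - step * (gradE(u, h) + (lam + c * r) * gradV(u, h))
    return u

x = np.linspace(0.0, 1.0, 51)
h = x[1] - x[0]
u, lam, c, V0 = np.zeros(51), 0.0, 50.0, 0.3
for _ in range(20):                  # method of multipliers (outer loop)
    u = min_augmented(u, h, V0, lam, c)
    lam += c * (V(u, h) - V0)        # multiplier update from the residual
print(V(u, h), lam)                  # volume -> V0; lam -> Lagrange multiplier
```

With lam held at zero this reduces to Courant's pure penalty method, where c*(V(u)-V0) at the penalty minimizer approximates the Lagrange multiplier as c grows, in the manner Hestenes describes; the multiplier update above recovers the multiplier without driving c to the ill-conditioned large-penalty regime.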
Keywords :
Conjugate gradients, Calculus of variations, Steepest descent, Numerical optimization
Journal title :
Nonlinear Analysis: Theory, Methods & Applications
Serial Year :
2009
Record number :
861803