Assume that you have a guess U^(n) of the solution. If U^(n) is close enough to the exact solution, an improved approximation U^(n+1) is obtained by solving the linearized problem

  (∂ρ(U^(n))/∂U) (U^(n+1) − U^(n)) = −α ρ(U^(n)),

where α is a positive number. (It is not necessary that the discretized problem ρ(U) = 0 have a solution even if the continuous problem does. In this case, the Gauss-Newton iteration tends to the minimizer of the residual, i.e., the solution of min_U ||ρ(U)||.) It is well known that for sufficiently small α,

  ||ρ(U^(n+1))|| < ||ρ(U^(n))||,

and

  p_n = (∂ρ(U^(n))/∂U)^(−1) ρ(U^(n))

is called a descent direction for ||ρ(U)||, where ||·|| is the l2-norm. The iteration is U^(n+1) = U^(n) − α p_n, where α ≤ 1 is chosen as large as possible such that the step has a reasonable descent.

The Gauss-Newton method is local, and convergence is assured only when U^(0) is close enough to the solution. In general, the first guess may be outside the region of convergence. To improve convergence from bad initial guesses, a damping strategy is implemented for choosing α: the Armijo-Goldstein line search. It chooses the largest damping coefficient α out of the sequence 1, 1/2, 1/4, … such that the following inequality holds:

  ||ρ(U^(n))|| − ||ρ(U^(n) − α p_n)|| ≥ (α/2) ||ρ(U^(n))||,

which guarantees a reduction of the residual norm by at least the factor α/2, i.e., ||ρ(U^(n+1))|| ≤ (1 − α/2) ||ρ(U^(n))||. Note that each step of the line-search algorithm requires an evaluation of the residual ρ(U^(n) − α p_n). An important point of this strategy is that when U^(n) approaches the solution, then α → 1, and thus the convergence rate increases. If there is a solution to ρ(U) = 0, the scheme ultimately recovers the quadratic convergence rate of the standard Newton iteration.
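The damped iteration with the Armijo-Goldstein line search described above can be sketched in a few lines. The following Python/NumPy fragment is illustrative only, not toolbox code; the residual rho and Jacobian jac below are made-up stand-ins for the FEM residual ρ(U) and its (approximate) Jacobian.

```python
import numpy as np

def damped_gauss_newton(rho, jac, U0, tol=1e-10, max_iter=50):
    """Damped (Gauss-)Newton iteration with Armijo-Goldstein line search.

    rho : callable returning the residual vector rho(U)
    jac : callable returning (an approximation of) the Jacobian d(rho)/dU
    """
    U = np.asarray(U0, dtype=float)
    for _ in range(max_iter):
        r = rho(U)
        norm_r = np.linalg.norm(r)          # l2-norm of the residual
        if norm_r < tol:
            break
        p = np.linalg.solve(jac(U), r)      # descent direction p_n
        # Armijo-Goldstein: largest alpha in 1, 1/2, 1/4, ... such that
        # ||rho(U)|| - ||rho(U - alpha*p)|| >= (alpha/2) * ||rho(U)||
        alpha = 1.0
        while alpha > 2**-20:
            if norm_r - np.linalg.norm(rho(U - alpha * p)) >= 0.5 * alpha * norm_r:
                break
            alpha /= 2
        U = U - alpha * p
    return U

# Model problem (illustrative, not a FEM residual):
rho = lambda U: np.array([U[0] + U[1]**2 - 3.0, U[0] * U[1] - 1.0])
jac = lambda U: np.array([[1.0, 2.0 * U[1]], [U[1], U[0]]])
U = damped_gauss_newton(rho, jac, np.array([2.0, 2.0]))
```

Near the solution the full step alpha = 1 is accepted and the iteration reduces to undamped Newton, which is where the quadratic convergence rate comes from.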
Closely related to the above problem is the choice of the initial guess U^(0). By default, the solver sets U^(0) = 0, then assembles the FEM matrices K and F and computes

  U^(1) = K^(−1) F.

The damped Gauss-Newton iteration is then started with U^(1), which should be a better guess than U^(0). If the boundary conditions do not depend on the solution u, then U^(1) satisfies them even if U^(0) does not. Furthermore, if the equation is linear, then U^(1) is the exact FEM solution, and the solver does not enter the Gauss-Newton loop.

There are situations where U^(0) = 0 makes no sense or convergence is impossible. In some situations you may already have a good approximation, and the nonlinear solver can be started with it, avoiding the slow convergence regime. This idea is used in the adaptive mesh generator: it computes a solution on a mesh, evaluates the error, and may refine certain triangles. The interpolant of the computed solution is a very good starting guess for the solution on the refined mesh.

In general, the exact Jacobian J_n = ∂ρ(U^(n))/∂U is not available. Approximation of J_n by finite differences in the following way is expensive but feasible. The ith column of J_n can be approximated by

  (ρ(U^(n) + ε φ_i) − ρ(U^(n))) / ε,

which implies the assembly of the FEM matrices for the triangles containing grid point i. A very simple approximation to J_n, which gives a fixed-
point iteration, is also possible as follows. Essentially, for a given U^(n), compute the FEM matrices K and F and set

  U^(n+1) = K^(−1) F.

This is equivalent to approximating the Jacobian with the stiffness matrix. Indeed, since ρ(U^(n)) = K U^(n) − F, putting J_n = K yields

  U^(n+1) = U^(n) − J_n^(−1) ρ(U^(n)) = U^(n) − K^(−1)(K U^(n) − F) = K^(−1) F.

In many cases the convergence rate is slow, but the cost of each iteration is cheap.
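This fixed-point variant amounts to reassembling K and F at the current iterate and solving one linear system per step. A minimal Python/NumPy illustration, with a made-up solution-dependent matrix K(U) standing in for the FEM stiffness matrix:

```python
import numpy as np

def K(U):
    """Illustrative solution-dependent 'stiffness' matrix K(U)
    (a stand-in for the assembled FEM matrix, not toolbox code)."""
    return np.array([[2.0 + U[0]**2, -1.0],
                     [-1.0, 2.0 + U[1]**2]])

F = np.array([1.0, 1.0])   # load vector, constant in this example

# Fixed-point iteration: U_{n+1} = K(U_n)^{-1} F
U = np.zeros(2)            # default initial guess U^(0) = 0
for _ in range(60):
    U = np.linalg.solve(K(U), F)

residual = K(U) @ U - F    # rho(U) = K(U)U - F
```

Each step costs only one assembly and one linear solve, but the contraction rate is linear rather than quadratic, matching the remark above that the iterations are cheap but convergence can be slow.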
The nonlinear solver implemented in the PDE Toolbox also provides for a compromise between the two extremes. To compute the derivative of the mapping U → KU, proceed as follows. (The a term has been omitted for clarity, but it appears again in the final result below.)

  ∂(KU)_i/∂U_j = ∂/∂U_j Σ_l ( ∫ c(u) ∇φ_l · ∇φ_i dx ) U_l
               = ∫ c(u) ∇φ_j · ∇φ_i dx + Σ_l ( ∫ (∂c/∂u) φ_j ∇φ_l · ∇φ_i dx ) U_l

The first integral term is nothing more than K_{i,j}. The second term is "lumped," i.e., replaced by a diagonal matrix that contains the row sums. Since Σ_j φ_j = 1, the second term is approximated by

  δ_{i,j} Σ_l ( ∫ (∂c/∂u) ∇φ_l · ∇φ_i dx ) U_l,

which is the ith component of K^(∂c/∂u) U, where K^(∂c/∂u) is the stiffness matrix associated with the coefficient ∂c/∂u rather than c.
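The effect of "lumping" — replacing a matrix by the diagonal of its row sums — can be checked numerically. The sketch below (plain NumPy, with an illustrative matrix) shows that the lumped matrix agrees exactly with the original when applied to a constant vector, which is the property the approximation exploits via Σ_j φ_j = 1:

```python
import numpy as np

A = np.array([[4.0, -1.0, -1.0],
              [-1.0, 4.0, -1.0],
              [-1.0, -1.0, 4.0]])

# "Lumping": replace A by the diagonal matrix of its row sums
A_lumped = np.diag(A.sum(axis=1))

# The lumped matrix reproduces the action of A on constant vectors,
# so the approximation is exact where the solution is locally constant.
ones = np.ones(3)
r_full = A @ ones          # [2. 2. 2.]
r_lump = A_lumped @ ones   # [2. 2. 2.]
```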
The same reasoning can be applied to the derivative of the mapping U → M^(a)U, where M^(a) is the mass matrix associated with the coefficient a. Finally, note that the derivative of the mapping U → F is exactly M^(∂f/∂u), the mass matrix associated with the coefficient ∂f/∂u. Thus the Jacobian of the residual ρ(U) is approximated by

  J = K^(c) + M^(a) − M^(∂f/∂u) + D,

where D is the diagonal matrix holding the lumped terms (the row sums of the ∂c/∂u and ∂a/∂u contributions above) and the differentiation is with respect to u. K and M designate stiffness and mass matrices, and their indices designate the coefficients with respect to which they are assembled. At each Gauss-Newton iteration, the nonlinear solver assembles the matrices corresponding to the linearized equations and then produces the approximate Jacobian. The differentiations of the coefficients are done numerically.

In the general setting of elliptic systems, the boundary conditions are appended to the stiffness matrix to form the full linear system

  K̃ Ũ = [K  H'; H  0] [U; μ] = [F; R] = F̃,

where the coefficients of K̃ and F̃ may depend on the solution Ũ. The "lumped" approach approximates the derivative mapping of the residual by

  [J  H'; H  0].

The nonlinearities of the boundary conditions and the dependencies of the coefficients on the derivatives of the solution are not properly linearized by this scheme. When such nonlinearities are strong, the scheme reduces to the fixed-point iteration and may converge slowly or not at all. When the boundary conditions are linear, they do not affect the convergence properties of the iteration schemes. In the Neumann case they are invisible (H is an empty matrix), and in the Dirichlet case they merely state that the residual is zero on the corresponding boundary points.

Adaptive Mesh Refinement

The t
oolbox has a function for global, uniform mesh refinement. It divides each triangle into four similar triangles by creating new corners at the midsides, adjusting for curved boundaries. You can assess the accuracy of the numerical solution by comparing results from a sequence of successively refined meshes. If the solution is smooth enough, more accurate results may be obtained by extrapolation.

The solutions of the toolbox equation often have geometric features like localized strong gradients. An example of engineering importance in elasticity is the stress concentration occurring at reentrant corners, such as the MATLAB favorite, the L-shaped membrane. In such cases it is more economical to refine the mesh selectively, i.e., only where it is needed. When the selection is based on estimates of the errors in the computed solutions (a posteriori estimates), we speak of adaptive mesh refinement. See
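The uniform refinement step described above — splitting each triangle into four similar triangles via its midside points — can be sketched as follows (plain NumPy, not toolbox code; the adjustment for curved boundaries is omitted):

```python
import numpy as np

def refine_triangle(p1, p2, p3):
    """Split a triangle into four similar triangles by creating
    new corners at the edge midpoints (uniform refinement)."""
    p1, p2, p3 = (np.asarray(p, dtype=float) for p in (p1, p2, p3))
    m12, m23, m31 = (p1 + p2) / 2, (p2 + p3) / 2, (p3 + p1) / 2
    return [(p1, m12, m31),   # three corner triangles ...
            (m12, p2, m23),
            (m31, m23, p3),
            (m12, m23, m31)]  # ... plus the inner midside triangle

def area(tri):
    a, b, c = tri
    u, v = b - a, c - a
    return 0.5 * abs(u[0] * v[1] - u[1] * v[0])  # 2-D cross product

tris = refine_triangle([0.0, 0.0], [1.0, 0.0], [0.0, 1.0])
areas = [area(t) for t in tris]   # each child has 1/4 of the parent area
```

Because the four children are similar to the parent, repeated uniform refinement does not degrade triangle shape quality, which is what makes the refined-mesh sequence suitable for accuracy assessment and extrapolation.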