lsqnonlin

Solve nonlinear least-squares (nonlinear data-fitting) problems

Equation

Solves nonlinear least-squares curve fitting problems of the form

$$\min_x \|f(x)\|_2^2 = \min_x \left( f_1(x)^2 + f_2(x)^2 + \cdots + f_n(x)^2 \right)$$

with optional lower and upper bounds lb and ub on the components of x.

x, lb, and ub can be vectors or matrices; see Matrix Arguments.

Syntax

x = lsqnonlin(fun,x0)
x = lsqnonlin(fun,x0,lb,ub)
x = lsqnonlin(fun,x0,lb,ub,options)
x = lsqnonlin(problem)
[x,resnorm] = lsqnonlin(...)
[x,resnorm,residual] = lsqnonlin(...)
[x,resnorm,residual,exitflag] = lsqnonlin(...)
[x,resnorm,residual,exitflag,output] = lsqnonlin(...)
[x,resnorm,residual,exitflag,output,lambda] = lsqnonlin(...)
[x,resnorm,residual,exitflag,output,lambda,jacobian] = lsqnonlin(...)

Description

lsqnonlin solves nonlinear least-squares problems, including nonlinear data-fitting problems.

Rather than compute the value $\|f(x)\|_2^2$ (the sum of squares), lsqnonlin requires the user-defined function to compute the vector-valued function

$$f(x) = \begin{bmatrix} f_1(x) \\ f_2(x) \\ \vdots \\ f_n(x) \end{bmatrix}.$$

Then, in vector terms, you can restate this optimization problem as

$$\min_x \|f(x)\|_2^2 = \min_x \left( f_1(x)^2 + f_2(x)^2 + \cdots + f_n(x)^2 \right)$$
where x is a vector or matrix and f(x) is a function that returns a vector or matrix value. For details of matrix values, see Matrix Arguments.

x = lsqnonlin(fun,x0) starts at the point x0 and finds a minimum of the sum of squares of the functions described in fun. fun should return a vector of values and not the sum of squares of the values. (The algorithm implicitly computes the sum of squares of the components of fun(x).)

x = lsqnonlin(fun,x0,lb,ub) defines a set of lower and upper bounds on the design variables in x, so that the solution is always in the range lb ≤ x ≤ ub.

x = lsqnonlin(fun,x0,lb,ub,options) minimizes with the optimization options specified in options. Use optimoptions to set these options. Pass empty matrices for lb and ub if no bounds exist.
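
For example, a minimal sketch that displays output at each iteration while leaving x unbounded (assuming myfun and x0 are defined as described below):

options = optimoptions('lsqnonlin','Display','iter');
x = lsqnonlin(@myfun,x0,[],[],options);   % [] for lb and ub means no bounds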

x = lsqnonlin(problem) finds the minimum for problem, where problem is a structure described in Input Arguments.

Create the problem structure by exporting a problem from the Optimization app, as described in Exporting Your Work.

[x,resnorm] = lsqnonlin(...) returns the value of the squared 2-norm of the residual at x: sum(fun(x).^2).

[x,resnorm,residual] = lsqnonlin(...) returns the value of the residual fun(x) at the solution x.

[x,resnorm,residual,exitflag] = lsqnonlin(...) returns a value exitflag that describes the exit condition.

[x,resnorm,residual,exitflag,output] = lsqnonlin(...) returns a structure output that contains information about the optimization.

[x,resnorm,residual,exitflag,output,lambda] = lsqnonlin(...) returns a structure lambda whose fields contain the Lagrange multipliers at the solution x.

[x,resnorm,residual,exitflag,output,lambda,jacobian] = lsqnonlin(...) returns the Jacobian of fun at the solution x.

    Note:   If the specified input bounds for a problem are inconsistent, the output x is x0 and the outputs resnorm and residual are [].

    Components of x0 that violate the bounds lb ≤ x ≤ ub are reset to the interior of the box defined by the bounds. Components that respect the bounds are not changed.

Input Arguments

Function Arguments contains general descriptions of arguments passed into lsqnonlin. This section provides function-specific details for fun, options, and problem:

fun

The function whose sum of squares is minimized. fun is a function that accepts a vector x and returns a vector F, the objective functions evaluated at x. The function fun can be specified as a function handle to a file:

x = lsqnonlin(@myfun,x0)

where myfun is a MATLAB® function such as

function F = myfun(x)
F = ...            % Compute function values at x

fun can also be a function handle for an anonymous function.

x = lsqnonlin(@(x)sin(x.*x),x0);

If the user-defined values for x and F are matrices, they are converted to a vector using linear indexing.

    Note   The sum of squares should not be formed explicitly. Instead, your function should return a vector of function values. See Examples.

If the Jacobian can also be computed and the Jacobian option is 'on', set by

options = optimoptions('lsqnonlin','Jacobian','on')

the function fun must return, in a second output argument, the Jacobian value J, a matrix, at x. By checking the value of nargout, the function can avoid computing J when fun is called with only one output argument (in the case where the optimization algorithm only needs the value of F but not J).

function [F,J] = myfun(x)
F = ...          % Objective function values at x
if nargout > 1   % Two output arguments
   J = ...   % Jacobian of the function evaluated at x
end

If fun returns a vector (matrix) of m components and x has length n, where n is the length of x0, the Jacobian J is an m-by-n matrix where J(i,j) is the partial derivative of F(i) with respect to x(j). (The Jacobian J is the transpose of the gradient of F.)
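
For instance, the following hypothetical two-component residual (the Rosenbrock function written in least-squares form) returns its analytic 2-by-2 Jacobian on request:

function [F,J] = myfun(x)
F = [10*(x(2) - x(1)^2);      % F(1)
     1 - x(1)];               % F(2)
if nargout > 1                % Jacobian requested
   J = [-20*x(1)  10;         % [dF(1)/dx(1)  dF(1)/dx(2)]
        -1         0];        % [dF(2)/dx(1)  dF(2)/dx(2)]
end

Call it with the Jacobian option 'on' so lsqnonlin uses J instead of finite differences:

options = optimoptions('lsqnonlin','Jacobian','on');
x = lsqnonlin(@myfun,[-1;1],[],[],options);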

options

Options provides the function-specific details for the options values.

problem

objective

Objective function

x0

Initial point for x

lb

Vector of lower bounds

ub

Vector of upper bounds

solver

'lsqnonlin'

options

Options created with optimoptions
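
For example, a minimal sketch that builds the structure directly rather than exporting it from the Optimization app:

problem.objective = @(x) sin(x.*x);           % vector of residuals
problem.x0        = [1 2];                    % initial point
problem.solver    = 'lsqnonlin';
problem.options   = optimoptions('lsqnonlin');
x = lsqnonlin(problem);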

Output Arguments

Function Arguments contains general descriptions of arguments returned by lsqnonlin. This section provides function-specific details for exitflag, lambda, and output:

exitflag

Integer identifying the reason the algorithm terminated. The following lists the values of exitflag and the corresponding reasons the algorithm terminated:

1

Function converged to a solution x.

2

Change in x was less than the specified tolerance.

3

Change in the residual was less than the specified tolerance.

4

Magnitude of search direction was smaller than the specified tolerance.

0

Number of iterations exceeded options.MaxIter or number of function evaluations exceeded options.MaxFunEvals.

-1

Output function terminated the algorithm.

-2

Problem is infeasible: the bounds lb and ub are inconsistent.

-4

Line search could not sufficiently decrease the residual along the current search direction.

lambda

Structure containing the Lagrange multipliers at the solution x (separated by constraint type). The fields are

lower

Lower bounds lb

upper

Upper bounds ub

output

Structure containing information about the optimization. The fields of the structure are

firstorderopt

Measure of first-order optimality (trust-region-reflective algorithm, [ ] for others)

iterations

Number of iterations taken

funcCount

The number of function evaluations

cgiterations

Total number of PCG iterations (trust-region-reflective algorithm, [ ] for others)

stepsize

Final displacement in x (Levenberg-Marquardt algorithm)

algorithm

Optimization algorithm used

message

Exit message

Options

Optimization options. Set or change options using the optimoptions function. Some options apply to all algorithms, some are only relevant when you are using the trust-region-reflective algorithm, and others are only relevant when you are using the Levenberg-Marquardt algorithm. See Optimization Options Reference for detailed information.

Algorithm Options

Both algorithms use the following options:

Algorithm

Choose between 'trust-region-reflective' (default) and 'levenberg-marquardt'. Set the initial Levenberg-Marquardt parameter λ by setting Algorithm to a cell array such as {'levenberg-marquardt',.005}. The default λ = 0.01.

The Algorithm option specifies a preference for which algorithm to use. It is only a preference, because certain conditions must be met to use each algorithm. For the trust-region-reflective algorithm, the nonlinear system of equations cannot be underdetermined; that is, the number of equations (the number of elements of F returned by fun) must be at least as many as the length of x. The Levenberg-Marquardt algorithm does not handle bound constraints. For more information on choosing the algorithm, see Choosing the Algorithm.
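
For example, a minimal sketch that selects Levenberg-Marquardt with an initial λ of 0.005 (assuming myfun and x0 are defined):

options = optimoptions('lsqnonlin','Algorithm',{'levenberg-marquardt',.005});
x = lsqnonlin(@myfun,x0,[],[],options);   % no bounds with this algorithm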

DerivativeCheck

Compare user-supplied derivatives (gradients of objective or constraints) to finite-differencing derivatives. The choices are 'on' or the default 'off'.

Diagnostics

Display diagnostic information about the function to be minimized or solved. The choices are 'on' or the default 'off'.

DiffMaxChange

Maximum change in variables for finite-difference gradients (a positive scalar). The default is Inf.

DiffMinChange

Minimum change in variables for finite-difference gradients (a positive scalar). The default is 0.

Display

Level of display:

  • 'off' or 'none' displays no output.

  • 'iter' displays output at each iteration, and gives the default exit message.

  • 'iter-detailed' displays output at each iteration, and gives the technical exit message.

  • 'final' (default) displays just the final output, and gives the default exit message.

  • 'final-detailed' displays just the final output, and gives the technical exit message.

FinDiffRelStep

Scalar or vector step size factor. When you set FinDiffRelStep to a vector v, forward finite differences delta are

delta = v.*sign(x).*max(abs(x),TypicalX);

and central finite differences are

delta = v.*max(abs(x),TypicalX);

Scalar FinDiffRelStep expands to a vector. The default is sqrt(eps) for forward finite differences, and eps^(1/3) for central finite differences.

FinDiffType

Finite differences, used to estimate gradients, are either 'forward' (default), or 'central' (centered). 'central' takes twice as many function evaluations, but should be more accurate.

The algorithm is careful to obey bounds when estimating both types of finite differences. So, for example, it could take a backward, rather than a forward, difference to avoid evaluating at a point outside bounds.
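
For example, a minimal sketch that selects central differences with a larger relative step size:

options = optimoptions('lsqnonlin','FinDiffType','central','FinDiffRelStep',1e-4);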

FunValCheck

Check whether function values are valid. 'on' displays an error when the function returns a value that is complex, Inf, or NaN. The default 'off' displays no error.

Jacobian

If 'on', lsqnonlin uses a user-defined Jacobian (defined in fun), or Jacobian information (when using JacobMult), for the objective function. If 'off' (default), lsqnonlin approximates the Jacobian using finite differences.

MaxFunEvals

Maximum number of function evaluations allowed, a positive integer. The default is 100*numberOfVariables.

MaxIter

Maximum number of iterations allowed, a positive integer. The default is 400.

OutputFcn

Specify one or more user-defined functions that an optimization function calls at each iteration, either as a function handle or as a cell array of function handles. The default is none ([]). See Output Function.
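
A minimal sketch of an output function that halts the solver after five iterations (myoutfcn is a hypothetical name):

function stop = myoutfcn(x,optimValues,state)
stop = false;
if strcmp(state,'iter') && optimValues.iteration >= 5
    stop = true;              % request early termination
end

Pass it to the solver with options = optimoptions('lsqnonlin','OutputFcn',@myoutfcn).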

PlotFcns

Plots various measures of progress while the algorithm executes; select from predefined plots or write your own. Pass a function handle or a cell array of function handles. The default is none ([]):

  • @optimplotx plots the current point.

  • @optimplotfunccount plots the function count.

  • @optimplotfval plots the function value.

  • @optimplotresnorm plots the norm of the residuals.

  • @optimplotstepsize plots the step size.

  • @optimplotfirstorderopt plots the first-order optimality measure.

For information on writing a custom plot function, see Plot Functions.
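
For example, a minimal sketch that plots the current point and the residual norm at each iteration:

options = optimoptions('lsqnonlin','PlotFcns',{@optimplotx,@optimplotresnorm});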

TolFun

Termination tolerance on the function value, a positive scalar. The default is 1e-6.

TolX

Termination tolerance on x, a positive scalar. The default is 1e-6.

TypicalX

Typical x values. The number of elements in TypicalX is equal to the number of elements in x0, the starting point. The default value is ones(numberOfVariables,1). lsqnonlin uses TypicalX for scaling finite differences for gradient estimation.

Trust-Region-Reflective Algorithm Only

The trust-region-reflective algorithm uses the following options:

JacobMult

Function handle for Jacobian multiply function. For large-scale structured problems, this function computes the Jacobian matrix product J*Y, J'*Y, or J'*(J*Y) without actually forming J. The function is of the form

W = jmfun(Jinfo,Y,flag) 

where Jinfo contains the matrix used to compute J*Y (or J'*Y, or J'*(J*Y)). The first argument Jinfo must be the same as the second argument returned by the objective function fun, for example, by

[F,Jinfo] = fun(x)

Y is a matrix that has the same number of rows as there are dimensions in the problem. flag determines which product to compute:

  • If flag == 0 then W = J'*(J*Y).

  • If flag > 0 then W = J*Y.

  • If flag < 0 then W = J'*Y.

In each case, J is not formed explicitly. lsqnonlin uses Jinfo to compute the preconditioner. See Passing Extra Parameters for information on how to supply values for any additional parameters jmfun needs.

    Note   'Jacobian' must be set to 'on' for lsqnonlin to pass Jinfo from fun to jmfun.

See Minimization with Dense Structured Hessian, Linear Equalities and Jacobian Multiply Function with Linear Least Squares for similar examples.
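
A minimal sketch of a Jacobian multiply function, assuming Jinfo is simply the (sparse) Jacobian itself; a real structured problem would exploit its structure rather than store J:

function W = jmfun(Jinfo,Y,flag)
if flag == 0
    W = Jinfo'*(Jinfo*Y);     % J'*(J*Y)
elseif flag > 0
    W = Jinfo*Y;              % J*Y
else
    W = Jinfo'*Y;             % J'*Y
end

Enable it with options = optimoptions('lsqnonlin','Jacobian','on','JacobMult',@jmfun).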

JacobPattern

Sparsity pattern of the Jacobian for finite differencing. Set JacobPattern(i,j) = 1 when fun(i) depends on x(j). Otherwise, set JacobPattern(i,j) = 0. In other words, JacobPattern(i,j) = 1 when you can have ∂fun(i)/∂x(j) ≠ 0.

Use JacobPattern when it is inconvenient to compute the Jacobian matrix J in fun, though you can determine (say, by inspection) when fun(i) depends on x(j). lsqnonlin can approximate J via sparse finite differences when you give JacobPattern.

In the worst case, if the structure is unknown, do not set JacobPattern. The default behavior is as if JacobPattern is a dense matrix of ones. Then lsqnonlin computes a full finite-difference approximation in each iteration. This can be very expensive for large problems, so it is usually better to determine the sparsity structure.
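
For example, a minimal sketch for a problem in which fun(i) is assumed to depend only on x(i-1), x(i), and x(i+1) (a tridiagonal pattern):

n = 100;
Jpattern = spdiags(ones(n,3),-1:1,n,n);   % sparse tridiagonal pattern of ones
options = optimoptions('lsqnonlin','JacobPattern',Jpattern);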

MaxPCGIter

Maximum number of PCG (preconditioned conjugate gradient) iterations, a positive scalar. The default is max(1,numberOfVariables/2). For more information, see Algorithms.

PrecondBandWidth

Upper bandwidth of preconditioner for PCG, a nonnegative integer. The default PrecondBandWidth is Inf, which means a direct factorization (Cholesky) is used rather than the conjugate gradients (CG). The direct factorization is computationally more expensive than CG, but produces a better quality step towards the solution. Set PrecondBandWidth to 0 for diagonal preconditioning (upper bandwidth of 0). For some problems, an intermediate bandwidth reduces the number of PCG iterations.

TolPCG

Termination tolerance on the PCG iteration, a positive scalar. The default is 0.1.

Levenberg-Marquardt Algorithm Only

The Levenberg-Marquardt algorithm uses the following options:

ScaleProblem

Setting ScaleProblem to 'Jacobian' can sometimes improve the convergence of a poorly scaled problem. The default is 'none'.

Examples

Find x that minimizes

$$\sum_{k=1}^{10} \left( 2 + 2k - e^{k x_1} - e^{k x_2} \right)^2,$$

starting at the point x = [0.3, 0.4].

Because lsqnonlin assumes that the sum of squares is not explicitly formed in the user-defined function, the function passed to lsqnonlin should instead compute the vector-valued function

$$F_k(x) = 2 + 2k - e^{k x_1} - e^{k x_2},$$

for k = 1 to 10 (that is, F should have 10 components).

First, write a file to compute the 10-component vector F.

function F = myfun(x)
k = 1:10;
F = 2 + 2*k - exp(k*x(1)) - exp(k*x(2));

Next, invoke an optimization routine.

x0 = [0.3 0.4]                        % Starting guess
[x,resnorm] = lsqnonlin(@myfun,x0);   % Invoke optimizer

After about 24 function evaluations, this example gives the solution

x,resnorm
x = 
     0.2578   0.2578

resnorm = 
     124.3622
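
The same fit can be expressed without a file by passing an anonymous function; a minimal sketch is

k = 1:10;
fun = @(x) 2 + 2*k - exp(k*x(1)) - exp(k*x(2));   % 10-component residual
[x,resnorm] = lsqnonlin(fun,[0.3 0.4]);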

Diagnostics

Trust-Region-Reflective Optimization

The trust-region-reflective method does not allow equal upper and lower bounds. For example, if lb(2)==ub(2), lsqnonlin gives the error

Equal upper and lower bounds not permitted.

(lsqnonlin does not handle equality constraints, which is another way to formulate equal bounds. If equality constraints are present, use fmincon, fminimax, or fgoalattain for alternative formulations where equality constraints can be included.)

Limitations

The function to be minimized must be continuous. lsqnonlin might only give local solutions.

lsqnonlin can solve complex-valued problems directly with the Levenberg-Marquardt algorithm. However, this algorithm does not accept bound constraints. For a complex problem with bound constraints, split the variables into real and imaginary parts, and use the trust-region-reflective algorithm. See Fit a Model to Complex-Valued Data.
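
A minimal sketch of the split, with a hypothetical complex residual fcom and bounds on the real and imaginary parts:

fcom = @(z) exp(z) - (2 + 3i);                  % hypothetical complex residual
funsplit = @(x) [real(fcom(x(1) + 1i*x(2)));    % x(1) = real part of z
                 imag(fcom(x(1) + 1i*x(2)))];   % x(2) = imaginary part of z
x = lsqnonlin(funsplit,[0 0],[-5 -5],[5 5]);    % bounds permitted with trust-region-reflective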

Trust-Region-Reflective Optimization

The trust-region-reflective algorithm for lsqnonlin does not solve underdetermined systems; it requires that the number of equations, i.e., the row dimension of F, be at least as great as the number of variables. In the underdetermined case, the Levenberg-Marquardt algorithm is used instead.

The preconditioner computation used in the preconditioned conjugate gradient part of the trust-region-reflective method forms JᵀJ (where J is the Jacobian matrix) before computing the preconditioner; therefore, a row of J with many nonzeros, which results in a nearly dense product JᵀJ, can lead to a costly solution process for large problems.

If components of x have no upper (or lower) bounds, lsqnonlin prefers that the corresponding components of ub (or lb) be set to Inf (or -Inf for lower bounds), as opposed to an arbitrary but very large positive (or negative for lower bounds) number.

Trust-Region-Reflective Problem Coverage and Requirements

For Large Problems

  • Provide the sparsity structure of the Jacobian or compute the Jacobian in fun.

  • The Jacobian should be sparse.

Levenberg-Marquardt Optimization

The Levenberg-Marquardt algorithm does not handle bound constraints.

Because the trust-region-reflective algorithm does not handle underdetermined systems and the Levenberg-Marquardt algorithm does not handle bound constraints, lsqnonlin cannot solve problems that have both of these characteristics.

More About


Algorithms

Trust-Region-Reflective Optimization

By default, lsqnonlin chooses the trust-region-reflective algorithm. This algorithm is a subspace trust-region method and is based on the interior-reflective Newton method described in [1] and [2]. Each iteration involves the approximate solution of a large linear system using the method of preconditioned conjugate gradients (PCG). See Trust-Region-Reflective Least Squares, and in particular Large Scale Nonlinear Least Squares.

Levenberg-Marquardt Optimization

If you set the Algorithm option to 'levenberg-marquardt' using optimoptions, lsqnonlin uses the Levenberg-Marquardt method [4], [5], and [6]. See Levenberg-Marquardt Method.

References

[1] Coleman, T.F. and Y. Li, "An Interior, Trust Region Approach for Nonlinear Minimization Subject to Bounds," SIAM Journal on Optimization, Vol. 6, pp. 418–445, 1996.

[2] Coleman, T.F. and Y. Li, "On the Convergence of Reflective Newton Methods for Large-Scale Nonlinear Minimization Subject to Bounds," Mathematical Programming, Vol. 67, Number 2, pp. 189–224, 1994.

[3] Dennis, J.E., Jr., "Nonlinear Least-Squares," State of the Art in Numerical Analysis, ed. D. Jacobs, Academic Press, pp. 269–312, 1977.

[4] Levenberg, K., "A Method for the Solution of Certain Problems in Least-Squares," Quarterly Applied Math. 2, pp. 164–168, 1944.

[5] Marquardt, D., "An Algorithm for Least-Squares Estimation of Nonlinear Parameters," SIAM Journal Applied Math., Vol. 11, pp. 431–441, 1963.

[6] Moré, J.J., "The Levenberg-Marquardt Algorithm: Implementation and Theory," Numerical Analysis, ed. G. A. Watson, Lecture Notes in Mathematics 630, Springer Verlag, pp. 105–116, 1977.
