Optimization Toolbox solvers for nonlinear problems use gradient-based methods to minimize or maximize an objective. The gradient of the objective function can either be estimated by the solver using finite differences or supplied to the solver by the user.
Optimization Toolbox can be used with Parallel Computing Toolbox to solve problems that benefit from parallel computation. You can use parallel computing to decrease time to solution by enabling built-in parallel computing support or by defining a custom parallel computing implementation of an optimization problem.
Built-in support for parallel computing in Optimization Toolbox enables you to accelerate the gradient estimation step in select solvers for constrained nonlinear optimization problems and for multiobjective goal attainment and minimax problems.
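For solvers that support it, the built-in behavior is enabled through a single option. The sketch below assumes a user-defined objective `myObjective` and uses the documented `UseParallel` option of `optimoptions`; with a parallel pool open, `fmincon` distributes its finite-difference gradient evaluations across the pool workers.

```matlab
% Open a parallel pool with default settings (requires Parallel Computing Toolbox)
parpool;

% Enable built-in parallel gradient estimation in fmincon
options = optimoptions('fmincon','UseParallel',true);

x0 = [1 2];  % starting point
[x,fval] = fmincon(@myObjective,x0,[],[],[],[],[],[],[],options);
```

The same `UseParallel` option applies to the multiobjective solvers `fgoalattain` and `fminimax`.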
You can also create a custom parallel implementation by writing the optimization problem itself to use parallel computing. Defining either the objective function or a constraint function to run in parallel decreases the time required for each function evaluation.
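As a minimal sketch of the custom approach, the objective function below parallelizes its own evaluation with a `parfor` loop. Here `costlySimulation` is a hypothetical placeholder for an expensive model evaluation; any solver can then call this objective without needing built-in parallel support.

```matlab
function f = parallelObjective(x)
% Objective function that evaluates its costly terms in parallel.
% costlySimulation is a hypothetical expensive model evaluation.
n = numel(x);
terms = zeros(1,n);
parfor k = 1:n
    terms(k) = costlySimulation(x(k));  % independent, so safe to parallelize
end
f = sum(terms);
end
```

Because the loop iterations are independent, `parfor` distributes them across the workers of the current parallel pool.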
Speeding Up Optimization Problems Using Parallel Computing
In this webinar, we will use two case studies to demonstrate how you can use parallel computing to speed up single-level and multilevel optimization problems in MATLAB.
By default, Optimization Toolbox solvers estimate the partial derivatives of the objective function using finite differences. Alternatively, you can supply functions that calculate the partial derivatives directly, significantly reducing the overhead of the derivative estimation step.
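To supply derivatives, the objective function returns the gradient as a second output, and the documented `SpecifyObjectiveGradient` option tells the solver to use it. The sketch below uses the Rosenbrock function, a standard test problem, as a concrete example.

```matlab
function [f,g] = rosenbrockWithGrad(x)
% Rosenbrock function and its analytic gradient
f = 100*(x(2) - x(1)^2)^2 + (1 - x(1))^2;
if nargout > 1  % gradient requested by the solver
    g = [-400*x(1)*(x(2) - x(1)^2) - 2*(1 - x(1));
          200*(x(2) - x(1)^2)];
end
end
```

```matlab
% Tell fmincon the objective supplies its own gradient
options = optimoptions('fmincon','SpecifyObjectiveGradient',true);
x = fmincon(@rosenbrockWithGrad,[-1 2],[],[],[],[],[],[],[],options);
```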
Calculating partial derivatives of an objective function can be a tedious task. By expressing the problem symbolically using Symbolic Math Toolbox™, you can use built-in functions for automatically calculating objective function partial derivatives. MATLAB code can then be generated for use with Optimization Toolbox solvers.
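A minimal sketch of this workflow, using the documented `gradient` and `matlabFunction` functions from Symbolic Math Toolbox, might look like:

```matlab
% Define the objective symbolically (Rosenbrock function as an example)
syms x1 x2
f = 100*(x2 - x1^2)^2 + (1 - x1)^2;

% Compute the partial derivatives automatically
g = gradient(f,[x1; x2]);

% Generate a MATLAB function handle that accepts a vector input,
% suitable for use inside an Optimization Toolbox objective function
gradFun = matlabFunction(g,'Vars',{[x1 x2]});
```

The generated `gradFun` can then be called from an objective function that returns the gradient as its second output, avoiding hand-derived derivative code.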