Minimizing an Expensive Optimization Problem Using Parallel Computing

This demo shows how to speed up the minimization of an expensive optimization problem using functions in Optimization Toolbox™ and Global Optimization Toolbox. In the first part of the demo we solve the optimization problem by evaluating functions serially; in the second part we solve the same problem using the parallel for loop (PARFOR) feature to evaluate functions in parallel. We then compare the time taken by the optimization functions in both cases.
For the purpose of this demo, we solve a problem in four variables, where the objective and constraint functions are made artificially expensive by having them compute the eigenvalues of a large matrix.
type expensive_objfun.m
type expensive_confun.m
function f = expensive_objfun(x)
%EXPENSIVE_OBJFUN An expensive objective function used in optimparfor demo.

%   Copyright 2007 The MathWorks, Inc.
%   $Revision: 1.1.6.1 $  $Date: 2007/12/10 21:51:04 $

% Simulate an expensive function by performing an expensive computation
eig(magic(300));
% Evaluate objective function
f = exp(x(1)) * (4*x(3)^2 + 2*x(4)^2 + 4*x(1)*x(2) + 2*x(2) + 1);

function [c,ceq] = expensive_confun(x)
%EXPENSIVE_CONFUN An expensive constraint function used in optimparfor demo.

%   Copyright 2007 The MathWorks, Inc.
%   $Revision: 1.1.6.1 $  $Date: 2007/12/10 21:51:03 $

% Simulate an expensive function by performing an expensive computation
eig(magic(450));
% Evaluate constraints
c = [1.5 + x(1)*x(2)*x(3) - x(1) - x(2) - x(4);
     -x(1)*x(2) + x(4) - 10];
% No nonlinear equality constraints:
ceq = [];
We can measure the approximate time taken by the objective and constraint functions to evaluate a single point. This gives us an estimate of the total time required to minimize the problem.
startTime = tic;
for i = 1:10
    expensive_objfun(rand(1,4));
end
stopTime = toc(startTime);
averageTime = stopTime/10;
fprintf('Objective function evaluation: %g seconds.\n', averageTime);

startTime = tic;
for i = 1:10
    expensive_confun(rand(1,4));
end
stopTime = toc(startTime);
averageTime = stopTime/10;
fprintf('Constraint function evaluation: %g seconds.\n', averageTime);
Objective function evaluation: 0.199702 seconds.
Constraint function evaluation: 0.190456 seconds.
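These per-evaluation times can be turned into a rough back-of-the-envelope estimate of the total run time; the evaluation count below is an assumed illustrative number, not taken from this demo.

```matlab
% Rough estimate of total optimization time.
% The evaluation count is hypothetical; FMINCON reports the actual
% count in the F-count column of its iterative display.
objTime  = 0.2;    % seconds per objective evaluation (measured above)
conTime  = 0.19;   % seconds per constraint evaluation (measured above)
numEvals = 100;    % assumed number of function evaluations
estimate = numEvals * (objTime + conTime);
fprintf('Estimated optimization time: %g seconds.\n', estimate);
```

Since each evaluation costs a few tenths of a second and a solver may need hundreds of them, even a modest per-evaluation saving from parallelism adds up.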
We are interested in measuring the time taken by FMINCON so that we can compare it to the parallel FMINCON evaluation.
startPoint = [1 -2 0 5];
options = optimset('Display','iter','Algorithm','active-set');
startTime = tic;
fmincon(@expensive_objfun,startPoint,[],[],[],[],[],[],@expensive_confun,options);
time_fmincon_sequential = toc(startTime);
fprintf('Serial FMINCON optimization takes %g seconds.\n',time_fmincon_sequential);
                                Max     Line search  Directional  First-order
 Iter F-count        f(x)   constraint   steplength   derivative   optimality  Procedure
    0      5      106.013        -2.5
    1     16      40.1132       1.131       0.0156        -83.7         24.6
    2     23      23.3775       2.136       0.25          -12.3         20.5
    3     31      14.3993       1.307       0.125         -9.59         25.8
    4     36       6.2782      0.9803       1             -6.95         5.87
    5     41     0.412294      0.4266       1             -2.56          3.7
    6     46     -1.71467     0.06477       1             -1.92         2.62
    7     52      -3.0813       2.682       0.5           -1.83         9.34
    8     57     -5.97905      0.1211       1             -2.82         5.75
    9     62     -6.95348     0.01616       1             -1.77        0.639
   10     67     -7.05833     0.02524       1            -0.375        0.852
   11     72     -7.08266   0.0003593       1            -0.267        0.426
   12     77      -7.0948   6.967e-05       1            -0.235        0.509
   13     82     -7.12753    0.002446       1            -0.296        0.344
   14     87     -7.15265    0.009521       1            -0.161       0.0809
   15     92     -7.16982     0.01013       1            -0.107        0.293
   16     97     -7.17988   0.0005696       1            -0.107        0.154
   17    102     -7.18041   2.954e-06       1           -0.0535      0.00981
   18    107     -7.18041     3.2e-08       1           -0.00416     0.000112  Hessian modified

Local minimum possible. Constraints satisfied.

fmincon stopped because the predicted change in the objective function
is less than the default value of the function tolerance and constraints
were satisfied to within the default value of the constraint tolerance.

Active inequalities (to within options.TolCon = 1e-06):
  lower      upper     ineqlin   ineqnonlin
                                      2
Serial FMINCON optimization takes 34.1886 seconds.
Since GA usually takes many more function evaluations than FMINCON, we remove the expensive constraint from this problem and perform unconstrained optimization instead, passing empty matrices ([]) for the constraints. In addition, we limit the maximum number of generations to 15 so that GA can terminate in a reasonable amount of time. We are interested in measuring the time taken by GA so that we can compare it to the parallel GA run. Note that running GA requires Global Optimization Toolbox.
try
    gaAvailable = false;
    nvar = 4;
    gaoptions = gaoptimset('Generations',15,'Display','iter');
    startTime = tic;
    ga(@expensive_objfun,nvar,[],[],[],[],[],[],[],gaoptions);
    time_ga_sequential = toc(startTime);
    fprintf('Serial GA optimization takes %g seconds.\n',time_ga_sequential);
    gaAvailable = true;
catch ME
    warning('optimdemos:optimparfor:gaNotFound', ...
        ['GA function requires a license for Global Optimization Toolbox; ', ...
         'skipping a call to the GA in this demo.']);
end
                               Best          Mean      Stall
Generation   f-count           f(x)          f(x)    Generations
    1           40            1.469         23.18        0
    2           60            1.236         20.34        0
    3           80            0.655         7.149        0
    4          100          -0.9582         5.889        0
    5          120           -1.876         4.473        0
    6          140           -1.876         2.757        1
    7          160           -1.958        0.2561        0
    8          180           -2.387        -2.363        0
    9          200           -5.238        -3.562        0
   10          220           -23.18        -2.033        0
   11          240           -44.76        -1.239        0
   12          260           -44.76        -12.72        1
   13          280           -44.76         -23.8        2
   14          300           -64.49        -34.12        0
   15          320           -64.49        -42.11        1
Optimization terminated: maximum number of generations exceeded.
Serial GA optimization takes 38.815 seconds.
The finite differencing used by the functions in Optimization Toolbox to approximate derivatives is done in parallel using the PARFOR feature if Parallel Computing Toolbox is available and a MATLAB pool is running. Similarly, the GA, GAMULTIOBJ, and PATTERNSEARCH solvers in Global Optimization Toolbox can evaluate functions in parallel. To use the PARFOR feature, we use the MATLABPOOL function to set up the parallel environment. By default, MATLABPOOL starts MATLAB® workers on the local machine. The computer on which this demo is published has four cores, so we start four workers. When you run this demo, if a MATLAB pool is already open we use the available number of workers; see the MATLABPOOL documentation for more information.
needNewWorkers = (matlabpool('size') == 0);
if needNewWorkers
    % Open a new MATLAB pool with 4 workers.
    matlabpool open 4
end
Starting matlabpool using the 'local' configuration ... connected to 4 labs.
To minimize our expensive optimization problem using the parallel FMINCON function, we need to explicitly indicate that our objective and constraint functions can be evaluated in parallel and that we want FMINCON to use its parallel functionality wherever possible. Currently, finite differencing can be done in parallel. We are interested in measuring the time taken by FMINCON so that we can compare it to the serial FMINCON run.
options = optimset(options,'UseParallel','always');
startTime = tic;
fmincon(@expensive_objfun,startPoint,[],[],[],[],[],[],@expensive_confun,options);
time_fmincon_parallel = toc(startTime);
fprintf('Parallel FMINCON optimization takes %g seconds.\n',time_fmincon_parallel);
                                Max     Line search  Directional  First-order
 Iter F-count        f(x)   constraint   steplength   derivative   optimality  Procedure
    0      5      106.013        -2.5
    1     16      40.1132       1.131       0.0156        -83.7         24.6
    2     23      23.3775       2.136       0.25          -12.3         20.5
    3     31      14.3993       1.307       0.125         -9.59         25.8
    4     36       6.2782      0.9803       1             -6.95         5.87
    5     41     0.412294      0.4266       1             -2.56          3.7
    6     46     -1.71467     0.06477       1             -1.92         2.62
    7     52      -3.0813       2.682       0.5           -1.83         9.34
    8     57     -5.97905      0.1211       1             -2.82         5.75
    9     62     -6.95348     0.01616       1             -1.77        0.639
   10     67     -7.05833     0.02524       1            -0.375        0.852
   11     72     -7.08266   0.0003593       1            -0.267        0.426
   12     77      -7.0948   6.967e-05       1            -0.235        0.509
   13     82     -7.12753    0.002446       1            -0.296        0.344
   14     87     -7.15265    0.009521       1            -0.161       0.0809
   15     92     -7.16982     0.01013       1            -0.107        0.293
   16     97     -7.17988   0.0005696       1            -0.107        0.154
   17    102     -7.18041   2.954e-06       1           -0.0535      0.00981
   18    107     -7.18041     3.2e-08       1           -0.00416     0.000112  Hessian modified

Local minimum possible. Constraints satisfied.

fmincon stopped because the predicted change in the objective function
is less than the default value of the function tolerance and constraints
were satisfied to within the default value of the constraint tolerance.

Active inequalities (to within options.TolCon = 1e-06):
  lower      upper     ineqlin   ineqnonlin
                                      2
Parallel FMINCON optimization takes 26.1447 seconds.
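Conceptually, the parallel speedup comes from evaluating the finite-difference perturbations concurrently: each perturbed point costs one expensive function call, and the calls are independent. The following is an illustrative sketch of that idea, not the toolbox's internal implementation.

```matlab
% Illustrative sketch: forward finite differences with PARFOR.
% NOT the actual Optimization Toolbox code; it only shows why
% gradient estimation parallelizes well across workers.
x0   = rand(1,4);            % some point in the design space
h    = sqrt(eps);            % forward-difference step size
f0   = expensive_objfun(x0); % baseline objective value
grad = zeros(size(x0));
parfor k = 1:numel(x0)
    xk      = x0;
    xk(k)   = xk(k) + h;                       % perturb one coordinate
    grad(k) = (expensive_objfun(xk) - f0) / h; % one expensive call per worker
end
```

With four variables and four workers, the four perturbed evaluations in the loop can run simultaneously, which is why the finite-differencing portion of each FMINCON iteration shrinks.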
To minimize our expensive optimization problem using the GA function, we need to explicitly indicate that our objective function can be evaluated in parallel and that we want GA to use its parallel functionality wherever possible. To use the parallel GA we also require that the 'Vectorized' option be set to the default (i.e., 'off'). We are again interested in measuring the time taken by GA so that we can compare it to the serial GA run. Though this run may be different from the serial one because GA uses a random number generator, the number of expensive function evaluations is the same in both runs. Note that running GA requires Global Optimization Toolbox.
if gaAvailable
    gaoptions = gaoptimset(gaoptions,'UseParallel','always');
    startTime = tic;
    ga(@expensive_objfun,nvar,[],[],[],[],[],[],[],gaoptions);
    time_ga_parallel = toc(startTime);
    fprintf('Parallel GA optimization takes %g seconds.\n',time_ga_parallel);
end
                               Best          Mean      Stall
Generation   f-count           f(x)          f(x)    Generations
    1           40            2.084         59.95        0
    2           60            1.636         21.14        0
    3           80           0.1087         17.08        0
    4          100          -0.4921         15.78        0
    5          120          -0.4921          19.7        1
    6          140           -192.4         7.161        0
    7          160           -416.4        -66.16        0
    8          180           -416.4        -98.58        1
    9          200           -416.4          -157        2
   10          220           -680.2        -287.2        0
   11          240            -1250          -406        0
   12          260            -1250        -586.4        1
   13          280            -1250        -725.5        2
   14          300            -1298        -923.3        0
   15          320            -1298         -1099        1
Optimization terminated: maximum number of generations exceeded.
Parallel GA optimization takes 22.9062 seconds.
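With all four timing variables now in the workspace, the speedups can be summarized directly. This small convenience snippet is not part of the original demo.

```matlab
% Compare the serial and parallel run times collected above.
fprintf('FMINCON speedup: %.2fx\n', ...
    time_fmincon_sequential/time_fmincon_parallel);
if gaAvailable
    fprintf('GA speedup:      %.2fx\n', ...
        time_ga_sequential/time_ga_parallel);
end
```

For the timings published here, that works out to roughly 1.3x for FMINCON and 1.7x for GA on four workers.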
Utilizing parallel function evaluation via PARFOR improved the efficiency of both FMINCON and GA. The improvement is typically larger for more expensive objective and constraint functions, and using more workers or a dedicated cluster with MATLABPOOL can improve performance further. Finally, we close the parallel environment by calling MATLABPOOL.
if needNewWorkers
    matlabpool close
end
Sending a stop signal to all the labs ... stopped.