linesearch_adaptive

PURPOSE

Adaptive line search algorithm (step size selection) for descent methods.

SYNOPSIS

function [stepsize, newx, newkey, lsstats] = linesearch_adaptive(problem, x, d, f0, df0, options, storedb, key)

DESCRIPTION

 Adaptive line search algorithm (step size selection) for descent methods.

 function [stepsize, newx, newkey, lsstats] = 
        linesearch_adaptive(problem, x, d, f0, df0, options, storedb, key)

 Adaptive linesearch algorithm for descent methods, based on a simple
 backtracking method. Unlike linesearch.m, this function is not
 invariant under rescaling of the search direction d. The two line-search
 methods differ mainly in their strategy to pick the initial step size.
 
 Below, the step is constructed as alpha*d, and the step size is the norm
 of that vector, thus: stepsize = alpha*norm_d. The step is executed by
 retracting the vector alpha*d from the current point x, giving newx.
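
 As a minimal sketch of this mechanism (the sphere manifold and the
 numbers below are illustrative only, not part of this function):

     M = spherefactory(3);            % example manifold
     x = M.rand();                    % current point
     d = M.randvec(x);                % a tangent direction at x
     alpha = 0.1;                     % scalar multiplier
     newx = M.retr(x, d, alpha);      % retract alpha*d from x
     stepsize = alpha * M.norm(x, d); % stepsize = alpha*norm_d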

 This line search may create and maintain a structure called lsmem inside
 storedb.internal. This gives it a memory of what happened in previous
 calls, which is typically used to make a first guess at the initial
 step size.
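
 For illustration only, here is a hedged sketch of two direct calls
 sharing one StoreDB, so that the second call would start from the step
 size remembered after the first (the cost function is hypothetical):

     A = diag(1:5);
     problem.M = spherefactory(5);
     problem.cost  = @(x) x'*A*x;
     problem.egrad = @(x) 2*A*x;
     storedb = StoreDB();
     key = storedb.getNewKey();
     x = problem.M.rand();
     [f0, grad] = getCostGrad(problem, x, storedb, key);
     d = problem.M.lincomb(x, -1, grad); % steepest-descent direction
     df0 = problem.M.inner(x, grad, d);  % negative along a descent direction
     [stepsize, newx, newkey] = linesearch_adaptive(problem, x, d, ...
                                      f0, df0, struct(), storedb, key);
     % The suggestion for the next trial now sits in
     % storedb.internal.lsmem.init_alpha.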

 Inputs/Outputs: see the help for linesearch.

 See also: steepestdescent conjugategradients linesearch
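
 In typical use, this function is not called directly: a solver invokes
 it through the options structure. A minimal sketch, assuming the
 standard solver interface (the problem setup is hypothetical):

     problem.M = spherefactory(5);
     problem.cost  = @(x) -x(1);
     problem.egrad = @(x) [-1; zeros(4, 1)];
     options.linesearch = @linesearch_adaptive; % select this line search
     x = steepestdescent(problem, [], options);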

SOURCE CODE

function [stepsize, newx, newkey, lsstats] = ...
  linesearch_adaptive(problem, x, d, f0, df0, options, storedb, key)
% Adaptive line search algorithm (step size selection) for descent methods.
%
% function [stepsize, newx, newkey, lsstats] =
%        linesearch_adaptive(problem, x, d, f0, df0, options, storedb, key)
%
% Adaptive linesearch algorithm for descent methods, based on a simple
% backtracking method. Unlike linesearch.m, this function is not
% invariant under rescaling of the search direction d. The two line-search
% methods differ mainly in their strategy to pick the initial step size.
%
% Below, the step is constructed as alpha*d, and the step size is the norm
% of that vector, thus: stepsize = alpha*norm_d. The step is executed by
% retracting the vector alpha*d from the current point x, giving newx.
%
% This line search may create and maintain a structure called lsmem inside
% storedb.internal. This gives it a memory of what happened in previous
% calls, which is typically used to make a first guess at the initial
% step size.
%
% Inputs/Outputs: see the help for linesearch.
%
% See also: steepestdescent conjugategradients linesearch

% This file is part of Manopt: www.manopt.org.
% Original author: Bamdev Mishra, Dec. 30, 2012.
% Contributors: Nicolas Boumal
% Change log:
%
%   Sept. 13, 2013 (NB):
%       The automatic direction reversal feature was removed (it triggered
%       when df0 > 0). Direction reversal is a decision that needs to be
%       made by the solver, so it can know about it.
%
%   Nov. 7, 2013 (NB):
%       The whole function has been recoded to mimic more closely the new
%       version of linesearch.m. The parameters are available through the
%       options structure passed to the solver and have the same names and
%       same meaning as for the base linesearch. The information is logged
%       more reliably.
%
%   April 3, 2015 (NB):
%       Works with the new StoreDB class system.
%
%   April 8, 2015 (NB):
%       Got rid of lsmem input/output: now maintained in storedb.internal.


    % Allow omission of the key, and even of storedb.
    if ~exist('key', 'var')
        if ~exist('storedb', 'var')
            storedb = StoreDB();
        end
        key = storedb.getNewKey();
    end

    % Backtracking default parameters. These can be overwritten in the
    % options structure which is passed to the solver.
    default_options.ls_contraction_factor = .5;
    default_options.ls_suff_decr = .5;
    default_options.ls_max_steps = 10;
    default_options.ls_initial_stepsize = 1;
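    % A caller could override these defaults, e.g. (hypothetical values):
    %   options.ls_contraction_factor = 0.7;  % milder backtracking
    %   options.ls_max_steps = 25;            % larger evaluation budget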

    if ~exist('options', 'var') || isempty(options)
        options = struct();
    end
    options = mergeOptions(default_options, options);

    contraction_factor = options.ls_contraction_factor;
    suff_decr = options.ls_suff_decr;
    max_ls_steps = options.ls_max_steps;
    initial_stepsize = options.ls_initial_stepsize;

    % Compute the norm of the search direction.
    norm_d = problem.M.norm(x, d);

    % If this is not the first iteration, lsmem should contain a
    % suggestion init_alpha for the initial step. Otherwise, fall back
    % to a user supplied suggestion.
    if isfield(storedb.internal, 'lsmem') && ...
       isfield(storedb.internal.lsmem, 'init_alpha')
        % Pick the initial step size based on where we were last time.
        alpha = storedb.internal.lsmem.init_alpha;
    else
        alpha = initial_stepsize / norm_d;
    end

    % Make the chosen step and compute the cost there.
    newx = problem.M.retr(x, d, alpha);
    newkey = storedb.getNewKey();
    newf = getCost(problem, newx, storedb, newkey);
    cost_evaluations = 1;

    % Backtrack while the Armijo criterion is not satisfied. Since d is
    % a descent direction, df0 < 0, so this demands an actual decrease
    % in the cost, proportional to both alpha and df0.
    while newf > f0 + suff_decr*alpha*df0

        % Reduce the step size,
        alpha = contraction_factor * alpha;

        % and look closer down the line.
        newx = problem.M.retr(x, d, alpha);
        newkey = storedb.getNewKey();
        newf = getCost(problem, newx, storedb, newkey);
        cost_evaluations = cost_evaluations + 1;

        % Make sure we don't run out of budget.
        if cost_evaluations >= max_ls_steps
            break;
        end

    end

    % If we got here without obtaining a decrease, we reject the step.
    if newf > f0
        alpha = 0;
        newx = x;
        newkey = key;
        newf = f0; %#ok<NASGU>
    end

    % As seen outside this function, stepsize is the size of the vector we
    % retract to make the step from x to newx. Since the step is alpha*d:
    stepsize = alpha * norm_d;

    % Fill lsmem with a suggestion for what the next initial step size
    % trial should be. On average we intend to do only one extra cost
    % evaluation. Notice how the suggestion is not about stepsize but about
    % alpha. This is the reason why this line search is not invariant under
    % rescaling of the search direction d.
    switch cost_evaluations
        case 1
            % If things go very well, push your luck.
            init_alpha = 2 * alpha;
        case 2
            % If things go reasonably well, try to keep pace.
            init_alpha = alpha;
        otherwise
            % If we backtracked a lot, the new stepsize is probably quite
            % small: try to recover.
            init_alpha = 2 * alpha;
    end
    storedb.internal.lsmem.init_alpha = init_alpha;

    % Return some statistics also, for possible analysis.
    lsstats.costevals = cost_evaluations;
    lsstats.stepsize = stepsize;
    lsstats.alpha = alpha;

end
