Home > manopt > solvers > gradientapproximations > approxgradientFD.m

approxgradientFD

PURPOSE

Gradient approximation function handle based on finite differences of the cost.

SYNOPSIS

function gradfun = approxgradientFD(problem, options)

DESCRIPTION

 Gradient approximation function handle based on finite differences of the cost.

 function gradfun = approxgradientFD(problem)
 function gradfun = approxgradientFD(problem, options)

 Input:

 A Manopt problem structure (already containing the manifold and enough
 information to compute the cost) and an options structure (optional),
 containing up to two options:
    options.stepsize (positive double; default: 2^-23).
    options.subspacedim (positive integer; default: [], for M.dim()).

 If the cost cannot be computed on 'problem', a warning is issued.

 Output:
 
 Returns a function handle, encapsulating a generic finite difference
 approximation of the gradient of the problem cost. The finite difference
 is based on M.dim()+1 computations of the cost.
 
 The returned gradfun has this calling pattern:
 
   function gradfd = gradfun(x)
   function gradfd = gradfun(x, storedb)
   function gradfd = gradfun(x, storedb, key)
 
 x is a point on the manifold problem.M, storedb is a StoreDB object,
 and key is the StoreDB key to point x.
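 In symbols, writing h for options.stepsize, R_x for the retraction
 problem.M.retr, and {B_1, ..., B_d} for an orthonormal basis of the
 (sub)space of dimension d (that is, options.subspacedim, or M.dim() by
 default), the returned handle computes the forward-difference
 approximation sketched below (notation introduced here for
 illustration; it does not appear verbatim in the code):

```latex
\operatorname{grad} f(x) \;\approx\; \sum_{k=1}^{d}
  \frac{f\!\left(R_x(h\,B_k)\right) - f(x)}{h}\, B_k
```

 Each term of the sum costs one evaluation of f, which together with
 f(x) itself accounts for the d+1 cost computations mentioned above.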

 Usage:

 Typically, the user will set problem.M and other fields to define the
 cost (typically, problem.cost). Then, to use this general-purpose
 gradient approximation:

   problem.approxgrad = approxgradientFD(problem, options);
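
 For instance, a minimal sketch (assuming Manopt is on the path; the
 particular cost, a quadratic form on the unit sphere, is illustrative):

```matlab
% Minimize x'*A*x over the unit sphere in R^n, supplying only the cost.
% The gradient is approximated by finite differences via approxgradientFD.
n = 100;
A = randn(n); A = (A + A')/2;            % random symmetric matrix
problem.M = spherefactory(n);            % unit sphere manifold
problem.cost = @(x) x'*(A*x);            % cost only: no gradient given
problem.approxgrad = approxgradientFD(problem);  % default options
x = steepestdescent(problem);            % solver uses the FD gradient
```

 Note that finite differences require M.dim()+1 cost evaluations per
 gradient call, so this is best suited to prototyping or to problems of
 modest dimension.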

 See also: steepestdescent conjugategradient

CROSS-REFERENCE INFORMATION

This function calls:
This function is called by:

SUBFUNCTIONS

SOURCE CODE

0001 function gradfun = approxgradientFD(problem, options)
0002 % Gradient approximation function handle based on finite differences of the cost.
0003 %
0004 % function gradfun = approxgradientFD(problem)
0005 % function gradfun = approxgradientFD(problem, options)
0006 %
0007 % Input:
0008 %
0009 % A Manopt problem structure (already containing the manifold and enough
0010 % information to compute the cost) and an options structure (optional),
0011 %    containing up to two options:
0012 %    options.stepsize (positive double; default: 2^-23).
0013 %    options.subspacedim (positive integer; default: [], for M.dim()).
0014 %
0015 % If the cost cannot be computed on 'problem', a warning is issued.
0016 %
0017 % Output:
0018 %
0019 % Returns a function handle, encapsulating a generic finite difference
0020 % approximation of the gradient of the problem cost. The finite difference
0021 % is based on M.dim()+1 computations of the cost.
0022 %
0023 % The returned gradfun has this calling pattern:
0024 %
0025 %   function gradfd = gradfun(x)
0026 %   function gradfd = gradfun(x, storedb)
0027 %   function gradfd = gradfun(x, storedb, key)
0028 %
0029 % x is a point on the manifold problem.M, storedb is a StoreDB object,
0030 % and key is the StoreDB key to point x.
0031 %
0032 % Usage:
0033 %
0034 % Typically, the user will set problem.M and other fields to define the
0035 % cost (typically, problem.cost). Then, to use this general-purpose
0036 % gradient approximation:
0037 %
0038 %   problem.approxgrad = approxgradientFD(problem, options);
0039 %
0040 % See also: steepestdescent conjugategradient
0041 
0042 % This file is part of Manopt: www.manopt.org.
0043 % Original author: Nicolas Boumal, Nov. 1, 2016.
0044 % Contributors:
0045 % Change log:
0046 
0047     % This gradient approximation is based on the cost:
0048     % check availability.
0049     if ~canGetCost(problem)
0050         warning('manopt:approxgradFD:nocost', ...
0051                 'approxgradFD requires the cost to be computable.');
0052     end
0053 
0054     % Set local defaults here, and merge with user options, if any.
0055     localdefaults.stepsize = 2^-23;
0056     localdefaults.subspacedim = [];
0057     if ~exist('options', 'var') || isempty(options)
0058         options = struct();
0059     end
0060     options = mergeOptions(localdefaults, options);
0061     
0062     % Finite-difference parameters
0063     % How far do we look?
0064     stepsize = options.stepsize;
0065     % Approximate the projection of the gradient on a random subspace of
0066     % what dimension? If [], uses full tangent space.
0067     subspacedim = options.subspacedim;
0068                    
0069     % Build and return the function handle here. This extra construct via
0070     % funhandle makes it possible to make storedb and key optional.
0071     gradfun = @funhandle;
0072     function gradfd = funhandle(x, storedb, key)
0073         % Allow omission of the key, and even of storedb.
0074         if ~exist('key', 'var')
0075             if ~exist('storedb', 'var')
0076                 storedb = StoreDB();
0077             end
0078             key = storedb.getNewKey();
0079         end
0080         gradfd = gradientFD(stepsize, subspacedim, problem, x, storedb, key);
0081     end
0082     
0083 end
0084 
0085 
0086 function gradfd = gradientFD(stepsize, subspacedim, problem, x, storedb, key)
0087 % This function does the actual work.
0088 %
0089 % Original code: Nov. 1, 2016 (NB).
0090     
0091     % Evaluate the cost at the root point
0092     fx = getCost(problem, x, storedb, key);
0093 
0094     % Pick an orthonormal basis for the tangent space at x, or a subspace
0095     % thereof. The default is a full subspace. If a strict subspace is
0096     % picked, the returned vector approximates the orthogonal projection of
0097     % the gradient to that subspace.
0098     B = tangentorthobasis(problem.M, x, subspacedim);
0099     
0100     % Use finite differences to approximate the directional derivative
0101     % along each direction in the basis B.
0102     df = zeros(size(B));
0103     for k = 1 : numel(B)
0104         % Move in the B{k} direction
0105         xk = problem.M.retr(x, B{k}, stepsize);
0106         keyk = storedb.getNewKey();
0107         % Evaluate the cost there
0108         fxk = getCost(problem, xk, storedb, keyk);
0109         % Don't keep this point in cache
0110         storedb.remove(keyk);
0111         % Finite difference
0112         df(k) = (fxk - fx)/stepsize;
0113     end
0114     
0115     % Build the gradient approximation.
0116     gradfd = lincomb(problem.M, x, B, df);
0117     
0118 end

Generated on Mon 10-Sep-2018 11:48:06 by m2html © 2005