
dominant_invariant_subspace

PURPOSE

Returns an orthonormal basis of the dominant invariant p-subspace of A.

SYNOPSIS

function [X, info] = dominant_invariant_subspace(A, p)

DESCRIPTION

 Returns an orthonormal basis of the dominant invariant p-subspace of A.

 function [X, info] = dominant_invariant_subspace(A, p)

 Input: A real, symmetric matrix A of size nxn and an integer p < n.
 Output: A real, orthonormal matrix X of size nxp such that trace(X'*A*X)
         is maximized. That is, the columns of X form an orthonormal basis
         of a dominant subspace of dimension p of A. These are thus
         eigenvectors associated with the largest eigenvalues of A (in no
         particular order). Sign is important: 2 is deemed a larger
         eigenvalue than -5.

 The optimization is performed on the Grassmann manifold, since only the
 space spanned by the columns of X matters. The implementation is short to
 show how Manopt can be used to quickly obtain a prototype. To make the
 implementation more efficient, one might first try to use the caching
 system, that is, use the optional 'store' arguments in the cost, grad and
 hess functions. Furthermore, using egrad2rgrad and ehess2rhess is quick
 and easy, but not always efficient. Having a look at the formulas
 implemented in these functions can help rewrite the code without them,
 possibly more efficiently.
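
 As an illustration, here is a minimal sketch of what the caching could
 look like, assuming Manopt's optional 'store' structure is passed as a
 second argument to the cost and gradient (this rewrite is a sketch and
 is not part of the file documented here):

     % Sketch only: cache the product A*X in the store so that the cost
     % and the gradient share it instead of each recomputing it.
     problem.cost = @cost;
     function [f, store] = cost(X, store)
         if ~isfield(store, 'AX')
             store.AX = A*X;    % computed once per point X
         end
         f = -trace(X'*store.AX);
     end
     problem.grad = @grad;
     function [g, store] = grad(X, store)
         if ~isfield(store, 'AX')
             store.AX = A*X;    % reuse the cached product when available
         end
         g = -2*Gr.egrad2rgrad(X, store.AX);
     end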

 See also: dominant_invariant_subspace_complex

CROSS-REFERENCE INFORMATION

This function calls: (none)
This function is called by: (none)

SOURCE CODE

function [X, info] = dominant_invariant_subspace(A, p)
% Returns an orthonormal basis of the dominant invariant p-subspace of A.
%
% function [X, info] = dominant_invariant_subspace(A, p)
%
% Input: A real, symmetric matrix A of size nxn and an integer p < n.
% Output: A real, orthonormal matrix X of size nxp such that trace(X'*A*X)
%         is maximized. That is, the columns of X form an orthonormal basis
%         of a dominant subspace of dimension p of A. These are thus
%         eigenvectors associated with the largest eigenvalues of A (in no
%         particular order). Sign is important: 2 is deemed a larger
%         eigenvalue than -5.
%
% The optimization is performed on the Grassmann manifold, since only the
% space spanned by the columns of X matters. The implementation is short to
% show how Manopt can be used to quickly obtain a prototype. To make the
% implementation more efficient, one might first try to use the caching
% system, that is, use the optional 'store' arguments in the cost, grad and
% hess functions. Furthermore, using egrad2rgrad and ehess2rhess is quick
% and easy, but not always efficient. Having a look at the formulas
% implemented in these functions can help rewrite the code without them,
% possibly more efficiently.
%
% See also: dominant_invariant_subspace_complex

% This file is part of Manopt and is copyrighted. See the license file.
%
% Main author: Nicolas Boumal, July 5, 2013
% Contributors:
%
% Change log:
%
%   NB Dec. 6, 2013:
%       We specify a max and initial trust region radius in the options.

    % Generate some random data to test the function
    if ~exist('A', 'var') || isempty(A)
        A = randn(128);
        A = (A+A')/2;
    end
    if ~exist('p', 'var') || isempty(p)
        p = 3;
    end

    % Make sure the input matrix is square and symmetric
    n = size(A, 1);
    assert(isreal(A), 'A must be real.');
    assert(size(A, 2) == n, 'A must be square.');
    assert(norm(A-A', 'fro') < n*eps, 'A must be symmetric.');
    assert(p <= n, 'p must not exceed n.');

    % Define the cost and its derivatives on the Grassmann manifold
    Gr = grassmannfactory(n, p);
    problem.M = Gr;
    problem.cost = @(X)    -trace(X'*A*X);
    problem.grad = @(X)    -2*Gr.egrad2rgrad(X, A*X);
    problem.hess = @(X, H) -2*Gr.ehess2rhess(X, A*X, A*H, H);
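    % (For reference: with f(X) = -trace(X'*A*X) and A symmetric, the
    % Euclidean gradient is -2*A*X and the Euclidean Hessian along a
    % tangent direction H is -2*A*H; egrad2rgrad and ehess2rhess turn
    % these into their Riemannian counterparts on the Grassmann manifold.)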

    % Execute some checks on the derivatives for early debugging.
    % These can be commented out.
    % checkgradient(problem);
    % pause;
    % checkhessian(problem);
    % pause;

    % Issue a call to a solver. A random initial guess will be chosen and
    % default options are selected except for the ones we specify here.
    options.Delta_bar = 8*sqrt(p);
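    % (Delta_bar is the maximum trust-region radius. The sqrt(p) scaling
    % is natural here since an orthonormal n-by-p matrix has Frobenius
    % norm sqrt(p), so steps are measured on that scale.)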
    [X, costX, info, options] = trustregions(problem, [], options); %#ok<ASGLU>

    fprintf('Options used:\n');
    disp(options);

    % For our information, Manopt can also compute the spectrum of the
    % Riemannian Hessian on the tangent space at (any) X. Computing the
    % spectrum at the solution gives us some idea of the conditioning of
    % the problem. If we were to implement a preconditioner for the
    % Hessian, this would also inform us on its performance.
    %
    % Notice that (typically) all eigenvalues of the Hessian at the
    % solution are positive, i.e., we find an isolated minimizer. If we
    % replace the Grassmann manifold by the Stiefel manifold, hence still
    % optimizing over orthonormal matrices but ignoring the invariance
    % cost(XQ) = cost(X) for all Q orthogonal, then we see
    % dim O(p) = p(p-1)/2 zero eigenvalues in the Hessian spectrum, making
    % the optimizer not isolated anymore.
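    % (For reference: the Grassmann manifold Gr(n, p) has dimension
    % p*(n-p), which is the number of eigenvalues hessianspectrum returns.)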
    if Gr.dim() < 512
        evs = hessianspectrum(problem, X);
        stairs(sort(evs));
        title(['Eigenvalues of the Hessian of the cost function ' ...
               'at the solution']);
        xlabel('Eigenvalue number (sorted)');
        ylabel('Value of the eigenvalue');
    end

end
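
A hypothetical usage sketch follows; it is not part of the file above. The
validation step compares against MATLAB's eigs with the 'largestreal'
option (available in recent releases):

    n = 100; p = 5;
    A = randn(n); A = (A + A')/2;            % random symmetric test matrix
    X = dominant_invariant_subspace(A, p);   % orthonormal n-by-p basis
    % The dominant subspace is unique when the p-th and (p+1)-st
    % eigenvalues differ, in which case the orthogonal projectors onto
    % both computed subspaces should match.
    V = eigs(A, p, 'largestreal');
    fprintf('Projector mismatch: %g\n', norm(X*X' - V*V', 'fro'));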
