Low-rank matrices

August 5, 2014 — September 29, 2023

feature construction
functional analysis
high d
linear algebra
networks
probability
signal processing
sparser than thou
statistics

Assumed audience:

People with undergrad linear algebra


Here is a useful form that some matrix might possess: \[\mathrm{K}= \mathrm{Z} \mathrm{Z}^{H}\] where \(\mathrm{K}\in\mathbb{R}^{N\times N}\), \(\mathrm{Z}\in\mathbb{R}^{N\times D}\) with \(D\ll N\). Such matrices are clearly Hermitian, and arise in, e.g., covariance estimation. I write \(\mathrm{Z}^{H}\) for the conjugate transpose of \(\mathrm{Z}\), and I use that here because sometimes I want to think of \(\mathrm{Z}\) as a complex matrix, and last I checked, most of the results here generalised to that case easily under conjugate transposition. YMMV.

We call matrices of this form low-rank. Their cousins, the low-rank-plus-diagonal matrices, are also useful.
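For concreteness, here is a tiny numpy sketch of such a matrix; the sizes, the seed, and the variable names are all arbitrary choices of mine.

```python
import numpy as np

rng = np.random.default_rng(0)
N, D = 100, 5

# A (complex) low-rank factor Z, and the Hermitian matrix K = Z Z^H it induces.
Z = rng.standard_normal((N, D)) + 1j * rng.standard_normal((N, D))
K = Z @ Z.conj().T

print(np.linalg.matrix_rank(K))    # 5 == D, i.e. far less than N
print(np.allclose(K, K.conj().T))  # True: K is Hermitian
```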

Here are some minor results about them that I needed to write down somewhere.

1 Pseudo-inverses

Since \(D\ll N\), the matrices in question here are trivially singular and so have no inverses. But they might have a pseudo-inverse or some other kind of generalised inverse.

Consider the Moore-Penrose pseudo-inverse of \(\mathrm{K}\), which we write \(\mathrm{K}^+\). The famous way of constructing it for general \(\mathrm{K}\) is by taking an SVD, \(\mathrm{K}=\mathrm{U}_{\mathrm{K}}\mathrm{S}_{\mathrm{K}}\mathrm{V}_{\mathrm{K}}^{H}\), where \(\mathrm{U}_{\mathrm{K}}\) and \(\mathrm{V}_{\mathrm{K}}\) are unitary and \(\mathrm{S}_{\mathrm{K}}\) is diagonal. Then we define \(\mathrm{S}_{\mathrm{K}}^+\), the pseudo-inverse of the diagonal matrix of singular values, by replacing each non-zero entry with its reciprocal and leaving the zero entries at 0. We have sneakily decided that the pseudo-inverse of a diagonal matrix is easy. This turns out to do the right thing, if you check it, and it does not even sound crazy, but it is not, to me at least, totally obvious.

Next, the pseudo-inverse of the whole thing is \(\mathrm{K}^+=\mathrm{V}_{\mathrm{K}}\mathrm{S}_{\mathrm{K}}^+\mathrm{U}_{\mathrm{K}}^{H}\), we claim. If we check the object we create by this procedure, we discover that it satisfies the right properties.
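Here is that recipe as a minimal numpy sketch, assuming we threshold near-zero singular values by hand; the tolerance is an arbitrary choice of mine, and in practice one would just call `np.linalg.pinv`.

```python
import numpy as np

rng = np.random.default_rng(1)
N, D = 50, 4
Z = rng.standard_normal((N, D))
K = Z @ Z.T  # rank D, hence singular for D < N

# Pseudo-inverse by the SVD recipe: reciprocate the non-negligible singular values.
U, s, Vh = np.linalg.svd(K)
s_plus = np.zeros_like(s)
nonzero = s > 1e-10 * s.max()
s_plus[nonzero] = 1.0 / s[nonzero]
K_plus = Vh.T @ np.diag(s_plus) @ U.T

print(np.allclose(K_plus, np.linalg.pinv(K)))  # True
```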

Can we construct this pseudo-inverse specifically for a low-rank matrix? Let’s try taking the SVD of the low-rank factor and see what happens. Let \(\mathrm{Z}=\mathrm{U}_{\mathrm{Z}}\mathrm{S}_{\mathrm{Z}}\mathrm{V}_{\mathrm{Z}}^{H}\) be the SVD of \(\mathrm{Z}\), so that \(\mathrm{U}_{\mathrm{Z}}\) and \(\mathrm{V}_{\mathrm{Z}}\) are unitary and \(\mathrm{S}_{\mathrm{Z}}\) is diagonal. Then \(\mathrm{K}=\mathrm{Z}\mathrm{Z}^{H}=\mathrm{U}_{\mathrm{Z}}\mathrm{S}_{\mathrm{Z}}\mathrm{V}_{\mathrm{Z}}^{H}\mathrm{V}_{\mathrm{Z}}\mathrm{S}_{\mathrm{Z}}\mathrm{U}_{\mathrm{Z}}^{H}=\mathrm{U}_{\mathrm{Z}}\mathrm{S}_{\mathrm{Z}}\mathrm{S}_{\mathrm{Z}}\mathrm{U}_{\mathrm{Z}}^{H}\).

Next, the pseudo-inverse of \(\mathrm{Z}\) is \(\mathrm{Z}^+=\mathrm{V}_{\mathrm{Z}}\mathrm{S}_{\mathrm{Z}}^+\mathrm{U}_{\mathrm{Z}}^{H}\), by the same SVD construction.

Checking the matrix cookbook (Petersen and Pedersen 2012), we see that \[\begin{aligned} (\mathrm{Z}\mathrm{Z}^{H})^+ &=(\mathrm{Z}^{+})^{H}\mathrm{Z}^{+} \end{aligned}\] so \[\begin{aligned} (\mathrm{Z}^{+})^{H}\mathrm{Z}^+ &=(\mathrm{V}_{\mathrm{Z}}\mathrm{S}_{\mathrm{Z}}^+\mathrm{U}_{\mathrm{Z}}^{H})^{H}(\mathrm{V}_{\mathrm{Z}}\mathrm{S}_{\mathrm{Z}}^+\mathrm{U}_{\mathrm{Z}}^{H})\\ &=\mathrm{U}_{\mathrm{Z}}\mathrm{S}_{\mathrm{Z}}^+\mathrm{V}_{\mathrm{Z}}^{H}\mathrm{V}_{\mathrm{Z}}\mathrm{S}_{\mathrm{Z}}^+\mathrm{U}_{\mathrm{Z}}^{H}\\ &=\mathrm{U}_{\mathrm{Z}}\mathrm{S}_{\mathrm{Z}}^+\mathrm{S}_{\mathrm{Z}}^+\mathrm{U}_{\mathrm{Z}}^{H}. \end{aligned}\] It looks like we should be taking \(\mathrm{U}_{\mathrm{Z}}\mathrm{S}_{\mathrm{Z}}^+\) to be the low-rank factor of the pseudo-inverse of \(\mathrm{K}\), right?
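In code this means we never need to decompose, or even form, the \(N\times N\) matrix; a sketch, assuming \(\mathrm{Z}\) has full column rank so that no singular value needs truncating:

```python
import numpy as np

rng = np.random.default_rng(2)
N, D = 200, 3
Z = rng.standard_normal((N, D))
K = Z @ Z.T

# Thin SVD of the factor only: O(N D^2) work instead of an N x N decomposition.
U_Z, s_Z, _ = np.linalg.svd(Z, full_matrices=False)
W = U_Z / s_Z  # columns scaled by 1/s, i.e. U_Z S_Z^+, shape (N, D)

# W W^H is the pseudo-inverse of K = Z Z^H.
print(np.allclose(W @ W.T, np.linalg.pinv(K)))  # True
```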

We might want to check that the desired pseudo-inverse properties hold. Recall, the Moore-Penrose pseudo inverse of a matrix \(\mathrm{K}\) is the matrix \(\mathrm{K}^{+}\) that fulfils

  1. \(\mathrm{K}\mathrm{K}^{+} \mathrm{K}=\mathrm{K}\)
  2. \(\mathrm{K}^{+} \mathrm{K} \mathrm{K}^{+}=\mathrm{K}^{+}\)
  3. \(\mathrm{K}\mathrm{K}^{+}\) Hermitian
  4. \(\mathrm{K}^{+} \mathrm{K}\) Hermitian

The last two are immediate. We might want to check the first two; let us consider the first one by way of example. Grinding through the various pseudo-inverse rules is tedious, so let us instead use the constructive forms from above, \(\mathrm{K}=\mathrm{U}_{\mathrm{Z}}\mathrm{S}_{\mathrm{Z}}\mathrm{S}_{\mathrm{Z}}\mathrm{U}_{\mathrm{Z}}^{H}\) and \(\mathrm{K}^+=\mathrm{U}_{\mathrm{Z}}\mathrm{S}_{\mathrm{Z}}^+\mathrm{S}_{\mathrm{Z}}^+\mathrm{U}_{\mathrm{Z}}^{H}\): \[\begin{aligned} \mathrm{K}\mathrm{K}^{+} \mathrm{K} &=\mathrm{U}_{\mathrm{Z}}\mathrm{S}_{\mathrm{Z}}\mathrm{S}_{\mathrm{Z}}\mathrm{U}_{\mathrm{Z}}^{H} \mathrm{U}_{\mathrm{Z}}\mathrm{S}_{\mathrm{Z}}^+\mathrm{S}_{\mathrm{Z}}^+\mathrm{U}_{\mathrm{Z}}^{H} \mathrm{K}\\ &=\mathrm{U}_{\mathrm{Z}}\mathrm{S}_{\mathrm{Z}}\mathrm{S}_{\mathrm{Z}} \mathrm{S}_{\mathrm{Z}}^+\mathrm{S}_{\mathrm{Z}}^+\mathrm{U}_{\mathrm{Z}}^{H} \mathrm{K}\\ &=\mathrm{U}_{\mathrm{Z}}\mathrm{I}_{\mathrm{Z}}\mathrm{U}_{\mathrm{Z}}^{H} \mathrm{K}\\ &=\mathrm{K}. \end{aligned}\] Here \(\mathrm{I}_{\mathrm{Z}}:=\mathrm{S}_{\mathrm{Z}}\mathrm{S}_{\mathrm{Z}}\mathrm{S}_{\mathrm{Z}}^+\mathrm{S}_{\mathrm{Z}}^+\) is a diagonal matrix with a 1 wherever \(\mathrm{Z}\) has a non-zero singular value, i.e. with as many non-zero entries as the rank of \(\mathrm{Z}\). That does not poison the equality, because \(\mathrm{U}_{\mathrm{Z}}\mathrm{I}_{\mathrm{Z}}\mathrm{U}_{\mathrm{Z}}^{H}\) is the orthogonal projector onto the column space of \(\mathrm{Z}\), which is also the column space of \(\mathrm{K}\), so multiplying \(\mathrm{K}\) by it changes nothing.

Anyway, this sloppy reasoning should encourage us to believe we have done nothing too silly here. I presume a proof for property 2 would be similar, but I have not actually done it. (Homework problem).
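In lieu of that homework, here is a lazy numerical check of all four properties on a real example (where Hermitian just means symmetric); it is evidence, not a proof.

```python
import numpy as np

rng = np.random.default_rng(3)
N, D = 80, 4
Z = rng.standard_normal((N, D))
K = Z @ Z.T

U_Z, s_Z, _ = np.linalg.svd(Z, full_matrices=False)
K_plus = (U_Z / s_Z**2) @ U_Z.T  # U_Z S_Z^+ S_Z^+ U_Z^H

print(np.allclose(K @ K_plus @ K, K))            # 1. K K+ K = K
print(np.allclose(K_plus @ K @ K_plus, K_plus))  # 2. K+ K K+ = K+
print(np.allclose(K @ K_plus, (K @ K_plus).T))   # 3. K K+ Hermitian
print(np.allclose(K_plus @ K, (K_plus @ K).T))   # 4. K+ K Hermitian
```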

Is this pseudo-inverse low-rank, though? Looks like it. In particular, we know that \(\mathrm{Z}\) has only \(D\) columns, and so \(\mathrm{S}_{\mathrm{Z}}\) has at most \(D\) non-zero entries. So we can take the thin SVD and know that it will preserve at most \(D\) columns, which is to say that we may as well take \(\mathrm{S}_{\mathrm{Z}}^+\) to be \(D\times D\), \(\mathrm{U}_{\mathrm{Z}}\) to be \(N\times D\), and the low-rank factor \(\mathrm{U}_{\mathrm{Z}}\mathrm{S}_{\mathrm{Z}}^+\) to be \(N\times D\).

Bonus detail: The SVD is not necessarily unique, even the reduced SVD, if there are singular values that are repeated. I think that for my purposes this is OK to ignore, but noting it here in anticipation of weird failure modes in the future.

tl;dr: pseudo-inverses of low-rank matrices are themselves low-rank, and may be found via an SVD of the low-rank factor.

Was that not exhausting? Let us state the following pithy facts from Searle (2014):

The matrix \(\mathrm{X}^{H} \mathrm{X}\) plays an important role in statistics, usually involving a generalized inverse thereof, which has several useful properties. Thus, for \(\mathrm{G}\) satisfying \[ \mathrm{X}^{H} \mathrm{X G X}^{H} \mathrm{X}=\mathrm{X}^{H} \mathrm{X}, \] \(\mathrm{G}^{H}\) is also a generalized inverse of \(\mathrm{X}^{H} \mathrm{X}\) (and \(\mathrm{G}\) is not necessarily symmetric). Also,

  1. \(\mathrm{XGX}^{H} \mathrm{X}=\mathrm{X}\);
  2. \(\mathrm{X G X}^{H}\) is invariant to \(\mathrm{G}\);
  3. \(\mathrm{XGX}^{H}\) is symmetric, whether or not \(\mathrm{G}\) is;
  4. \(\mathrm{X G X}^{H}=\mathrm{X X}^{+}\) for \(\mathrm{X}^{+}\) being the Moore-Penrose inverse of \(\mathrm{X}\).

Further, Searle constructs the low-rank pseudo-inverse as \[ (\mathrm{X} \mathrm{X}^{H})^+ =\mathrm{X}\left(\mathrm{X}^{H} \mathrm{X}\right)^{-2} \mathrm{X}^{H}=\mathrm{Y} \mathrm{Y}^{H} \] for \(\mathrm{Y}:=\mathrm{X}\left(\mathrm{X}^{H} \mathrm{X}\right)^{-1}\).
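That identity is also easy to check numerically, assuming the factor has full column rank so that \(\mathrm{X}^{H}\mathrm{X}\) really is invertible; a sketch:

```python
import numpy as np

rng = np.random.default_rng(4)
N, D = 60, 5
X = rng.standard_normal((N, D))  # full column rank (almost surely)

G = np.linalg.inv(X.T @ X)       # (X^H X)^{-1}, a valid generalized inverse here
Y = X @ G                        # Searle's Y = X (X^H X)^{-1}

print(np.allclose(Y @ Y.T, np.linalg.pinv(X @ X.T)))    # (X X^H)^+ = Y Y^H
print(np.allclose(X @ G @ X.T, X @ np.linalg.pinv(X)))  # item 4 above: X G X^H = X X^+
```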

2 Distances

2.1 Frobenius

Suppose we want to measure the Frobenius distance between \(\mathrm{U}\mathrm{U}^{H}\) and \(\mathrm{R}\mathrm{R}^{H}\). We recall that we might expect things to be nice if they are exactly low-rank because, e.g. \[ \begin{aligned} \|\mathrm{U}\mathrm{U}^{H}\|_F^2 =\operatorname{tr}\left(\mathrm{U}\mathrm{U}^{H}\mathrm{U}\mathrm{U}^{H}\right) =\|\mathrm{U}^{H}\mathrm{U}\|_F^2. \end{aligned} \] Indeed things are nice, and the answer may be found without forming the full matrices. The difference \(\mathrm{U}\mathrm{U}^{H}-\mathrm{R}\mathrm{R}^{H}\) is Hermitian, so its squared Frobenius norm is the trace of its square: \[ \begin{aligned} \|\mathrm{U}\mathrm{U}^{H}-\mathrm{R}\mathrm{R}^{H}\|_F^2 &=\operatorname{Tr}\left(\left(\mathrm{U}\mathrm{U}^{H} -\mathrm{R}\mathrm{R}^{H}\right)\left(\mathrm{U}\mathrm{U}^{H} -\mathrm{R}\mathrm{R}^{H}\right)\right)\\ &=\operatorname{Tr}\left(\mathrm{U}\mathrm{U}^{H}\mathrm{U}\mathrm{U}^{H}\right) -2\operatorname{Tr}\left(\mathrm{U}\mathrm{U}^{H}\mathrm{R}\mathrm{R}^{H}\right) + \operatorname{Tr}\left(\mathrm{R}\mathrm{R}^{H}\mathrm{R}\mathrm{R}^{H}\right)\\ &=\left\|\mathrm{U}^{H}\mathrm{U}\right\|^2_F -2\left\|\mathrm{U}^{H}\mathrm{R}\right\|^2_F + \left\|\mathrm{R}^{H}\mathrm{R}\right\|^2_F, \end{aligned} \] and every term on the right involves only small Gram matrices. (For real factors there is also a cute stacking argument: \(\begin{bmatrix} \mathrm{U} &i\mathrm{R}\end{bmatrix}\begin{bmatrix} \mathrm{U} &i\mathrm{R}\end{bmatrix}^{\top}=\mathrm{U}\mathrm{U}^{\top}-\mathrm{R}\mathrm{R}^{\top}\), so the difference is itself the product of a concatenated factor with its transpose, albeit a complex one; with the conjugate transpose the sign flip from the \(i\) would be lost, which is why I avoid it here.)
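In code, the upshot is that the distance needs only the small cross-Gram matrices, never the \(N\times N\) outer products; a numpy sketch, where the helper name is my own invention:

```python
import numpy as np

rng = np.random.default_rng(6)
N, D1, D2 = 500, 4, 6
U = rng.standard_normal((N, D1))
R = rng.standard_normal((N, D2))

def lowrank_frob_dist2(U, R):
    """||U U^H - R R^H||_F^2 using only D1 x D1, D1 x D2 and D2 x D2 Gram matrices."""
    return (np.linalg.norm(U.T @ U, "fro") ** 2
            - 2 * np.linalg.norm(U.T @ R, "fro") ** 2
            + np.linalg.norm(R.T @ R, "fro") ** 2)

# Compare against the naive version that forms the full N x N matrices.
direct = np.linalg.norm(U @ U.T - R @ R.T, "fro") ** 2
print(np.allclose(lowrank_frob_dist2(U, R), direct))  # True
```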

3 References

Akimoto. 2017. Fast Eigen Decomposition for Low-Rank Matrix Approximation.”
Babacan, Luessi, Molina, et al. 2012. Sparse Bayesian Methods for Low-Rank Matrix Estimation.” IEEE Transactions on Signal Processing.
Bach, Francis R. 2013. Sharp Analysis of Low-Rank Kernel Matrix Approximations. In COLT.
Bach, C, Ceglia, Song, et al. 2019. Randomized Low-Rank Approximation Methods for Projection-Based Model Order Reduction of Large Nonlinear Dynamical Problems.” International Journal for Numerical Methods in Engineering.
Brand. 2006. Fast Low-Rank Modifications of the Thin Singular Value Decomposition.” Linear Algebra and Its Applications, Special Issue on Large Scale Linear and Nonlinear Eigenvalue Problems.
Chen, and Chi. 2018. Harnessing Structures in Big Data via Guaranteed Low-Rank Matrix Estimation: Recent Theory and Fast Algorithms via Convex and Nonconvex Optimization.” IEEE Signal Processing Magazine.
Chi, Lu, and Chen. 2019. Nonconvex Optimization Meets Low-Rank Matrix Factorization: An Overview.” IEEE Transactions on Signal Processing.
Chow, and Saad. 1997. Approximate Inverse Techniques for Block-Partitioned Matrices.” SIAM Journal on Scientific Computing.
Chung, and Chung. 2014. An Efficient Approach for Computing Optimal Low-Rank Regularized Inverse Matrices.” Inverse Problems.
Cichocki, Lee, Oseledets, et al. 2016. Low-Rank Tensor Networks for Dimensionality Reduction and Large-Scale Optimization Problems: Perspectives and Challenges PART 1.” arXiv:1609.00893 [Cs].
Fasi, Higham, and Liu. 2023. Computing the Square Root of a Low-Rank Perturbation of the Scaled Identity Matrix.” SIAM Journal on Matrix Analysis and Applications.
Gross. 2011. Recovering Low-Rank Matrices From Few Coefficients in Any Basis.” IEEE Transactions on Information Theory.
Hager. 1989. Updating the Inverse of a Matrix.” SIAM Review.
Harbrecht, Peters, and Schneider. 2012. On the Low-Rank Approximation by the Pivoted Cholesky Decomposition.” Applied Numerical Mathematics, Third Chilean Workshop on Numerical Analysis of Partial Differential Equations (WONAPDE 2010),.
Hastie, Mazumder, Lee, et al. 2015. Matrix Completion and Low-Rank SVD via Fast Alternating Least Squares.” In Journal of Machine Learning Research.
Kannan. 2016. Scalable and Distributed Constrained Low Rank Approximations.”
Khan. 2008. Updating Inverse of a Matrix When a Column Is Added/Removed.”
Kumar, and Shneider. 2016. Literature Survey on Low Rank Approximation of Matrices.” arXiv:1606.06511 [Cs, Math].
Liberty, Woolfe, Martinsson, et al. 2007. Randomized Algorithms for the Low-Rank Approximation of Matrices.” Proceedings of the National Academy of Sciences.
Lin. 2016. A Review on Low-Rank Models in Signal and Data Analysis.”
Nakatsukasa. 2019. The Low-Rank Eigenvalue Problem.”
Nowak, and Litvinenko. 2013. Kriging and Spatial Design Accelerated by Orders of Magnitude: Combining Low-Rank Covariance Approximations with FFT-Techniques.” Mathematical Geosciences.
Petersen, and Pedersen. 2012. The Matrix Cookbook.”
Saad. 2003. Iterative Methods for Sparse Linear Systems: Second Edition.
Saul. 2023. A Geometrical Connection Between Sparse and Low-Rank Matrices and Its Application to Manifold Learning.” Transactions on Machine Learning Research.
Searle. 2014. Matrix Algebra.” In Wiley StatsRef: Statistics Reference Online.
Searle, and Khuri. 2017. Matrix Algebra Useful for Statistics.
Seeger, ed. 2004. Low Rank Updates for the Cholesky Decomposition.
Seshadhri, Sharma, Stolman, et al. 2020. The Impossibility of Low-Rank Representations for Triangle-Rich Complex Networks.” Proceedings of the National Academy of Sciences.
Shi, Zheng, and Yang. 2017. Survey on Probabilistic Models of Low-Rank Matrix Factorizations.” Entropy.
Spantini, Cui, Willcox, et al. 2017. Goal-Oriented Optimal Approximations of Bayesian Linear Inverse Problems.” SIAM Journal on Scientific Computing.
Spantini, Solonen, Cui, et al. 2015. Optimal Low-Rank Approximations of Bayesian Linear Inverse Problems.” SIAM Journal on Scientific Computing.
Sundin. 2016. “Bayesian Methods for Sparse and Low-Rank Matrix Problems.”
Tropp, Yurtsever, Udell, et al. 2016. Randomized Single-View Algorithms for Low-Rank Matrix Approximation.” arXiv:1609.00048 [Cs, Math, Stat].
———, et al. 2017. Practical Sketching Algorithms for Low-Rank Matrix Approximation.” SIAM Journal on Matrix Analysis and Applications.
Udell, and Townsend. 2019. Why Are Big Data Matrices Approximately Low Rank?” SIAM Journal on Mathematics of Data Science.
Woolfe, Liberty, Rokhlin, et al. 2008. A Fast Randomized Algorithm for the Approximation of Matrices.” Applied and Computational Harmonic Analysis.
Yang, Fang, Duan, et al. 2018. Fast Low-Rank Bayesian Matrix Completion with Hierarchical Gaussian Prior Models.” IEEE Transactions on Signal Processing.
Yin, Gao, and Lin. 2016. Laplacian Regularized Low-Rank Representation and Its Applications.” IEEE Transactions on Pattern Analysis and Machine Intelligence.
Zhang, Wang, and Gu. 2017. Stochastic Variance-Reduced Gradient Descent for Low-Rank Matrix Recovery from Linear Measurements.” arXiv:1701.00481 [Stat].
Zhou, and Tao. 2011. GoDec: Randomized Low-Rank & Sparse Matrix Decomposition in Noisy Case.” In Proceedings of the 28th International Conference on International Conference on Machine Learning. ICML’11.