Regarding Linear Isometries

Tags: linear algebra, definite matrix, machine learning, JL lemma, ads, explainer

There is no linear isometry from \(\displaystyle \mathbb{R}^{d}\) to \(\displaystyle \mathbb{R}^{k}\) when \(\displaystyle d >k\). Equivalently, there is no matrix \(\displaystyle R\) of dimension \(\displaystyle k\times d\), with \(\displaystyle d >k\), such that \(\displaystyle ||Rx||^{2} =||x||^{2}\) for all \(\displaystyle x\in \mathbb{R}^{d}\).

Proof-1

The idea for this proof was suggested by my colleague, Vivek Sivaramakrishnan. Since \(\displaystyle R\) is \(\displaystyle k\times d\) with \(\displaystyle k< d\), rank–nullity gives \(\displaystyle \text{nullity}( R) =d-\text{rank}( R) \geqslant d-k >0\). That is, there is some non-zero vector \(\displaystyle x\in \mathbb{R}^{d}\) such that \(\displaystyle Rx=0\). For this vector, \(\displaystyle ||Rx||^{2} =0\) while \(\displaystyle ||x||^{2} \neq 0\). Hence \(\displaystyle R\) cannot be a norm-preserving map. A quick numerical check of this argument follows.
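Here is a minimal numpy sketch of the argument (the dimensions \(d = 5\), \(k = 3\) and the random matrix are arbitrary choices of mine, not part of the proof):

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 5, 3
R = rng.standard_normal((k, d))  # an arbitrary k x d matrix with k < d

# The right-singular vectors of R beyond its rank span its null space,
# which is non-trivial because d > k.
_, _, Vt = np.linalg.svd(R)
x = Vt[-1]  # a unit vector with Rx = 0

print(np.linalg.norm(R @ x) ** 2)  # ~0.0: ||Rx||^2 = 0
print(np.linalg.norm(x) ** 2)      # 1.0: but ||x||^2 = 1
```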

Proof-2

Here is a slightly more circuitous proof, by contradiction. Suppose such a norm-preserving linear map \(\displaystyle R\) from \(\displaystyle \mathbb{R}^{d}\) to \(\displaystyle \mathbb{R}^{k}\) exists. Since \(\displaystyle R\) preserves norms, for any vector \(\displaystyle x\in \mathbb{R}^{d}\), we have:

\[ \begin{equation*} \begin{aligned} ||Rx||^{2} & =||x||^{2} \end{aligned} \end{equation*} \]

Using the fact that \(\displaystyle ||Rx||^{2} =( Rx)^{T}( Rx)\), we have:

\[ \begin{equation*} \begin{aligned} ||x||^{2} & =x^{T}\left( R^{T} R\right) x \end{aligned} \end{equation*} \]

Call \(\displaystyle M=R^{T} R\). The above equation then becomes:

\[ \begin{equation*} ||x||^{2} =x^{T} Mx,\ \forall x\in \mathbb{R}^{d} \end{equation*} \tag{1}\]
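Note that the step from \(||Rx||^{2}\) to \(x^{T} Mx\) is a pure algebraic identity that holds for any \(R\); only the equality with \(||x||^{2}\) uses the norm-preservation assumption we will contradict. A brief numpy check of the identity (same arbitrary \(d = 5\), \(k = 3\) setup as above):

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 5, 3
R = rng.standard_normal((k, d))
M = R.T @ R                        # M = R^T R
x = rng.standard_normal(d)

# ||Rx||^2 equals x^T M x for any R and x; Equation 1 additionally
# demands this equals ||x||^2, which is what we will contradict.
print(np.linalg.norm(R @ x) ** 2)  # these two printed values agree
print(x @ M @ x)
```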

Let the rank of \(\displaystyle R\) be \(\displaystyle r\). The rank of \(M\) is the same as the rank of \(R\): if \(Mx = 0\), then \(x^{T} Mx = ||Rx||^{2} = 0\), so \(M\) and \(R\) have the same null space and hence the same rank. Therefore, \(\text{rank}(M) = r\), and importantly, \(r \leqslant k < d\).

Also note that \(\displaystyle M\) is a symmetric, positive semi-definite matrix of shape \(\displaystyle d\times d\), so it has a full set of \(\displaystyle d\) real, non-negative eigenvalues:

\[ \displaystyle \lambda _{1} \geqslant \cdots \geqslant \lambda _{r} >\lambda _{r+1} =\cdots =\lambda _{d} =0 \]
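With the same arbitrary random \(R\) as before (for which \(r = k = 3\) almost surely), numpy's symmetric eigensolver makes this spectrum visible:

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 5, 3
R = rng.standard_normal((k, d))
M = R.T @ R

# eigvalsh returns the eigenvalues of a symmetric matrix in ascending order.
print(np.linalg.eigvalsh(M))  # first d - k = 2 entries are ~0, rest positive
```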

Consider one of the zero eigenvalues with corresponding eigenvector \(\displaystyle v\). Then:

\[ \begin{equation*} \begin{aligned} v^{T} Mv & =v^{T}( Mv)\\ & =v^{T}( 0v)\\ & =0 \end{aligned} \end{equation*} \]

However, from Equation 1, \(\displaystyle v^{T} Mv=||v||^{2}\), and \(\displaystyle ||v||^{2}\) cannot be zero, since eigenvectors are non-zero by definition. This contradiction completes the proof. The sketch below exhibits it numerically.
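To see the contradiction concretely (again with my arbitrary \(d = 5\), \(k = 3\) example; any \(R\) with \(k < d\) would do):

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 5, 3
R = rng.standard_normal((k, d))
M = R.T @ R

# eigh returns ascending eigenvalues with matching orthonormal eigenvectors.
eigvals, eigvecs = np.linalg.eigh(M)
v = eigvecs[:, 0]              # eigenvector for a zero eigenvalue of M

print(v @ M @ v)               # ~0.0: v^T M v = 0 ...
print(np.linalg.norm(v) ** 2)  # 1.0: ... yet Equation 1 says these are equal
```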