Introduction
Principal Component Analysis (PCA) is a classic machine learning algorithm that applies lossy compression to raw data (e.g., for storage or training) and reconstructs an approximation of the original data from the compressed representation when necessary. It can be derived straightforwardly from the Eigendecomposition and Quadratic Optimization material covered in my last two blogs. In this blog, we introduce the problem solved by PCA and derive the algorithm by hand.
Problem Definition
Suppose we have a collection of data samples $\{\mathbf{x}^{(1)}, \mathbf{x}^{(2)}, \dots, \mathbf{x}^{(m)}\}$. Each sample $\mathbf{x}^{(i)} \in \mathbb{R}^n$ can be treated as a point in $n$-dimensional space, and the $j$-th element $x_j^{(i)}$ represents the value of some feature of the sample $\mathbf{x}^{(i)}$. All data points can be represented as a matrix $\mathbf{X} \in \mathbb{R}^{m \times n}$ whose $i$-th row is $(\mathbf{x}^{(i)})^\top$. In this case, the matrix may become very large as the number of features increases, making it difficult and time-consuming to process the raw data. What's worse, irrelevant and redundant features (i.e., features that are highly correlated) in a high-dimensional dataset provide little useful information for our analysis and may even introduce noise that leads to low accuracy.
To address this problem, the dimension of the data has to be reduced. One approach is to transform each data point $\mathbf{x}^{(i)} \in \mathbb{R}^n$ into a code vector $\mathbf{c}^{(i)} \in \mathbb{R}^l$, where $l < n$, and thus compress the matrix $\mathbf{X} \in \mathbb{R}^{m \times n}$ into a lower-dimensional matrix $\mathbf{C} \in \mathbb{R}^{m \times l}$. Therefore, we need to define an encoding function $f$ that produces the code for any input, $f(\mathbf{x}) = \mathbf{c}$. Besides, in order to reconstruct the original data when necessary, we also need a decoding function $g$ that produces the reconstructed input given its code, $g(f(\mathbf{x})) \approx \mathbf{x}$.
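To make the shapes concrete, here is a minimal NumPy sketch of the objects involved; the sizes $m$, $n$, $l$ are made up purely for illustration:

```python
import numpy as np

m, n, l = 100, 50, 5        # m samples, n features, l < n code dimensions (illustrative sizes)
X = np.random.randn(m, n)   # raw data matrix: one sample x^(i) per row

# The encoder f will map each x in R^n to a code c in R^l, so the dataset
# X (m x n) is compressed into a code matrix C (m x l); the decoder g maps
# a code back to R^n so that g(f(x)) approximates x.
print(X.shape)              # (100, 50)
```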
Derivations
In this section, we derive the PCA algorithm by finding the optimal decoding and encoding functions. We first specify the decoding function, since PCA is mainly defined by the choice of the decoder; then we use the decoder to derive the encoding function. Finally, we obtain the solution of PCA by solving a quadratic optimization problem.
Decoding Function
To simplify the computation, we define the decoder as a matrix $\mathbf{D} \in \mathbb{R}^{n \times l}$ and use matrix multiplication to map the code vector $\mathbf{c} \in \mathbb{R}^l$ back to $\mathbb{R}^n$. Hence, the decoding function is defined as follows:

$$g(\mathbf{c}) = \mathbf{D}\mathbf{c}.$$
In order to keep the compression problem easy, PCA constrains the columns of $\mathbf{D}$ to be orthogonal to each other. Besides, we constrain all the columns of $\mathbf{D}$ to have unit norm so that the decoding function has a unique solution. Together, these two constraints mean $\mathbf{D}^\top\mathbf{D} = \mathbf{I}_l$.
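As a quick illustration of this constraint, the sketch below builds a matrix with orthonormal columns (via a QR factorization, chosen only for illustration) and checks that $\mathbf{D}^\top\mathbf{D} = \mathbf{I}_l$:

```python
import numpy as np

n, l = 50, 5
# Any matrix with orthonormal columns satisfies the PCA constraint on D.
D, _ = np.linalg.qr(np.random.randn(n, l))   # D has shape (n, l)

# Columns are mutually orthogonal and have unit norm, i.e. D^T D = I_l.
print(np.allclose(D.T @ D, np.eye(l)))       # True
```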
Encoding Function
For any input data point $\mathbf{x}$, the optimal corresponding code vector $\mathbf{c}^*$ is the one whose decoding best reconstructs the original point $\mathbf{x}$. This implies that the difference between $\mathbf{x}$ and $g(\mathbf{c})$ must be as small as possible. In PCA, we use the squared $L^2$ norm to find the optimal $\mathbf{c}^*$:

$$\mathbf{c}^* = \arg\min_{\mathbf{c}} \left\| \mathbf{x} - g(\mathbf{c}) \right\|_2^2.$$
According to the definition of the squared $L^2$ norm, we obtain:

$$\left\| \mathbf{x} - g(\mathbf{c}) \right\|_2^2 = \left( \mathbf{x} - g(\mathbf{c}) \right)^\top \left( \mathbf{x} - g(\mathbf{c}) \right) = \mathbf{x}^\top\mathbf{x} - \mathbf{x}^\top g(\mathbf{c}) - g(\mathbf{c})^\top\mathbf{x} + g(\mathbf{c})^\top g(\mathbf{c}).$$
Note that $g(\mathbf{c})^\top\mathbf{x} = \mathbf{x}^\top g(\mathbf{c})$ since it is a scalar, and $g(\mathbf{c})^\top g(\mathbf{c}) = \mathbf{c}^\top\mathbf{D}^\top\mathbf{D}\mathbf{c} = \mathbf{c}^\top\mathbf{c}$ since the columns of $\mathbf{D}$ are orthogonal and have unit norm. We omit the first term $\mathbf{x}^\top\mathbf{x}$ because it does not depend on $\mathbf{c}$:

$$\mathbf{c}^* = \arg\min_{\mathbf{c}} \left( -2\mathbf{x}^\top\mathbf{D}\mathbf{c} + \mathbf{c}^\top\mathbf{c} \right).$$
The optimization problem above is a quadratic function of $\mathbf{c}$, which can be solved by vector calculus: set the gradient with respect to $\mathbf{c}$ to zero and solve:

$$\nabla_{\mathbf{c}} \left( -2\mathbf{x}^\top\mathbf{D}\mathbf{c} + \mathbf{c}^\top\mathbf{c} \right) = -2\mathbf{D}^\top\mathbf{x} + 2\mathbf{c} = \mathbf{0} \quad\Longrightarrow\quad \mathbf{c} = \mathbf{D}^\top\mathbf{x}.$$
Therefore, the encoding function is:

$$f(\mathbf{x}) = \mathbf{D}^\top\mathbf{x},$$

and the decoding (reconstruction) function is:

$$r(\mathbf{x}) = g\left( f(\mathbf{x}) \right) = \mathbf{D}\mathbf{D}^\top\mathbf{x}.$$
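With any such $\mathbf{D}$ fixed, the encoder and reconstruction derived above are plain matrix products. A minimal NumPy sketch (reusing a stand-in orthonormal $\mathbf{D}$ built as in the previous snippet):

```python
import numpy as np

n, l = 50, 5
D, _ = np.linalg.qr(np.random.randn(n, l))   # stand-in D with orthonormal columns

def encode(x, D):
    """f(x) = D^T x: the optimal code for x under this D."""
    return D.T @ x

def reconstruct(x, D):
    """r(x) = g(f(x)) = D D^T x: decode the code back into R^n."""
    return D @ encode(x, D)

x = np.random.randn(n)
c = encode(x, D)
x_hat = reconstruct(x, D)

# The derived code minimizes ||x - D c||^2: perturbing c can only increase the error.
err_opt = np.linalg.norm(x - D @ c)
err_other = np.linalg.norm(x - D @ (c + 0.1 * np.random.randn(l)))
print(err_opt <= err_other)   # True
```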
Solution of PCA
The solution of PCA is the encoding and decoding matrix $\mathbf{D}$. In order to find the optimal matrix $\mathbf{D}^*$, we minimize the distance between the inputs and their reconstructions over all data points:

$$\mathbf{D}^* = \arg\min_{\mathbf{D}} \left\| \mathbf{X} - \mathbf{X}\mathbf{D}\mathbf{D}^\top \right\|_F^2,$$
where $\mathbf{D}$ is subject to the constraint $\mathbf{D}^\top\mathbf{D} = \mathbf{I}_l$, and $\|\mathbf{A}\|_F^2$ denotes the squared Frobenius norm of the matrix $\mathbf{A}$:

$$\|\mathbf{A}\|_F^2 = \sum_{i}\sum_{j} A_{i,j}^2.$$
In order to solve the optimization problem above, we need to use the trace operator, which gives the sum of all the diagonal entries of a matrix:

$$\operatorname{Tr}(\mathbf{A}) = \sum_{i} A_{i,i}.$$
The trace operator has many useful properties (a quick numerical check follows the list):
- The trace operator provides an alternative way of writing the Frobenius norm of a matrix: $\|\mathbf{A}\|_F = \sqrt{\operatorname{Tr}\left( \mathbf{A}\mathbf{A}^\top \right)}$.
- The trace operator is invariant to the transpose operator: $\operatorname{Tr}(\mathbf{A}) = \operatorname{Tr}\left( \mathbf{A}^\top \right)$.
- The trace of a square matrix composed of many factors is also invariant to moving the last factor into the first position, if the shapes of the corresponding matrices allow the resulting product to be defined: $\operatorname{Tr}(\mathbf{A}\mathbf{B}\mathbf{C}) = \operatorname{Tr}(\mathbf{C}\mathbf{A}\mathbf{B}) = \operatorname{Tr}(\mathbf{B}\mathbf{C}\mathbf{A})$, or more generally, $\operatorname{Tr}\left( \prod_{i=1}^{k} \mathbf{F}^{(i)} \right) = \operatorname{Tr}\left( \mathbf{F}^{(k)} \prod_{i=1}^{k-1} \mathbf{F}^{(i)} \right)$. This can be called the cycle rule.
- A scalar is its own trace: $a = \operatorname{Tr}(a)$.
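These identities are easy to sanity-check numerically; a small NumPy snippet with random matrices whose shapes are chosen only so that the products are defined:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 6))
B = rng.standard_normal((6, 3))
C = rng.standard_normal((3, 4))

# Frobenius norm via the trace: ||A||_F = sqrt(Tr(A A^T))
print(np.isclose(np.linalg.norm(A, 'fro'), np.sqrt(np.trace(A @ A.T))))   # True

# Invariance to transposition (for a square matrix M): Tr(M) = Tr(M^T)
M = A @ A.T
print(np.isclose(np.trace(M), np.trace(M.T)))                             # True

# Cycle rule: Tr(ABC) = Tr(CAB) = Tr(BCA)
print(np.isclose(np.trace(A @ B @ C), np.trace(C @ A @ B)),
      np.isclose(np.trace(A @ B @ C), np.trace(B @ C @ A)))               # True True
```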
We now utilize the trace operator to simplify the expression being optimized:

$$
\begin{aligned}
\mathbf{D}^* &= \arg\min_{\mathbf{D}} \left\| \mathbf{X} - \mathbf{X}\mathbf{D}\mathbf{D}^\top \right\|_F^2 \\
&= \arg\min_{\mathbf{D}} \operatorname{Tr}\left( \left( \mathbf{X} - \mathbf{X}\mathbf{D}\mathbf{D}^\top \right)^\top \left( \mathbf{X} - \mathbf{X}\mathbf{D}\mathbf{D}^\top \right) \right) \\
&= \arg\min_{\mathbf{D}} \operatorname{Tr}\left( \mathbf{X}^\top\mathbf{X} \right) - 2\operatorname{Tr}\left( \mathbf{X}^\top\mathbf{X}\mathbf{D}\mathbf{D}^\top \right) + \operatorname{Tr}\left( \mathbf{D}\mathbf{D}^\top\mathbf{X}^\top\mathbf{X}\mathbf{D}\mathbf{D}^\top \right) \\
&= \arg\min_{\mathbf{D}} -2\operatorname{Tr}\left( \mathbf{X}^\top\mathbf{X}\mathbf{D}\mathbf{D}^\top \right) + \operatorname{Tr}\left( \mathbf{X}^\top\mathbf{X}\mathbf{D}\mathbf{D}^\top\mathbf{D}\mathbf{D}^\top \right) && \text{(cycle rule; drop the term independent of } \mathbf{D} \text{)} \\
&= \arg\min_{\mathbf{D}} -\operatorname{Tr}\left( \mathbf{X}^\top\mathbf{X}\mathbf{D}\mathbf{D}^\top \right) && \left( \mathbf{D}^\top\mathbf{D} = \mathbf{I}_l \right) \\
&= \arg\max_{\mathbf{D}} \operatorname{Tr}\left( \mathbf{D}^\top\mathbf{X}^\top\mathbf{X}\mathbf{D} \right) \quad \text{subject to } \mathbf{D}^\top\mathbf{D} = \mathbf{I}_l.
\end{aligned}
$$
Let $\mathbf{S} = \mathbf{X}^\top\mathbf{X}$ be the symmetric matrix we obtain. We use $\mathbf{A}_{i,:}$ and $\mathbf{A}_{:,j}$ to denote the $i$-th row and the $j$-th column of a matrix $\mathbf{A}$, respectively. We expand the $i$-th diagonal entry of $\mathbf{D}^\top\mathbf{S}\mathbf{D}$ and get:

$$\left( \mathbf{D}^\top\mathbf{S}\mathbf{D} \right)_{i,i} = \left( \mathbf{D}^\top \right)_{i,:} \mathbf{S}\, \mathbf{D}_{:,i} = \left( \mathbf{D}_{:,i} \right)^\top \mathbf{S}\, \mathbf{D}_{:,i}.$$

Thus the expression can be expanded as:

$$\operatorname{Tr}\left( \mathbf{D}^\top\mathbf{S}\mathbf{D} \right) = \sum_{i=1}^{l} \left( \mathbf{D}^\top\mathbf{S}\mathbf{D} \right)_{i,i} = \sum_{i=1}^{l} \left( \mathbf{D}_{:,i} \right)^\top \mathbf{S}\, \mathbf{D}_{:,i}.$$
Hence, $\operatorname{Tr}\left( \mathbf{D}^\top\mathbf{S}\mathbf{D} \right)$ obtains its maximum value when each term is maximized. Let $\mathbf{d}_i = \mathbf{D}_{:,i}$ be the $i$-th column of $\mathbf{D}$; then each term is $\mathbf{d}_i^\top\mathbf{S}\mathbf{d}_i$ with the constraint $\mathbf{d}_i^\top\mathbf{d}_i = 1$. According to the Rayleigh–Ritz theorem we introduced in the Quadratic Optimization blog, the solution to the following quadratic optimization problem

$$\max_{\mathbf{d}}\ \mathbf{d}^\top\mathbf{S}\mathbf{d} \quad \text{subject to } \mathbf{d}^\top\mathbf{d} = 1$$

is the eigenvector of the symmetric matrix $\mathbf{S}$ whose corresponding eigenvalue is the maximum eigenvalue. Hence, following the corollary of the Rayleigh–Ritz theorem, to maximize the value of $\operatorname{Tr}\left( \mathbf{D}^\top\mathbf{S}\mathbf{D} \right)$ given $\mathbf{D}^\top\mathbf{D} = \mathbf{I}_l$, the matrix $\mathbf{D}$ is given by the $l$ eigenvectors corresponding to the $l$ largest eigenvalues of the symmetric matrix $\mathbf{S} = \mathbf{X}^\top\mathbf{X}$.
Finally, $\mathbf{C} = \mathbf{X}\mathbf{D}$ is the compressed matrix of the raw data matrix $\mathbf{X}$.
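Putting the solution together, here is a hedged NumPy sketch that forms $\mathbf{S} = \mathbf{X}^\top\mathbf{X}$, takes the eigenvectors of the $l$ largest eigenvalues as the columns of $\mathbf{D}$, and compresses the data; the sizes and variable names are my own:

```python
import numpy as np

m, n, l = 200, 30, 5
X = np.random.randn(m, n)                        # raw data, one sample per row

S = X.T @ X                                      # symmetric matrix S = X^T X (n x n)
eigvals, eigvecs = np.linalg.eigh(S)             # eigenpairs of a symmetric matrix (ascending eigenvalues)
D = eigvecs[:, np.argsort(eigvals)[::-1][:l]]    # columns = eigenvectors of the l largest eigenvalues

C = X @ D                                        # compressed data C = X D (m x l)
X_hat = C @ D.T                                  # reconstruction X D D^T (m x n)
print(np.linalg.norm(X - X_hat, 'fro'))          # reconstruction error minimized by this choice of D
```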
Reducing Computational Complexity
We now consider the relationship between the eigenpairs of $\mathbf{X}^\top\mathbf{X} \in \mathbb{R}^{n \times n}$ and $\mathbf{X}\mathbf{X}^\top \in \mathbb{R}^{m \times m}$, where $\mathbf{X} \in \mathbb{R}^{m \times n}$. When $n \gg m$, which is very common in applications like face recognition, it requires too much memory to store the matrix $\mathbf{X}^\top\mathbf{X}$ and computing its eigenpairs becomes a horrendous task. Fortunately, we can find the eigenpairs of $\mathbf{X}^\top\mathbf{X}$ by computing the eigenpairs of the much smaller matrix $\mathbf{X}\mathbf{X}^\top$.
Suppose that $\mathbf{v}$ is an eigenvector of the matrix $\mathbf{X}\mathbf{X}^\top$ with corresponding eigenvalue $\lambda$; then, left-multiplying both sides by $\mathbf{X}^\top$, we have:

$$\mathbf{X}\mathbf{X}^\top\mathbf{v} = \lambda\mathbf{v} \quad\Longrightarrow\quad \mathbf{X}^\top\mathbf{X} \left( \mathbf{X}^\top\mathbf{v} \right) = \lambda \left( \mathbf{X}^\top\mathbf{v} \right).$$
We can see that when $\mathbf{X}^\top\mathbf{v} \neq \mathbf{0}$, $\mathbf{X}^\top\mathbf{v}$ is an eigenvector of $\mathbf{X}^\top\mathbf{X}$ with corresponding eigenvalue $\lambda$. However, when $\mathbf{X}^\top\mathbf{v} = \mathbf{0}$, it cannot be an eigenvector, and we have $\lambda\mathbf{v} = \mathbf{X}\mathbf{X}^\top\mathbf{v} = \mathbf{0}$, implying $\lambda = 0$ since $\mathbf{v} \neq \mathbf{0}$ is an eigenvector of $\mathbf{X}\mathbf{X}^\top$. Hence, we conclude that for a matrix $\mathbf{X} \in \mathbb{R}^{m \times n}$, the eigenvalues of $\mathbf{X}^\top\mathbf{X}$ are the eigenvalues of $\mathbf{X}\mathbf{X}^\top$ plus $n - m$ additional 0's when $n > m$; and the eigenvalues of $\mathbf{X}\mathbf{X}^\top$ are the eigenvalues of $\mathbf{X}^\top\mathbf{X}$ plus $m - n$ additional 0's when $m > n$.
Finally, when we meet a data matrix $\mathbf{X} \in \mathbb{R}^{m \times n}$ where $n \gg m$, we can first compute the eigenpairs of $\mathbf{X}\mathbf{X}^\top$ and multiply each eigenvector $\mathbf{v}$ by $\mathbf{X}^\top$ (i.e., compute $\mathbf{X}^\top\mathbf{v}$) to obtain the eigenvectors of $\mathbf{X}^\top\mathbf{X}$. In PCA, we only need the eigenvectors corresponding to the $l$ largest eigenvalues of $\mathbf{X}\mathbf{X}^\top$ for this multiplication, which reduces the computational cost to $O(m^3 + lmn)$, much less than the $O(n^3)$ of computing the eigenpairs of $\mathbf{X}^\top\mathbf{X}$ directly. (Remember that $l$ is a constant in PCA and it takes $O(k^3)$ time to compute the eigenpairs of a $k \times k$ symmetric matrix.)
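A NumPy sketch of this trick follows (illustrative sizes); note that I renormalize each mapped vector $\mathbf{X}^\top\mathbf{v}$ to unit norm so that it can serve as a column of $\mathbf{D}$, a step implied by the unit-norm constraint on $\mathbf{D}$:

```python
import numpy as np

m, n, l = 50, 5000, 5                     # many more features than samples (n >> m)
X = np.random.randn(m, n)

# Eigenpairs of the small m x m matrix X X^T instead of the huge n x n matrix X^T X.
G = X @ X.T                               # m x m
eigvals, V = np.linalg.eigh(G)            # ascending eigenvalues
top = np.argsort(eigvals)[::-1][:l]       # indices of the l largest eigenvalues

# Map each eigenvector v of X X^T to an eigenvector X^T v of X^T X, then normalize.
U = X.T @ V[:, top]                       # n x l
D = U / np.linalg.norm(U, axis=0)         # unit-norm columns -> the PCA matrix D

# Sanity check: each column of D is (numerically) an eigenvector of X^T X.
S = X.T @ X
print(np.allclose(S @ D[:, 0], eigvals[top][0] * D[:, 0]))   # True (up to floating-point error)
```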
PCA Algorithm
Based on the derivations in the previous sections, we can now state the PCA algorithm (a NumPy sketch follows the list):
- Given a set of data samples $\{\mathbf{x}^{(1)}, \dots, \mathbf{x}^{(m)}\}$, create the raw data matrix $\mathbf{X} \in \mathbb{R}^{m \times n}$ whose $i$-th row is $(\mathbf{x}^{(i)})^\top$.
- Compute all eigenpairs of the symmetric matrix $\mathbf{X}^\top\mathbf{X}$. When the number of columns $n$ is much larger than the number of rows $m$, compute the eigenpairs of $\mathbf{X}\mathbf{X}^\top$ instead and use them to find the $l$ largest eigenpairs of $\mathbf{X}^\top\mathbf{X}$.
- Use the unit-norm eigenvectors corresponding to the $l$ largest eigenvalues as columns to form the matrix $\mathbf{D} \in \mathbb{R}^{n \times l}$.
- Apply the matrix multiplication $\mathbf{C} = \mathbf{X}\mathbf{D}$ to compress the raw data.
- Use $\mathbf{C}\mathbf{D}^\top = \mathbf{X}\mathbf{D}\mathbf{D}^\top$ to approximately reconstruct the raw data when necessary.
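The whole procedure fits in a few lines of NumPy; a sketch under the assumptions above (function and variable names are my own, and in practice the data is often mean-centered before this step):

```python
import numpy as np

def pca_fit(X, l):
    """Return the n x l matrix D whose columns are the eigenvectors of X^T X
    with the l largest eigenvalues, using the X X^T trick when n >> m."""
    m, n = X.shape
    if n > m:
        eigvals, V = np.linalg.eigh(X @ X.T)           # small m x m problem
        top = np.argsort(eigvals)[::-1][:l]
        U = X.T @ V[:, top]                            # map to eigenvectors of X^T X
        D = U / np.linalg.norm(U, axis=0)              # renormalize columns to unit norm
    else:
        eigvals, U = np.linalg.eigh(X.T @ X)           # direct n x n problem
        D = U[:, np.argsort(eigvals)[::-1][:l]]
    return D

def pca_compress(X, D):
    return X @ D                                       # C = X D

def pca_reconstruct(C, D):
    return C @ D.T                                     # X_hat = C D^T = X D D^T

# Usage (illustrative sizes); centering X beforehand is common in practice.
X = np.random.randn(100, 40)
D = pca_fit(X, l=5)
C = pca_compress(X, D)
X_hat = pca_reconstruct(C, D)
print(C.shape, X_hat.shape)   # (100, 5) (100, 40)
```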
Conclusion
In this blog, we defined the problem solved by PCA and derived the PCA algorithm by hand. PCA is at the core of many applications such as data compression and eigenfaces for face recognition. Moreover, from a statistical point of view, PCA projects the raw data points onto the directions along which the projections have maximum variance. We'll explain PCA through concrete experiments and in-depth analysis in later blogs.