This post is not going to be about compressive sensing or sparse representation. I have been trying to find a field where I can use either of these approaches. I read that sparse representation can be used in face recognition; I am still not sure, and I am working on its validity. In the meantime, I tried to implement a basic face recognition system.

I read Eigenfaces for Recognition by Matthew Turk and Alex Pentland. It describes a very basic face recognition approach.

There are two modules. The first module trains the system: it takes a set of faces and generates the features (called eigenfaces) and weight vectors (or projections). The second module does face matching: it uses these features to calculate the weight vector for a new image and decides the face class of this new face.


Approach

  1. Principal Component Analysis (PCA) is used in this approach to reduce the dimensionality of the data. Here is a good tutorial on PCA.
  2. Suppose we have \(M\) images of size \( N \times N \). Each image is reshaped into an \( N^2 \times 1 \) vector, giving \( \Gamma_1, \Gamma_2 \dots \Gamma_M\). Our aim is to summarize them by projecting onto an \(M'\)-dimensional subspace, \( M' \le M \).
  3. PCA gives an orthonormal basis \( \mathbf{u}_k \) for this subspace. These basis vectors are called eigenfaces.
  4. The original images are transformed into their eigenface components (weights) using $$ \omega_k =  \mathbf{u}_k^T( \Gamma - \Psi) $$ where $$ \Psi = \frac{1}{M} \sum_{n=1}^{M} \; \Gamma_n$$ The mean face \( \Psi \) is subtracted to make the data zero mean. More details are given in the tutorial mentioned above.
  5. A face class is defined by the weight vector \( \Omega = [\omega_1, \omega_2 \dots \omega_{M'}] \) for each individual. We take the average of the weight vectors of the different images of the same person to define that person's face class. A code sketch of steps 2 to 5 is given after this list.
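As a rough illustration, here is a minimal NumPy sketch of steps 2 to 5. It uses the trick from the Turk and Pentland paper of diagonalizing the small \( M \times M \) matrix instead of the huge \( N^2 \times N^2 \) covariance matrix; the function name and array shapes are my own assumptions, not the code linked at the end of this post.

```python
import numpy as np

def train_eigenfaces(images, num_components):
    """Compute eigenfaces and training weights.

    images: array of shape (M, N, N) containing M grayscale face images.
    num_components: M', the number of eigenfaces to keep (M' <= M).
    """
    M = images.shape[0]
    # Reshape each N x N image into an N^2-dimensional vector Gamma_i
    gammas = images.reshape(M, -1).astype(np.float64)      # shape (M, N^2)

    # Mean face Psi and zero-mean data Phi_i = Gamma_i - Psi
    psi = gammas.mean(axis=0)                               # shape (N^2,)
    phis = gammas - psi                                     # shape (M, N^2)

    # Turk & Pentland trick: eigenvectors of the small M x M matrix A^T A
    # give the eigenvectors of the huge N^2 x N^2 covariance matrix A A^T.
    L = phis @ phis.T                                       # shape (M, M)
    eigvals, eigvecs = np.linalg.eigh(L)                    # ascending order

    # Keep the M' eigenvectors with the largest eigenvalues
    order = np.argsort(eigvals)[::-1][:num_components]
    eigenfaces = phis.T @ eigvecs[:, order]                 # shape (N^2, M')
    eigenfaces /= np.linalg.norm(eigenfaces, axis=0)        # unit-length columns

    # Weights omega_k = u_k^T (Gamma - Psi) for every training image
    weights = phis @ eigenfaces                             # shape (M, M')
    return psi, eigenfaces, weights
```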


A few original faces

I used 400 images, 10 per person.


A few eigenfaces

There are actually 400 of them, since I used 400 images, but I kept only the 200 corresponding to the 200 largest eigenvalues, which is sufficient to represent the original images with a tolerable amount of error.
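How many eigenfaces are "sufficient" can be judged from the eigenvalue spectrum. Here is a small sketch of that check, assuming the eigenvalues of the \( M \times M \) matrix from the training sketch above; the 95% energy threshold is an arbitrary placeholder, not the criterion I actually used to pick 200.

```python
import numpy as np

def choose_num_eigenfaces(eigvals, energy=0.95):
    """Smallest number of eigenfaces whose eigenvalues capture
    the requested fraction of the total variance."""
    eigvals = np.sort(eigvals)[::-1]                    # largest first
    cumulative = np.cumsum(eigvals) / eigvals.sum()
    return int(np.searchsorted(cumulative, energy) + 1)
```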


Face Matching

It's really simple. We just calculate the weight vector for the new face and compare it with those of the training images to find the closest face class. An error threshold is also defined to ensure that the new image actually belongs to that face class, in addition to being closest to it.
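Here is a minimal sketch of that matching step, assuming psi, eigenfaces, and the per-person average weight vectors come from the training sketch above; the function name and the way the threshold is applied are my own choices.

```python
import numpy as np

def match_face(new_image, psi, eigenfaces, class_weights, threshold):
    """Project a new face onto the eigenfaces and find the closest face class.

    class_weights: dict mapping class label -> average weight vector Omega.
    Returns (best label, errors), or (None, errors) if the minimum error
    exceeds the threshold.
    """
    gamma = new_image.reshape(-1).astype(np.float64)
    omega = eigenfaces.T @ (gamma - psi)          # weight vector of the new face

    # Euclidean distance to every face class
    errors = {label: np.linalg.norm(omega - w) for label, w in class_weights.items()}
    best = min(errors, key=errors.get)
    return (best, errors) if errors[best] < threshold else (None, errors)
```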


Testing

I used the first 9 images of each individual to train the system, i.e. 360 images, and gave the 10th image as input to the face-matching module. See the plots of the error below (the minimum error corresponds to the closest face class).
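A sketch of this test setup, reusing the train_eigenfaces and match_face functions sketched above. The random 32 x 32 array is only a stand-in for loading the real 400-image database, and the plotting details are my own assumptions.

```python
import numpy as np
import matplotlib.pyplot as plt

# Stand-in data: 40 people x 10 images of size 32 x 32 (the real database
# would be loaded here instead of random pixels).
rng = np.random.default_rng(0)
faces = rng.random((40, 10, 32, 32))

train = faces[:, :9].reshape(-1, 32, 32)        # first 9 images per person: 360 images
labels = np.repeat(np.arange(40), 9)

psi, eigenfaces, weights = train_eigenfaces(train, num_components=200)

# Average weight vector per person defines that person's face class
class_weights = {c: weights[labels == c].mean(axis=0) for c in range(40)}

# Match the held-out 10th image of the 1st person and plot the per-class error
best, errors = match_face(faces[0, 9], psi, eigenfaces, class_weights, threshold=np.inf)
plt.bar(range(40), [errors[c] for c in range(40)])
plt.xlabel("face class")
plt.ylabel("error")
plt.title(f"closest face class: {best}")
plt.show()
```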


For the 10th image of the 1st face class

It can be seen that the first bar has the minimum height.


For the 10th image of the 8th face class

Here the 8th bar has the minimum height, corresponding to the minimum error.


My code can be found here. The code is not well structured, but it will give you a basic idea of how to write your own.

The face database is available here.