Our paper on l1-norm low-rank matrix approximation has been accepted to IEEE Transactions on Neural Networks and Learning Systems

[2014/03/17]

The following paper has been accepted to IEEE Transactions on Neural Networks and Learning Systems:

  • Efficient l1-Norm-Based Low-Rank Matrix Approximations for Large-Scale Problems Using Alternating Rectified Gradient Method by Eunwoo Kim, Minsik Lee, Chong-Ho Choi, Nojun Kwak, and Songhwai Oh

  • Abstract: Low-rank matrix approximation plays an important role in computer vision and image processing. Most conventional low-rank matrix approximation methods are based on the l2-norm (Frobenius norm), with principal component analysis (PCA) being the most popular among them. However, this can give a poor approximation for data contaminated by outliers (including missing data), because the l2-norm exaggerates the negative effect of outliers. Recently, to overcome this problem, various methods based on the l1-norm, such as robust principal component analysis methods, have been proposed for low-rank matrix approximation. Despite their robustness, these methods require heavy computation and substantial memory for high-dimensional data, which is impractical for real-world problems. In this paper, we propose two efficient l1-norm-based low-rank factorization methods that find proper projection and coefficient matrices using the alternating rectified gradient method. The proposed methods are applied to a number of low-rank matrix approximation problems to demonstrate their efficiency and robustness. The experimental results show that, unlike other state-of-the-art methods, the proposed methods are efficient in both execution time and reconstruction performance.
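
To make the problem setting concrete, below is a minimal sketch of l1-norm low-rank factorization: it approximately minimizes ||X - PC||_1 over a projection matrix P and a coefficient matrix C by alternating iteratively reweighted least squares (IRLS). This is a generic textbook surrogate, not the paper's alternating rectified gradient method, and the function name, parameters, and demo data are illustrative assumptions.

```python
import numpy as np

def l1_low_rank(X, rank, n_iters=50, eps=1e-6, seed=0):
    """Approximate min_{P,C} ||X - P @ C||_1 by alternating IRLS.

    Note: an illustrative sketch, not the alternating rectified
    gradient method proposed in the paper.
    """
    rng = np.random.default_rng(seed)
    d, n = X.shape
    P = rng.standard_normal((d, rank))
    C = rng.standard_normal((rank, n))
    for _ in range(n_iters):
        # IRLS weights: the l1 loss is approximated by a weighted
        # l2 loss with weights 1 / max(|residual|, eps).
        W = 1.0 / np.maximum(np.abs(X - P @ C), eps)
        # Update each column of C by weighted least squares.
        for j in range(n):
            A = P.T * W[:, j]  # P^T diag(w_j), shape (rank, d)
            C[:, j] = np.linalg.solve(A @ P + eps * np.eye(rank),
                                      A @ X[:, j])
        W = 1.0 / np.maximum(np.abs(X - P @ C), eps)
        # Update each row of P symmetrically.
        for i in range(d):
            B = C * W[i, :]  # C diag(w_i), shape (rank, n)
            P[i, :] = np.linalg.solve(B @ C.T + eps * np.eye(rank),
                                      B @ X[i, :])
    return P, C

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.standard_normal((50, 3)) @ rng.standard_normal((3, 40))  # rank-3 data
    X[rng.random(X.shape) < 0.05] += 10.0  # sparse gross outliers
    P, C = l1_low_rank(X, rank=3)
    print("mean absolute residual:", np.abs(X - P @ C).mean())
```

In this toy setup, the reweighting step down-weights entries with large residuals, so the sparse gross corruptions do not dominate the fit the way they would under a plain l2 (truncated SVD) approximation, which is exactly the motivation for the l1 objective described in the abstract.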