Parallel lasso for large-scale video concept detection

Publication Type:
Journal Article
IEEE Transactions on Multimedia, 2012, 14 (1), pp. 55–65
Existing video concept detectors are generally built upon kernel-based machine learning techniques such as support vector machines, regularized least squares, and logistic regression. However, building robust detectors raises scalability issues in the learning process, owing to the high-dimensional multi-modality visual features and the large number of keyframe examples. In this paper, we propose parallel lasso (Plasso), which introduces parallel distributed computation to significantly improve the scalability of lasso (ℓ1-regularized least squares). We apply parallel incomplete Cholesky factorization to approximate the covariance statistics in a preprocessing step, and the parallel primal-dual interior-point method with the Sherman-Morrison-Woodbury formula to optimize the model parameters. For a dataset with n samples in a d-dimensional space, Plasso reduces the complexity of lasso from O(d³) computational time and O(d²) storage space to O(h²d/m) and O(hd/m), respectively, where the system has m processors and the reduced dimension h is much smaller than the original dimension d.
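
The complexity reduction rests on the Sherman-Morrison-Woodbury formula: once the covariance statistics are replaced by a low-rank incomplete-Cholesky approximation G G^T, with G of size d × h, the d × d linear systems arising in each interior-point iteration can be solved through an h × h system instead. The sketch below is not the authors' implementation; the diagonal term D_diag, the helper name woodbury_solve, and the test sizes are illustrative assumptions used only to show the identity in isolation.

# Minimal sketch of the Woodbury trick, assuming a positive diagonal term D
# and a low-rank factor G (d x h, h << d) approximating the covariance.
import numpy as np

def woodbury_solve(D_diag, G, v):
    """Solve (diag(D_diag) + G @ G.T) x = v via the Woodbury identity.

    D_diag : (d,) positive diagonal entries
    G      : (d, h) low-rank factor, h << d
    v      : (d,) right-hand side
    """
    Dinv_v = v / D_diag                        # D^{-1} v,            O(d)
    Dinv_G = G / D_diag[:, None]               # D^{-1} G,            O(dh)
    small = np.eye(G.shape[1]) + G.T @ Dinv_G  # I_h + G^T D^{-1} G,  O(dh^2)
    correction = Dinv_G @ np.linalg.solve(small, G.T @ Dinv_v)
    return Dinv_v - correction

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d, h = 1000, 30                            # illustrative sizes, h << d
    G = rng.standard_normal((d, h))
    D_diag = rng.uniform(1.0, 2.0, size=d)
    v = rng.standard_normal(d)

    x_fast = woodbury_solve(D_diag, G, v)
    x_direct = np.linalg.solve(np.diag(D_diag) + G @ G.T, v)  # O(d^3) baseline
    print("max abs error:", np.max(np.abs(x_fast - x_direct)))

With h much smaller than d, the per-solve cost falls from O(d³) to roughly O(dh²), and splitting the work on G across m processors yields the per-processor bounds quoted in the abstract.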