A flexible and effective linearization method for subspace learning

Publisher:
Springer
Publication Type:
Chapter
Citation:
Graph Embedding for Pattern Analysis, 2013, pp. 177 - 203
Issue Date:
2013-01-01
© Springer Science+Business Media New York 2013. In the past decades, a large number of subspace learning or dimension reduction methods [2,16,20,32,34,37,44] have been proposed. Principal component analysis (PCA) [32] pursues the directions of maximum variance for optimal reconstruction. Linear discriminant analysis (LDA) [2], as a supervised algorithm, aims to maximize the inter-class scatter and at the same time minimize the intra-class scatter. Owing to its use of label information, LDA is experimentally reported to outperform PCA for face recognition when sufficient labeled face images are provided [2].
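The two baselines above can be sketched in a few lines of numpy. This is a minimal illustration, not the chapter's method: PCA projects centered data onto the top eigenvectors of the sample covariance (maximum-variance directions), while LDA solves the generalized eigenproblem on the between-class and within-class scatter matrices. The toy two-class data and all variable names are hypothetical.

```python
import numpy as np

# Hypothetical toy data: two Gaussian classes in 3-D (illustrative only).
rng = np.random.default_rng(0)
X0 = rng.normal(loc=0.0, scale=1.0, size=(50, 3))
X1 = rng.normal(loc=3.0, scale=1.0, size=(50, 3))
X = np.vstack([X0, X1])
y = np.array([0] * 50 + [1] * 50)

# --- PCA: directions of maximum variance ---
Xc = X - X.mean(axis=0)                 # center the data
cov = Xc.T @ Xc / (len(X) - 1)          # sample covariance matrix
evals, evecs = np.linalg.eigh(cov)      # eigenvalues in ascending order
W_pca = evecs[:, ::-1][:, :2]           # top-2 principal directions
Z_pca = Xc @ W_pca                      # 2-D PCA embedding

# --- LDA: maximize between-class / minimize within-class scatter ---
mean_all = X.mean(axis=0)
Sw = np.zeros((3, 3))                   # within-class scatter
Sb = np.zeros((3, 3))                   # between-class scatter
for c in (0, 1):
    Xcls = X[y == c]
    mu = Xcls.mean(axis=0)
    Sw += (Xcls - mu).T @ (Xcls - mu)
    d = (mu - mean_all)[:, None]
    Sb += len(Xcls) * (d @ d.T)

# Solve Sb w = lambda Sw w via the equivalent problem on Sw^{-1} Sb.
evals_lda, evecs_lda = np.linalg.eig(np.linalg.solve(Sw, Sb))
w_lda = np.real(evecs_lda[:, np.argmax(np.real(evals_lda))])
Z_lda = (X - mean_all) @ w_lda          # 1-D discriminant embedding
```

Because LDA uses the labels `y` to shape the scatter matrices, its single discriminant direction separates the two classes, whereas PCA's directions depend only on overall variance — the contrast the abstract draws.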