A Survey on Concept Factorization: From Shallow to Deep Representation Learning

Publisher: Elsevier
Publication Type: Journal Article
Citation: Information Processing and Management, 2021, 58(3)
Issue Date: 2021-05-01
Abstract: The quality of the features obtained by representation learning determines the performance of a learning algorithm and of subsequent application tasks (e.g., high-dimensional data clustering). As an effective paradigm for learning representations, Concept Factorization (CF) has attracted a great deal of interest in machine learning and data mining for over a decade. Many effective CF-based methods have been proposed from different perspectives and with different properties, yet it remains difficult to grasp their essential connections and to identify the underlying explanatory factors from current studies. In this paper, we therefore survey recent advances in CF methodologies and potential benchmarks by categorizing and summarizing current methods. Specifically, we first review the root CF method, and then trace the advancement of CF-based representation learning from shallow to deep/multilayer cases. We also introduce the potential application areas of CF-based methods. Finally, we point out some future directions for studying CF-based representation learning. Overall, this survey provides an insightful overview of both the theoretical basis and the current developments in the field of CF, which can help interested researchers understand current trends and find the most appropriate CF techniques for particular applications.
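For orientation (this sketch is not drawn from the article itself), the root CF formulation referred to in the abstract can be summarized as follows: given a nonnegative data matrix, CF approximates the data by concept vectors that are themselves constrained to be linear combinations of the data points,

\[
\min_{\mathbf{W} \ge 0,\; \mathbf{V} \ge 0} \;\; \left\| \mathbf{X} - \mathbf{X}\mathbf{W}\mathbf{V}^{\top} \right\|_F^2 ,
\]

where \(\mathbf{X} \in \mathbb{R}^{m \times n}\) collects the \(n\) samples as columns, the columns of \(\mathbf{X}\mathbf{W} \in \mathbb{R}^{m \times k}\) serve as the learned concept vectors, and \(\mathbf{V} \in \mathbb{R}^{n \times k}\) holds each sample's representation coefficients. Deep/multilayer CF variants discussed in the survey build on this shallow objective by stacking or hierarchically refining such factorizations.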