LSV-based tail inequalities for sums of random matrices

Publication Type:
Journal Article
Citation:
Neural Computation, 2017, 29 (1), pp. 247 - 262
Issue Date:
2017-01-01
Abstract:
© 2016 Massachusetts Institute of Technology. The techniques of random matrices have played an important role in many machine learning models. In this letter, we present a new method for deriving tail inequalities for sums of random matrices. In contrast to other work (Ahlswede & Winter, 2002; Tropp, 2012; Hsu, Kakade, & Zhang, 2012), our tail results are based on the largest singular value (LSV) and are independent of the matrix dimension. Since the LSV operation and the expectation do not commute, we introduce a diagonalization method that converts the LSV operation into the trace operation on an infinite-dimensional diagonal matrix. In this way, we obtain another version of the Laplace-transform bounds and then achieve the LSV-based tail inequalities for sums of random matrices.
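The quantity the abstract refers to can be illustrated numerically. The sketch below (not the paper's method, just an empirical illustration) estimates the tail probability P(σ_max(S_n) ≥ t) for a sum S_n of i.i.d. zero-mean Gaussian matrices, where σ_max is the largest singular value; the dimensions, sample counts, and threshold `t` are arbitrary choices for the demonstration.

```python
import numpy as np

# Empirical sketch: estimate P(sigma_max(S_n) >= t), where
# S_n = X_1 + ... + X_n is a sum of i.i.d. zero-mean Gaussian
# matrices and sigma_max is the largest singular value (LSV),
# the quantity controlled by dimension-free tail inequalities.
# All constants below are illustrative, not taken from the paper.
rng = np.random.default_rng(0)
d, n, trials, t = 10, 50, 200, 60.0

exceed = 0
for _ in range(trials):
    # Sum of n independent d x d standard Gaussian matrices.
    S = sum(rng.standard_normal((d, d)) for _ in range(n))
    # ord=2 gives the spectral norm, i.e. the largest singular value.
    lsv = np.linalg.norm(S, 2)
    if lsv >= t:
        exceed += 1

tail = exceed / trials
print(f"empirical P(LSV >= {t}) ~ {tail:.3f}")
```

A dimension-free inequality of the kind the letter develops would bound this tail by an expression that does not grow with `d`, unlike the classical Ahlswede–Winter-style bounds, which carry a factor of the dimension.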