Accelerated training algorithms of general fuzzy min-max neural network using GPU for very high dimensional data
- Publication Type: Conference Proceeding
- Published in: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2019, 11953 LNCS, pp. 583-595
- Issue Date: 2019
- File: Khuat-Gabrys2019_Chapter_AcceleratedTrainingAlgorithmsO.pdf (published version, 302.8 kB)
This item is closed access and not available.
© Springer Nature Switzerland AG 2019. One issue with training a general fuzzy min-max neural network (GFMM) on very high-dimensional data is the long training time, even when the number of samples is relatively low. This is a common problem shared by many prototype-based methods that require frequently repeated distance or similarity calculations. This paper proposes a method for accelerating the GFMM learning algorithms by first reformulating and representing them in a format that allows their parallel execution, and then leveraging the computational power of the graphics processing unit (GPU). The original GFMM implementation is modified to use matrix computations executed on the GPU for very high-dimensional datasets. Empirical results on two very high-dimensional datasets indicate that the training and testing processes performed on an Nvidia Quadro P5000 GPU were 10 to 35 times faster than those running serially on a Xeon CPU, while retaining the same classification accuracy.
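The reformulation described in the abstract replaces per-hyperbox, per-dimension loops with matrix operations that a GPU can execute in parallel. As a minimal sketch of the idea (not the authors' implementation), the snippet below vectorises the standard GFMM membership function over all hyperboxes at once using NumPy broadcasting; swapping `numpy` for a GPU array library with the same API (e.g. CuPy) would move the same computation onto the GPU. The variable names and the sensitivity parameter `gamma` are illustrative choices, not taken from the paper.

```python
import numpy as np

def ramp(r, gamma=1.0):
    # Ramp threshold function f(r, gamma): 0 for r < 0,
    # r * gamma for 0 <= r * gamma <= 1, and 1 above that.
    return np.clip(r * gamma, 0.0, 1.0)

def membership(x, V, W, gamma=1.0):
    """Vectorised GFMM membership of a sample x to all hyperboxes.

    V, W : (n_boxes, n_dims) arrays of lower/upper hyperbox bounds.
    Broadcasting computes every (box, dimension) pair in one pass,
    which is the kind of reformulation that maps well onto a GPU.
    """
    # Per-dimension violation of the upper (x > W) or lower (x < V) bound.
    violations = np.maximum(ramp(x - W, gamma), ramp(V - x, gamma))
    # Membership is the worst-case (minimum) dimension score per box.
    return (1.0 - violations).min(axis=1)

# Two 2-D hyperboxes; the sample lies inside the first one.
V = np.array([[0.1, 0.1], [0.6, 0.6]])
W = np.array([[0.4, 0.4], [0.9, 0.9]])
x = np.array([0.2, 0.3])
print(membership(x, V, W))  # → [1.0, 0.6]
```

A point inside a hyperbox gets membership 1.0; outside, the score decays linearly with the bound violation at rate `gamma`. The same broadcasting pattern extends to the expansion and overlap tests of GFMM training.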