AB - © Springer Nature Switzerland AG 2019. One of the issues with training a general fuzzy min-max neural network (GFMM) on very high-dimensional data is the long training time, even when the number of samples is relatively low. This problem is common to many prototype-based methods that require frequently repeated distance or similarity calculations. This paper proposes a method for accelerating the GFMM learning algorithms by first reformulating them in a format that allows parallel execution and then leveraging the computational power of the graphics processing unit (GPU). The original GFMM implementation is reformulated in terms of matrix computations executed on the GPU for very high-dimensional datasets. Empirical results on two very high-dimensional datasets indicated that training and testing on an Nvidia Quadro P5000 GPU were 10 to 35 times faster than serial execution on a Xeon CPU, while retaining the same classification accuracy.
AU - Khuat, TT
AU - Gabrys, B
DA - 2019/01/01
DO - 10.1007/978-3-030-36708-4_48
EP - 595
JO - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
PY - 2019/01/01
SP - 583
TI - Accelerated training algorithms of general fuzzy min-max neural network using GPU for very high dimensional data
VL - 11953 LNCS
Y1 - 2019/01/01
Y2 - 2026/05/02
ER -