Comparing Ensemble Learning Techniques on Data Transmission Reduction for IoT Systems

Publisher:
Springer Nature
Publication Type:
Conference Proceeding
Citation:
Lecture Notes in Networks and Systems, vol. 700 (LNNS), 2023, pp. 72-85
Issue Date:
2023-01-01
File:
978-3-031-33743-7_6.pdf (Published version, Adobe PDF, 1.16 MB)
Abstract:
Internet of Things (IoT) systems include a massive number of connected devices, and the communication among these devices generates a huge volume of traffic. It is therefore crucial for an IoT system to reduce communication volume by minimizing the amount of data transmitted. Several approaches exist for data transmission reduction, such as data compression and dual prediction, with dual prediction schemes receiving more attention than data compression. The mainstream techniques for dual prediction schemes in the literature fall into two main groups: filter-based and deep learning-based methods. Filter-based methods, such as the 1-D Kalman filter, are lightweight in terms of running time and the model's memory requirements. Deep learning-based methods, on the other hand, require more memory and training time, but they yield more accurate predictive models than filter-based methods in dual prediction schemes. Very limited effort has been made to utilize machine learning methods in dual prediction schemes as a compromise between these two mainstream techniques. In this work, we extend one such effort, which uses boosting ensemble learning as the machine learning predictive method in a proposed dual prediction scheme. We explore the performance gap between the three main approaches to ensemble learning, namely boosting, stacking, and bagging. The three proposed ensemble learning models are evaluated on a real dataset and compared against state-of-the-art methods. The results show that, among the ensemble learning models, the boosting and bagging models outperform the stacking model, while all three outperform the state-of-the-art methods of comparison. For instance, the average numbers of mispredictions across all experiments were 580, 551, and 1,106 for boosting, bagging, and stacking, respectively.
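In a dual prediction scheme, the sensor and the sink run identical predictive models; a reading is transmitted only when the sensor's local prediction misses the true value by more than an application-defined tolerance, and each such transmission counts as a misprediction. The sketch below illustrates this idea with a boosting regressor from scikit-learn. It is a minimal illustration, not the paper's implementation: the window size W, the tolerance E_MAX, and the synthetic sensor signal are assumptions made for the example.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

W = 10       # past readings used as model input (assumed value)
E_MAX = 0.5  # application-defined error tolerance (assumed value)

rng = np.random.default_rng(0)
# Toy sensor signal standing in for a real dataset.
readings = np.sin(np.linspace(0.0, 20.0, 500)) + rng.normal(0.0, 0.1, 500)

# Train identical models from an initial history shared by sensor and sink.
history = readings[:100]
X = np.array([history[i:i + W] for i in range(len(history) - W)])
y = history[W:]
model = GradientBoostingRegressor(random_state=0).fit(X, y)

window = list(history[-W:])
mispredictions = 0
for value in readings[100:]:
    predicted = model.predict(np.array([window]))[0]
    if abs(value - predicted) > E_MAX:
        # Misprediction: the sensor transmits the true reading,
        # and both sides append it to their input windows.
        mispredictions += 1
        window = window[1:] + [value]
    else:
        # Within tolerance: the sink reconstructs the reading from its own
        # identical prediction, so no transmission is needed.
        window = window[1:] + [predicted]

print(f"{mispredictions} transmissions out of {len(readings) - 100} readings")
```

Swapping GradientBoostingRegressor for sklearn's BaggingRegressor, or for StackingRegressor over a few base estimators, would correspond to the three-way comparison described in the abstract; the transmission count then serves as the misprediction metric reported above.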