Edge Computing-Enabled Deep Learning for Real-time Video Optimization in IIoT
- Publisher: Institute of Electrical and Electronics Engineers (IEEE)
- Publication Type: Journal Article
- Citation: IEEE Transactions on Industrial Informatics, 2021, 17, (4), pp. 2842-2851
- Issue Date: 2021-04-01
Closed Access
Filename | Description | Size
---|---|---
Edge_Computing-Enabled_Deep_Learning_for_Real-time_Video_Optimization_in_IIoT.pdf | Published version | 1.61 MB
This item is closed access and not available.
Real-time multimedia applications have gained immense popularity in the industrial Internet of Things (IIoT) paradigm. Because of the complexity of industrial environments, video streaming transmission is often unstable. During periods of low bandwidth, existing optimization methods typically reduce the resolution of randomly chosen frames to avoid video interruption. If key frames carrying important content happen to be transmitted at low resolution, the effectiveness of industrial supervision is greatly reduced. To address this challenge, a real-time video streaming optimization method that reduces the number of frames transmitted in the IIoT environment is proposed. Concretely, a deep learning-based object detection algorithm is employed to select the key frames effectively. The key frames are transmitted at their original resolution along with the audio data. Because some non-key frames are selectively discarded, the network can transmit smoothly with lower bandwidth requirements. Moreover, edge servers run the object detection algorithm and adjust the video transmission flexibly. Extensive experiments are conducted to validate the effectiveness and dependability of the method.
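The key-frame selection described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the `Frame` record, the `toy_detector` stub, and the single-object `threshold` are all assumptions standing in for the paper's deep learning detector running on the edge server.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Frame:
    """One video frame as seen by the edge server (hypothetical schema)."""
    index: int
    payload: bytes = b""

def select_key_frames(frames: List[Frame],
                      detect: Callable[[Frame], int],
                      threshold: int = 1) -> List[Frame]:
    """Keep a frame at original resolution only if the detector finds
    at least `threshold` objects of interest; non-key frames are
    discarded to lower the bandwidth requirement."""
    return [f for f in frames if detect(f) >= threshold]

# Stand-in for the deep learning object detector: pretend every
# even-indexed frame contains one object of interest.
def toy_detector(frame: Frame) -> int:
    return 1 if frame.index % 2 == 0 else 0

frames = [Frame(i) for i in range(6)]
kept = select_key_frames(frames, toy_detector)
print([f.index for f in kept])  # [0, 2, 4]
```

In the paper's setting, `detect` would be the edge-hosted detection model, and the kept frames would be forwarded at their original resolution together with the audio stream.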