Improving Pedestrian Detection in Low-Visibility Conditions: Fusing Visual and Infrared Data with Deep Learning
Keywords:
Autonomous vehicles, pedestrian detection, infrared vision, millimeter-wave radar, YoloV5, deep learning, Squeeze layer, Extended Kalman Filter, multi-modal data

Abstract
With the increasing demand for autonomous vehicles and higher safety standards, developing accurate pedestrian detection systems that perform well in all environmental conditions, especially at night, has become critical. Traditional sensor-based systems, such as LIDAR and radar, are often inadequate in low-visibility environments, prompting the need for AI-based solutions. This research proposes a pedestrian detection system that integrates infrared vision and millimeter-wave (MMW) radar data with an enhanced deep learning model. By utilizing an improved version of YoloV5 equipped with a Squeeze layer for attention, the system effectively extracts and classifies image features. Additionally, an Extended Kalman Filter is employed for accurate pedestrian localization. Fusing these modalities into the enhanced YoloV5 model significantly improves detection accuracy and robustness, making it more effective for real-time pedestrian detection under challenging conditions.
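To illustrate the localization component: the abstract names an Extended Kalman Filter but gives no model details, so the following is only a minimal sketch of one EKF predict/update cycle under assumed conditions — a constant-velocity pedestrian state [px, py, vx, vy] and an MMW radar that reports range and bearing. The motion model, measurement model, and all noise parameters here are illustrative assumptions, not details from the paper.

```python
import numpy as np

def ekf_step(x, P, z, dt, q=0.1, r_range=0.5, r_bearing=0.01):
    """One EKF predict/update cycle for pedestrian tracking (sketch).

    Assumed state x = [px, py, vx, vy] (constant-velocity model) and
    assumed measurement z = [range, bearing], as an MMW radar might report.
    """
    # --- Predict: linear constant-velocity motion model ---
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1,  0],
                  [0, 0, 0,  1]], dtype=float)
    Q = q * np.eye(4)                     # simplified process-noise covariance
    x = F @ x
    P = F @ P @ F.T + Q

    # --- Update: nonlinear range/bearing measurement, linearized via Jacobian ---
    px, py = x[0], x[1]
    rng = np.hypot(px, py)
    h = np.array([rng, np.arctan2(py, px)])   # predicted measurement h(x)
    H = np.array([[ px / rng,     py / rng,    0, 0],     # Jacobian of h
                  [-py / rng**2,  px / rng**2, 0, 0]])
    R = np.diag([r_range**2, r_bearing**2])   # measurement-noise covariance
    y = z - h                                  # innovation
    y[1] = (y[1] + np.pi) % (2 * np.pi) - np.pi  # wrap bearing residual to [-pi, pi]
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)             # Kalman gain
    x = x + K @ y
    P = (np.eye(4) - K @ H) @ P
    return x, P
```

In a fused pipeline of the kind the abstract describes, each detection from the image branch would trigger one such cycle, with the radar measurement refining the pedestrian's estimated position between camera frames.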