In recent years, the number of vehicles equipped with in-vehicle cameras has been increasing. However, most footage from ordinary vehicles goes unused, apart from clips submitted as evidence after a traffic accident. If used properly, in-vehicle camera images could serve various purposes, such as estimating the extent of traffic congestion and helping to reduce traffic accidents.
To do so, it is essential to construct a fast and inexpensive system that collects, processes, and utilizes a huge volume of camera images. With recent advances in deep learning, objects in the vicinity of the ego vehicle, such as other vehicles and pedestrians, can now be recognized in video images. However, determining the distance between the vehicle and surrounding objects, the vehicle's speed, and their positional relationship requires camera characteristics such as the angle of view, mounting height, mounting angle, and orientation, and a system that requires these parameters to be entered for every vehicle is not realistic. Likewise, the lane recognition technology used in automated vehicles could provide such a positional reference, but ordinary vehicles cannot achieve comparable lane recognition because they carry fewer, lower-performance sensors and cameras.
Therefore, in this project, we propose a method that detects multiple lanes using only video captured by a monocular camera. The detected lanes serve as a reference for understanding the positional relationship between the ego vehicle and surrounding objects, and for estimating the distances to those objects when assessing the surrounding traffic situation.
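As one illustrative baseline (not the method developed in this project or in the papers below), lane markings in a monocular camera frame often appear as near-straight lines, and the classical way to detect several such lines at once is a Hough transform over edge pixels. The sketch below implements the Hough voting step in plain NumPy and runs it on a synthetic edge image containing two lane-like lines; all names and the toy image are assumptions for illustration.

```python
import numpy as np

def hough_lines(edges, num_thetas=180):
    """Vote in (rho, theta) space for every edge pixel.

    edges: 2-D boolean array marking edge pixels.
    Returns the accumulator and the rho/theta axes. Each line in the
    image shows up as a peak at its (signed distance, angle) bin.
    """
    h, w = edges.shape
    diag = int(np.ceil(np.hypot(h, w)))                 # max possible |rho|
    thetas = np.deg2rad(np.arange(num_thetas))          # angles 0..179 deg
    rhos = np.arange(-diag, diag + 1)                   # signed distances
    acc = np.zeros((len(rhos), num_thetas), dtype=np.int64)
    ys, xs = np.nonzero(edges)
    for t_idx, theta in enumerate(thetas):
        # rho = x*cos(theta) + y*sin(theta) for every edge pixel at once
        r = np.round(xs * np.cos(theta) + ys * np.sin(theta)).astype(int)
        np.add.at(acc, (r + diag, t_idx), 1)            # accumulate votes
    return acc, rhos, thetas

# Synthetic "edge image" with two straight, lane-like lines (a stand-in
# for the output of an edge detector on a dashcam frame).
img = np.zeros((100, 100), dtype=bool)
for y in range(100):
    img[y, y] = True          # line x = y
    img[y, 99 - y] = True     # mirrored line x + y = 99
acc, rhos, thetas = hough_lines(img)
r_idx, t_idx = np.unravel_index(np.argmax(acc), acc.shape)
print(rhos[r_idx], np.rad2deg(thetas[t_idx]))
```

In a real pipeline this voting step would follow an edge detector and be restricted to a region of interest below the horizon, and the top accumulator peaks (one per lane boundary) would be converted back to line segments; detecting multiple lanes amounts to keeping several peaks rather than just the maximum.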
Related Papers
- Yukihiro Tsukamoto, Tatsuya Amano, Akihito Hiromori, Hirozumi Yamaguchi and Teruo Higashino, "Road Segment Re-Identification in Dashcam Videos", Proceedings of the 14th International Workshop on Selected Topics in Mobile and Wireless Computing (STWiMob 2021), pp. 19-24
- Yukihiro Tsukamoto, Masahiro Ishizaki, Akihito Hiromori, Hirozumi Yamaguchi and Teruo Higashino, "Multi-Lane Detection and Tracking Using Vision for Traffic Situation Awareness", Proceedings of the 16th IEEE International Conference on Wireless and Mobile Computing, Networking and Communications (IEEE WiMob2020)