3-D recognition technology has been attracting attention and developing rapidly in fields such as obstacle detection for automated driving and robotics. A typical sensor used for 3-D recognition is LiDAR (Light Detection And Ranging), which determines the distance to an object and its shape (a 3-D point cloud) by measuring the Time of Flight (ToF) of emitted light between the sensor and the object.
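As a minimal sketch of the ToF principle, the distance follows directly from the round-trip travel time of the light pulse; the function and variable names below are ours for illustration, not part of any particular LiDAR API.

```python
# Illustrative sketch of ToF ranging: the pulse travels to the object and
# back, so the one-way distance is half the round-trip path length.
SPEED_OF_LIGHT = 299_792_458.0  # m/s

def tof_distance(round_trip_time_s: float) -> float:
    """Distance to the reflecting surface from the measured round-trip time."""
    return SPEED_OF_LIGHT * round_trip_time_s / 2.0

# Example: a round trip of about 66.7 ns corresponds to roughly 10 m.
print(tof_distance(66.7e-9))  # ~10.0
```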
Initially, LiDAR sensors were expensive and bulky, but advances in research and development have made them inexpensive and small enough to be integrated into smartphones. In this research, we are working on a method for detecting obstacles from a 3-D point cloud that runs on a wearable device incorporating a miniaturized LiDAR sensor.
The realization of devices equipped with this technique would have a significant impact on mobility assistance for the visually impaired. Currently, most visually impaired people rely on white canes, guide dogs, or sighted assistants to get around. However, a white cane limits the range of obstacle detection, making mobility in urban areas stressful, while guide dogs and sighted assistants are costly. We believe that such a device would enable obstacle detection over a wider area than a white cane and would also enable voice notification by analyzing the surrounding environment.
To achieve this, we need to speed up existing methods and remove the noise specific to data captured by wearable devices. Most existing object detection methods assume high-performance computers and cannot run on wearable devices with their limited processing power. In addition, wearable devices shake as the wearer moves, so some frames may not capture the surrounding environment in the direction of movement. Such noisy data must be eliminated using the inertial sensors mounted on the device, as sketched below. We are currently working on these issues and hope to help realize a barrier-free society.
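The sketch below illustrates one possible form this filtering could take, assuming a gyroscope reading is recorded alongside each point cloud frame; the threshold, data layout, and function names are assumptions for illustration and not our final method.

```python
import numpy as np

# Assumed threshold: angular speed above which a frame is treated as "shaken".
SHAKE_THRESHOLD_RAD_S = 1.5

def is_stable(gyro_samples: np.ndarray) -> bool:
    """gyro_samples: (N, 3) angular velocities [rad/s] recorded during one frame."""
    peak_angular_speed = np.linalg.norm(gyro_samples, axis=1).max()
    return peak_angular_speed < SHAKE_THRESHOLD_RAD_S

def filter_frames(point_cloud_frames, gyro_per_frame):
    """Keep only the point cloud frames captured while the device was stable."""
    return [frame for frame, gyro in zip(point_cloud_frames, gyro_per_frame)
            if is_stable(gyro)]
```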
Published Paper
- Shota Yamada, Hirozumi Yamaguchi, and Hamada Rizk, "An Accurate Point Cloud-Based Human Identification Using Micro-Size LiDAR," in Proceedings of the 2022 IEEE International Workshop on Pervasive Information Flow (co-located with IEEE PerCom 2022), pp. 569-574.