Accurate and fast identification of all pedestrians, motor vehicles, and non-motor vehicles in a variety of complex road environments.
Accurate analysis of the attributes and characteristics of vehicles in the scene, including motion status, direction, trajectory, and light signals.
Accurate identification of the attributes and characteristics of pedestrians in the scene, including movements and body orientation.
Accurate and fast recognition of the attributes of different roads and lanes in complex road environments.
Accurate and fast pixel-level semantic labeling of the scene to support scene modelling.
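Pixel-level semantic labeling, as described above, amounts to assigning each pixel the class whose score is highest. A minimal sketch follows; the class names, the `label_pixels` helper, and the dict-of-score-maps input format are illustrative assumptions, not the document's actual model.

```python
# Hypothetical sketch: per-pixel semantic labeling as an argmax over
# per-class score maps (e.g. the output of a segmentation network).
CLASSES = ["road", "vehicle", "pedestrian"]

def label_pixels(score_maps):
    """score_maps: dict mapping class name -> 2D list of scores,
    all of the same height and width.
    Returns a 2D list of class names, one label per pixel."""
    first = next(iter(score_maps.values()))
    height, width = len(first), len(first[0])
    labels = []
    for y in range(height):
        row = []
        for x in range(width):
            # Pick the class with the highest score at this pixel.
            row.append(max(CLASSES, key=lambda c: score_maps[c][y][x]))
        labels.append(row)
    return labels
```

In a real system the score maps would come from a convolutional segmentation network and the argmax would run on GPU; the per-pixel decision rule is the same.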
Accurate and fast recognition of traffic signs and lights in complex road environments and understanding their meanings.
Relying mainly on visual information, integrated with multiple low-cost sensors, to achieve highly accurate real-time positioning in large-scale urban scenes.
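Fusing a motion estimate with noisy position fixes, as the low-cost multi-sensor positioning above requires, is classically done with a Kalman filter. The one-dimensional sketch below is a simplified illustration, not the document's actual localization pipeline; all noise values are assumed.

```python
# Minimal 1-D Kalman filter sketch: fuse a dead-reckoned motion increment
# (e.g. from wheel odometry) with a noisy position fix (e.g. low-cost GNSS).
def kalman_step(x, p, u, z, q, r):
    """One predict/update cycle.
    x, p: prior position estimate and its variance
    u: motion increment from odometry; q: process noise variance
    z: measured position; r: measurement noise variance"""
    # Predict: propagate the state with the motion model, inflate uncertainty.
    x_pred = x + u
    p_pred = p + q
    # Update: blend prediction and measurement by their relative certainty.
    k = p_pred / (p_pred + r)          # Kalman gain in [0, 1]
    x_new = x_pred + k * (z - x_pred)
    p_new = (1.0 - k) * p_pred
    return x_new, p_new
```

A full localization stack would track a multi-dimensional state (position, heading, velocity) with matrix forms of the same predict/update equations, but the fusion principle is identical.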
Achieving 3D geometric reconstruction and texture mapping for large-scale urban scenes by using multi-perspective video, radar, satellite positioning, and inertial navigation systems to provide high-quality 3D map data for autonomous driving.
Leveraging accurate sensing results to make sound, human-centered decisions, plan reasonable driving routes, and provide comfortable vehicle control.
Fast deployment of neural networks on FPGA platforms with efficient model compression and acceleration technology to achieve highly flexible, low-cost autonomous driving technology.
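One common model-compression step before FPGA deployment is uniform low-bit weight quantization. The sketch below shows the basic idea for signed 8-bit weights; the `quantize`/`dequantize` helpers and the shared-scale scheme are illustrative assumptions, not the document's specific compression method.

```python
# Hypothetical sketch of uniform signed 8-bit weight quantization:
# map float weights to small integers plus one shared scale factor,
# so the FPGA can compute with cheap integer arithmetic.
def quantize(weights, bits=8):
    """Return (integer weights, scale) for a list of float weights."""
    qmax = 2 ** (bits - 1) - 1         # 127 for 8-bit signed
    scale = max(abs(w) for w in weights) / qmax or 1.0  # avoid scale 0
    q = [int(round(w / scale)) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from integers and scale."""
    return [v * scale for v in q]
```

Production flows typically add per-channel scales, activation quantization, and calibration data, but the round-trip error bound of roughly half a quantization step is the same trade-off at any scale.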