- Core Technology
- Building on a foundation of proprietary technologies and a core “brain” built on its deep learning platform, SenseTime has rapidly opened up AI applications in multiple vertical scenarios.
- Technical Capabilities
- 01 Camera Perception
- 02 LiDAR Perception
- 03 Multi-Sensor Fusion
- 04 Behavior Prediction of Vehicle / Pedestrian / Bicycle
- 05 Path Planning, Decision Making and Control
- 06 HD Map and Localization
- 07 Model Deployment with Mainstream Chips
Camera Perception
Building on SenseTime’s experience in computer vision, our camera perception technology accurately detects lane lines, road edges, drivable areas, vehicles, pedestrians, traffic signs, and traffic lights from a single monocular camera.
LiDAR Perception
Our LiDAR perception algorithms support multiple LiDAR types and application scenarios, providing accurate detection and tracking of traffic participants and unknown objects in settings such as autonomous driving and V2X.
Multi-Sensor Fusion
Our multi-sensor fusion system supports different combinations of sensors to deliver lower latency, higher precision, and better fault tolerance.
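The precision and fault-tolerance benefits of fusion can be illustrated with a minimal sketch: combining position estimates from two sensors by inverse-variance weighting, the basic idea behind many fusion pipelines. All names and values here are illustrative assumptions, not SenseTime's actual implementation.

```python
def fuse(measurements):
    """Fuse (value, variance) pairs into a single estimate.

    The fused variance is never larger than the smallest input
    variance, which is why fusion improves precision; a failed or
    noisy sensor (large variance) is automatically down-weighted,
    which is the fault-tolerance property.
    """
    inv_var = sum(1.0 / var for _, var in measurements)
    value = sum(v / var for v, var in measurements) / inv_var
    return value, 1.0 / inv_var

# Hypothetical readings of the same range: a coarse camera estimate
# and a precise LiDAR estimate (metres, variance in m^2).
camera = (10.4, 4.0)
lidar = (10.1, 0.25)
value, var = fuse([camera, lidar])
```

The fused estimate lands close to the more precise LiDAR reading, with a variance below either sensor's alone.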
Behavior Prediction of Vehicle / Pedestrian / Bicycle
It accurately predicts the behavior of vehicles, pedestrians, and cyclists in complex traffic scenes, including turning, lane-changing, and road-crossing intentions, awareness of the surrounding environment, and multiple potential trajectories, providing reliable input for smarter planning and decision-making in autonomous driving.
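The "multiple potential trajectories" output can be sketched with a toy multi-hypothesis predictor: a constant-velocity motion model rolled out under discrete manoeuvre hypotheses (keep lane, turn left, turn right). Real predictors are learned models; this sketch, with assumed names and parameters, only illustrates the output structure.

```python
import math

def predict_trajectories(x, y, speed, heading, horizon=3.0, dt=0.5):
    """Return {manoeuvre: [(x, y), ...]} over the prediction horizon."""
    # Assumed heading change per step for each hypothesis (radians).
    manoeuvres = {"keep": 0.0, "left": 0.1, "right": -0.1}
    out = {}
    for name, dtheta in manoeuvres.items():
        px, py, th = x, y, heading
        traj = []
        t = 0.0
        while t < horizon:
            th += dtheta
            px += speed * dt * math.cos(th)
            py += speed * dt * math.sin(th)
            traj.append((px, py))
            t += dt
        out[name] = traj
    return out

# One hypothetical road user heading along the x-axis at 5 m/s.
trajs = predict_trajectories(0.0, 0.0, speed=5.0, heading=0.0)
```

A downstream planner would score each hypothesis by likelihood and plan against the resulting set of futures rather than a single guess.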
Path Planning, Decision Making and Control
By integrating perception results, vehicle motion, and the surrounding environment, the technology enables safe, smart, and smooth decision-making and path planning in complex driving scenarios, together with accurate control of the vehicle.
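As a sketch of the final control stage, a textbook PID controller tracking a target speed is shown below. This is a generic illustration of closed-loop vehicle control; the gains, interfaces, and toy plant model are assumptions, not SenseTime's controller.

```python
class PID:
    """Minimal PID controller for longitudinal (speed) control."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def step(self, target, measured, dt):
        """Return a control command (e.g. throttle) for one time step."""
        error = target - measured
        self.integral += error * dt
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv

# Toy plant: speed responds proportionally to the command.
pid = PID(kp=0.8, ki=0.2, kd=0.05)
speed = 0.0
for _ in range(200):  # simulate 20 s at 10 Hz
    speed += pid.step(10.0, speed, dt=0.1) * 0.1
```

After the simulated 20 seconds the speed has settled close to the 10 m/s target; in a real stack this loop would run against the planner's trajectory rather than a fixed setpoint.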
HD Map and Localization
Using multi-sensor fusion technologies, we build city-scale high-definition 3D maps comprising point cloud, localization, semantic, and routing layers. Based on these HD maps, we provide high-precision localization in real time.
Model Deployment with Mainstream Chips
Built on SenseParrots, our proprietary AI deep learning platform, together with a self-developed FPGA deployment toolchain and hardware accelerators, our algorithms can be deployed on a variety of mainstream SoCs with great flexibility and ease, promoting the mass production of autonomous driving technology.