Deep learning and reinforcement learning methods allow the robot arm to learn autonomously. Vision-based multi-object manipulation tasks (such as pick-and-place and parts assembly) effectively reduce hardware and system-integration costs. The model can also be trained on samples generated in a simulation environment and then transferred to the real environment, reducing on-site debugging overhead. This technology enhances the flexibility of robots in industrial scenarios such as customized assembly lines in manufacturing and multi-category object sorting systems in logistics.
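One common way to make a simulation-trained policy transfer to the real environment is domain randomization: varying the simulated physics and sensing parameters each episode so the learned policy does not overfit to one exact simulation. The sketch below illustrates the idea only; the parameter names, ranges, and the `reset_env`/`run_episode` callbacks are illustrative assumptions, not part of the system described above.

```python
import random

# Hypothetical sketch: per-episode domain randomization so a policy trained
# in simulation transfers better to the real robot. Parameter names and
# value ranges are illustrative assumptions.
def randomize_sim_params(rng=random):
    return {
        "object_mass_kg": rng.uniform(0.05, 0.5),    # vary object dynamics
        "friction_coeff": rng.uniform(0.4, 1.2),     # vary contact friction
        "camera_offset_mm": rng.uniform(-5.0, 5.0),  # vary sensor calibration
        "light_intensity": rng.uniform(0.5, 1.5),    # vary rendered lighting
    }

def train(num_episodes, reset_env, run_episode):
    # Each episode sees a slightly different simulated world.
    for _ in range(num_episodes):
        params = randomize_sim_params()
        env = reset_env(params)  # rebuild the simulated scene with new params
        run_episode(env)         # collect experience for the learner
```

In practice the randomized parameters would be fed into the simulator's scene description before each rollout; the ranges are tuned so that the real environment falls inside the randomized distribution.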
By analyzing 3D visual data, our system can accurately estimate the 6D poses of stacked objects in complex environments. Combined with collision detection and motion planning algorithms, the system guides the robot manipulator to grasp stacked objects in a specified way. This technology can be applied to areas such as flexible industrial assembly, machine tending, logistics order picking, palletization, and depalletization.
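Turning an estimated 6D pose into a grasp target is typically a chain of rigid-body transforms: the camera extrinsics, the estimated object pose in the camera frame, and a grasp pose annotated in the object's own frame. The following is a minimal sketch of that composition with 4x4 homogeneous matrices; the function names and frame labels are assumptions for illustration, not the system's actual API.

```python
import numpy as np

# Illustrative sketch: chain an estimated 6D object pose into a grasp target
# in the robot base frame, T_base_grasp = T_base_cam @ T_cam_obj @ T_obj_grasp.
def make_transform(R, t):
    """Build a 4x4 homogeneous transform from a 3x3 rotation and translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def grasp_pose_in_base(T_base_cam, T_cam_obj, T_obj_grasp):
    # T_base_cam: camera extrinsics (hand-eye calibration result)
    # T_cam_obj:  6D pose estimated from the 3D visual data
    # T_obj_grasp: grasp annotation defined in the object frame
    return T_base_cam @ T_cam_obj @ T_obj_grasp
```

The resulting pose would then be handed to the collision checker and motion planner, which validate and execute an approach trajectory toward it.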
Leveraging a robot simulation platform to flexibly modify the experimental environment allows fast data collection, which supports the development and evaluation of learning-based autonomous grasping algorithms. The platform is implemented with a modular structure, so key modules can be updated or replaced as requirements change. Key data recorded in the simulation can be saved for later use.
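The modular design described above can be sketched as small interchangeable interfaces: each key module (here, a grasp planner) implements a common contract so alternatives can be swapped in, and a recorder persists episode data for later use. All class and method names below are hypothetical, chosen only to illustrate the structure.

```python
import json
from abc import ABC, abstractmethod

# Hypothetical sketch of the modular structure: swap planners behind one
# interface, and save recorded episode data for later analysis.
class GraspPlanner(ABC):
    @abstractmethod
    def plan(self, observation):
        """Return a grasp pose for the observed scene."""

class TopDownPlanner(GraspPlanner):
    """One replaceable implementation: approach from directly above."""
    def plan(self, observation):
        x, y, z = observation["object_position"]
        # Placeholder policy: hover 10 cm above the object centroid.
        return {"position": [x, y, z + 0.1], "approach": "top_down"}

class EpisodeRecorder:
    """Collects key simulation data and saves it for further use."""
    def __init__(self):
        self.records = []

    def log(self, step, observation, action):
        self.records.append({"step": step, "obs": observation, "action": action})

    def save(self, path):
        with open(path, "w") as f:
            json.dump(self.records, f)
```

Because the platform only depends on the `GraspPlanner` interface, a learned policy can replace the heuristic planner without touching the rest of the pipeline, and the saved records can seed offline training or evaluation.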