Adaptive motion planning in bin-picking with object uncertainties
- Publication Type: Conference Proceeding
- Published in: International Conference on Control, Automation and Systems, 2017-October, pp. 921-928
- Issue Date:
© 2017 Institute of Control, Robotics and Systems - ICROS. Motion planning for bin-picking under object uncertainties requires either re-grasping the picked object or an online sensor system. The latter is advantageous in computational time, since no time is wasted on an extra pick-and-place action, but it places additional requirements on the motion planner, as the target position may change on the fly. This paper addresses that problem with a state-adjusting Partially Observable Markov Decision Process (POMDP), whose state space is modified between runs to better fit previously solved problems. The approach relies on a set of waypoints that encode which parts of the state space may contain feasible solutions. Waypoints are pushed around the state space by observing which states in their neighborhood lead to successfully solved problems. Two bin-picking scenarios are modeled with the proposed method: one in which the system receives an object pose update while moving toward the place position, and another in which the update identifies the grasped object's type from a fixed number of classes, each to be deposited in a different place. With an online POMDP solver, the state-adjusting POMDP improves execution times by up to 28% compared to a non-adjusted POMDP.
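The abstract's core idea, pushing waypoints toward regions of the state space where problems were successfully solved, can be illustrated with a minimal sketch. This is not the paper's implementation; the function name, 2D state representation, and the `step` and `radius` parameters are assumptions chosen for illustration:

```python
def adjust_waypoints(waypoints, successes, step=0.25, radius=1.0):
    """Push each waypoint toward nearby states that led to solved problems.

    waypoints: list of (x, y) tuples marking candidate regions of the
    state space; successes: list of (x, y) states observed to yield
    feasible solutions. All names and parameters are hypothetical.
    """
    adjusted = []
    for wx, wy in waypoints:
        # Keep only successful states within this waypoint's neighborhood.
        nearby = [(sx, sy) for sx, sy in successes
                  if (sx - wx) ** 2 + (sy - wy) ** 2 <= radius ** 2]
        if nearby:
            # Move a fraction of the way toward the neighborhood centroid.
            mx = sum(s[0] for s in nearby) / len(nearby)
            my = sum(s[1] for s in nearby) / len(nearby)
            adjusted.append((wx + step * (mx - wx),
                             wy + step * (my - wy)))
        else:
            adjusted.append((wx, wy))  # No evidence: leave waypoint as-is.
    return adjusted
```

Repeating this between runs concentrates waypoints, and hence the modified state space, around regions that historically contained feasible solutions, which is the mechanism the abstract credits for the reported speed-up.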