泡泡一分钟:Learning Motion Planning Policies in Uncertain Environments through Repeated Task Executions

Contributor: 张宁

Link: https://pan.baidu.com/s/1TlSJn0fXuKEwZ9vts4xA6g
Extraction code: jwsd

Florence Tsang, Ryan A. Macdonald, and Stephen L. Smith

The ability to navigate uncertain environments from a start to a goal location is a necessity in many applications. While there are many reactive algorithms for online replanning, there has been little investigation into leveraging past executions of the same navigation task to improve future executions. In this work, we first formalize this problem by introducing the Learned Reactive Planning Problem (LRPP). Second, we propose a method to capture these past executions and from them determine a motion policy to handle obstacles that the robot has seen before. Third, we show from our experiments that using this policy can significantly reduce the execution cost over using reactive algorithms alone.
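The core idea of leveraging repeated executions can be illustrated with a small sketch: log which edges of a roadmap were blocked on past runs, estimate per-edge blockage probabilities, and plan the next run on expected cost rather than nominal cost. This is only a minimal illustrative toy, not the paper's LRPP formulation or policy; the class name, the penalty term, and the Laplace smoothing are all assumptions made for the example.

```python
import heapq
from collections import defaultdict


class ExperienceGuidedPlanner:
    """Toy sketch of learning from repeated task executions (NOT the
    paper's LRPP algorithm): keep per-edge blockage counts across runs
    and plan the next run with expected edge costs."""

    def __init__(self, edges):
        # edges: dict mapping (u, v) -> nominal traversal cost (undirected)
        self.cost = {}
        for (u, v), c in edges.items():
            self.cost[(u, v)] = c
            self.cost[(v, u)] = c
        self.blocked = defaultdict(int)  # times each edge was observed blocked
        self.seen = defaultdict(int)     # times each edge was observed at all

    def record_execution(self, observations):
        """observations: iterable of ((u, v), was_blocked) from one run."""
        for (u, v), was_blocked in observations:
            for e in ((u, v), (v, u)):
                self.seen[e] += 1
                self.blocked[e] += int(was_blocked)

    def expected_cost(self, e, penalty=10.0):
        # Nominal cost plus a detour penalty weighted by the empirical
        # blockage probability (Laplace-smoothed so unseen edges get 1/2).
        p_block = (self.blocked[e] + 1) / (self.seen[e] + 2)
        return self.cost[e] + penalty * p_block

    def plan(self, start, goal):
        """Dijkstra's algorithm over expected edge costs."""
        adj = defaultdict(list)
        for (u, v) in self.cost:
            adj[u].append(v)
        dist, prev = {start: 0.0}, {}
        pq = [(0.0, start)]
        while pq:
            d, u = heapq.heappop(pq)
            if u == goal:
                break
            if d > dist.get(u, float("inf")):
                continue
            for v in adj[u]:
                nd = d + self.expected_cost((u, v))
                if nd < dist.get(v, float("inf")):
                    dist[v], prev[v] = nd, u
                    heapq.heappush(pq, (nd, v))
        path, node = [], goal
        while node != start:
            path.append(node)
            node = prev[node]
        path.append(start)
        return path[::-1]
```

With a short route s-m-g that keeps turning out blocked and a longer route s-n-g that does not, the planner initially picks the short route, but after a couple of recorded executions its expected-cost estimates make it prefer the reliable longer route, which is the qualitative behavior the abstract describes.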

