Learning Action Models for the Improved Execution of Navigation Plans (bibtex)
by T Belker, M Beetz and A Cremers
Abstract:
Most state-of-the-art navigation systems for autonomous service robots decompose navigation into global navigation planning and local reactive navigation. While the methods for navigation planning and local navigation themselves are well understood, the plan execution problem, the problem of how to generate and parameterize local navigation tasks from a given navigation plan, is largely unsolved. This article describes how a robot can autonomously learn to execute navigation plans. We formalize the problem as a Markov Decision Process (MDP) and derive a decision theoretic action selection function from it. The action selection function employs models of the robot's navigation actions, which are autonomously acquired from experience using neural network or regression tree learning algorithms. We show, both in simulation and on a RWI B21 mobile robot, that the learned models together with the derived action selection function achieve competent navigation behavior.
Reference:
Learning Action Models for the Improved Execution of Navigation Plans (T Belker, M Beetz and A Cremers), In Robotics and Autonomous Systems, volume 38, number 3–4, pages 137–148, March 2002.
Bibtex Entry:
@article{belker_learning_2002,
 author = {T Belker and M Beetz and A Cremers},
 title = {Learning Action Models for the Improved Execution of Navigation Plans},
 journal = {Robotics and Autonomous Systems},
 year = {2002},
 volume = {38},
 pages = {137--148},
 number = {3--4},
 month = {mar},
 abstract = {Most state-of-the-art navigation systems for autonomous service robots
	decompose navigation into global navigation planning and local reactive
	navigation. While the methods for navigation planning and local navigation
	themselves are well understood, the plan execution problem, the problem
	of how to generate and parameterize local navigation tasks from a
	given navigation plan, is largely unsolved. This article describes
	how a robot can autonomously learn to execute navigation plans. We
	formalize the problem as a Markov Decision Process ({MDP}) and derive
	a decision theoretic action selection function from it. The action
	selection function employs models of the robot's navigation actions,
	which are autonomously acquired from experience using neural network
	or regression tree learning algorithms. We show, both in simulation
	and on a {RWI} B21 mobile robot, that the learned models together
	with the derived action selection function achieve competent navigation
	behavior.},
}