Learning Structured Reactive Navigation Plans from Executing MDP policies
by M Beetz and T Belker
Abstract:
Autonomous robots, such as robot office couriers, need navigation routines that support flexible task execution and effective action planning. This paper describes XfrmLearn, a system that learns structured symbolic navigation plans. Given a navigation task, XfrmLearn learns to structure continuous navigation behavior and represents the learned structure as compact and transparent plans. The structured plans are obtained by starting with monolithic default plans that are optimized for average performance and adding subplans to improve the navigation performance for the given task. Compactness is achieved by incorporating only subplans that achieve significant performance gains. The resulting plans support action planning and opportunistic task execution. XfrmLearn is implemented and extensively evaluated on an autonomous mobile robot.
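The abstract outlines XfrmLearn's learn-and-test cycle: start from a monolithic default plan, propose candidate subplans from observed navigation behavior, and keep a candidate only if it produces a significant performance gain. The following Python sketch illustrates that cycle under stated assumptions; the plan representation, the stubbed run_navigation and propose_subplans functions, and the crude significance test are hypothetical placeholders, not the paper's implementation.

    import random
    import statistics

    def run_navigation(plan, task):
        """Placeholder for executing a navigation plan on a task and
        returning a duration. XfrmLearn would use real (or simulated)
        robot runs; this stub just makes the loop below runnable."""
        base = 60.0
        # Assumption: each accepted subplan shaves off a few seconds, with noise.
        return base - 4.0 * len(plan["subplans"]) + random.gauss(0.0, 2.0)

    def propose_subplans(plan, traces):
        """Placeholder for analyzing recorded behavior and proposing
        candidate subplans for stretches that look improvable."""
        return [{"name": f"candidate-{len(plan['subplans'])}"}]

    def significant_improvement(before, after, min_gain=2.0):
        """Crude significance test: require the mean gain to exceed both
        min_gain and the pooled spread of the samples."""
        gain = statistics.mean(before) - statistics.mean(after)
        spread = statistics.stdev(before + after)
        return gain > max(min_gain, spread)

    def xfrmlearn(task, iterations=5, runs_per_eval=10):
        # Start with a monolithic default plan optimized for average performance.
        plan = {"default": "monolithic-navigation-plan", "subplans": []}
        for _ in range(iterations):
            before = [run_navigation(plan, task) for _ in range(runs_per_eval)]
            for candidate in propose_subplans(plan, before):
                trial = {"default": plan["default"],
                         "subplans": plan["subplans"] + [candidate]}
                after = [run_navigation(trial, task) for _ in range(runs_per_eval)]
                # Keep a subplan only if it yields a significant gain,
                # which is how the abstract says compactness is achieved.
                if significant_improvement(before, after):
                    plan = trial
        return plan

    if __name__ == "__main__":
        print(xfrmlearn(task="office-courier-delivery"))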
Reference:
Learning Structured Reactive Navigation Plans from Executing MDP policies (M Beetz and T Belker), In Proceedings of the 5th International Conference on Autonomous Agents, 2001. 
BibTeX Entry:
@inproceedings{beetz_learning_2001,
  author    = {M Beetz and T Belker},
  title     = {Learning Structured Reactive Navigation Plans from Executing {MDP} policies},
  booktitle = {Proceedings of the 5th International Conference on Autonomous Agents},
  year      = {2001},
  pages     = {19--20},
  abstract  = {Autonomous robots, such as robot office couriers, need navigation routines that support flexible task execution and effective action planning. This paper describes {XfrmLearn}, a system that learns structured symbolic navigation plans. Given a navigation task, {XfrmLearn} learns to structure continuous navigation behavior and represents the learned structure as compact and transparent plans. The structured plans are obtained by starting with monolithic default plans that are optimized for average performance and adding subplans to improve the navigation performance for the given task. Compactness is achieved by incorporating only subplans that achieve significant performance gains. The resulting plans support action planning and opportunistic task execution. {XfrmLearn} is implemented and extensively evaluated on an autonomous mobile robot.},
}