Image Understanding and Knowledge-Based Systems
TUM School of Computation, Information and Technology
Technical University of Munich


Informatik IX

Image Understanding and Knowledge-Based Systems

Boltzmannstrasse 3
85748 Garching

info@iuks.in.tum.de




Tailoring Robot Actions to Task Contexts using Action Models (bibtex)
by F Stulp
Abstract:
In motor control, high-level goals must be expressed in terms of low-level motor commands. An effective approach to bridge this gap, widespread in both nature and robotics, is to acquire a set of temporally extended actions, each designed for specific goals and task contexts. An action selection module then selects the appropriate action in a given situation. In this approach, high-level goals are mapped to actions, and actions produce streams of motor commands. The first mapping is often ambiguous, as several actions or action parameterizations can achieve the same goal. Instead of choosing an arbitrary action or parameterization, the robot should select those that best fulfill some pre-specified requirement, such as minimal execution duration, successful execution, or coordination of actions with others. The key to being able to perform this selection lies in prediction. By predicting the performance of different actions and action parameterizations, the robot can also predict which of them best meets the requirement. Action models, which have many similarities with human forward models, enable robots to make such predictions. In this dissertation, we introduce a computational model for the acquisition and application of action models. Robots first learn action models from observed experience, and then use them to optimize their performance with the following methods: 1) Subgoal refinement, which enables robots to optimize actions in action sequences by predicting which action parameterization leads to the best performance. 2) Condition refinement and subgoal assertion, with which robots can adapt existing actions to novel task contexts and goals by predicting when action execution will fail. 3) Implicit coordination, in which multiple robots globally coordinate their actions by locally making predictions about the performance of other robots.
The acquisition and application of action models have been realized and empirically evaluated in three robotic domains: the Pioneer robots of our RoboCup mid-size league team, a simulated B21 in a kitchen environment, and a PowerCube robot arm. The main principle behind this approach is that, in robot controller design, the knowledge that robots learn themselves from observed experience complements the abstract knowledge that humans specify.
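The core selection mechanism the abstract describes can be illustrated in a few lines. The following is a minimal, hypothetical sketch (not the thesis code): a learned action model is stood in for by a simple analytic cost function that predicts the execution duration of a two-action sequence, and subgoal refinement picks the intermediate subgoal whose predicted total duration is lowest. All names and the cost model are illustrative assumptions.

```python
import math

def predicted_duration(start, subgoal, goal):
    """Stand-in 'action model': predict the total duration of a two-action
    sequence (start -> subgoal -> goal) as the path length plus a penalty
    for the turn angle at the subgoal (sharper turns cost more time)."""
    def dist(a, b):
        return math.hypot(b[0] - a[0], b[1] - a[1])
    def turn_angle(a, b, c):
        v1 = (b[0] - a[0], b[1] - a[1])
        v2 = (c[0] - b[0], c[1] - b[1])
        dot = v1[0] * v2[0] + v1[1] * v2[1]
        norms = math.hypot(*v1) * math.hypot(*v2)
        return math.acos(max(-1.0, min(1.0, dot / norms))) if norms else 0.0
    return dist(start, subgoal) + dist(subgoal, goal) \
        + 2.0 * turn_angle(start, subgoal, goal)

def refine_subgoal(start, goal, candidates):
    """Subgoal refinement: among candidate intermediate subgoals, select the
    parameterization with the best (lowest) predicted performance."""
    return min(candidates, key=lambda sg: predicted_duration(start, sg, goal))

start, goal = (0.0, 0.0), (4.0, 0.0)
candidates = [(2.0, 2.0), (2.0, 0.5), (2.0, 4.0)]
best = refine_subgoal(start, goal, candidates)  # -> (2.0, 0.5)
```

In the dissertation the prediction comes from models learned from observed experience rather than a hand-written cost function; the selection step, however, has exactly this shape: evaluate the predicted performance of each candidate parameterization and commit to the best one.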
Reference:
Tailoring Robot Actions to Task Contexts using Action Models (F Stulp), PhD thesis, Technische Universität München, 2007. 
Bibtex Entry:
@phdthesis{stulp_tailoring_2007,
 author = {F Stulp},
 title = {Tailoring Robot Actions to Task Contexts using Action Models},
 school = {Technische Universität München},
 year = {2007},
 abstract = {In motor control, high-level goals must be expressed in terms of low-level
	motor commands. An effective approach to bridge this gap, widespread
	in both nature and robotics, is to acquire a set of temporally extended
	actions, each designed for specific goals and task contexts. An action
	selection module then selects the appropriate action in a given situation.
	In this approach, high-level goals are mapped to actions, and actions
	produce streams of motor commands. The first mapping is often ambiguous,
	as several actions or action parameterizations can achieve the same
	goal. Instead of choosing an arbitrary action or parameterization,
	the robot should select those that best fulfill some pre-specified
	requirement, such as minimal execution duration, successful execution,
	or coordination of actions with others. The key to being able to
	perform this selection lies in prediction. By predicting the performance
	of different actions and action parameterizations, the robot can
	also predict which of them best meets the requirement. Action models,
	which have many similarities with human forward models, enable robots
	to make such predictions. In this dissertation, we introduce a computational
	model for the acquisition and application of action models. Robots
	first learn action models from observed experience, and then use
	them to optimize their performance with the following methods: 1)
	\emph{Subgoal refinement}, which enables robots to optimize actions
	in action sequences by predicting which action parameterization
	leads to the best performance. 2) \emph{Condition refinement} and
	\emph{subgoal assertion}, with which robots can adapt existing
	actions to novel task contexts and goals by predicting when action
	execution will fail. 3) \emph{Implicit coordination}, in which
	multiple robots globally coordinate their actions by locally making
	predictions about the performance of other robots. The acquisition
	and application of action models have been realized and empirically
	evaluated in three robotic domains: the Pioneer robots of our
	{RoboCup} mid-size league team,
	a simulated B21 in a kitchen environment, and a {PowerCube} robot
	arm. The main principle behind this approach is that in robot controller
	design, knowledge that robots learn themselves from observed experience
	complements well the abstract knowledge that humans specify.},
 url = {http://mediatum2.ub.tum.de/node?id=617105},
}
