Learning to shoot goals, Analysing the Learning Process and the Resulting Policies (bibtex)
by M Geipel and M Beetz
Abstract:
Reinforcement learning is a very general unsupervised learning mechanism. Due to its generality, reinforcement learning does not scale well to tasks that involve inferring subtasks, in particular when the subtasks change dynamically and the environment is adversarial. One of the most challenging reinforcement learning tasks so far has been the 3-versus-2 keepaway task in the RoboCup simulation league. In this paper we apply reinforcement learning to an even more challenging task: attacking the opponent's goal. The main contribution of this paper is the empirical analysis of a portfolio of mechanisms for scaling reinforcement learning towards learning attack policies in simulated robot soccer.
Reference:
Learning to shoot goals, Analysing the Learning Process and the Resulting Policies (M Geipel and M Beetz), In RoboCup-2006: Robot Soccer World Cup X (G Lakemeyer, E Sklar, D Sorenti, T Takahashi, eds.), Springer Verlag, Berlin, 2006. (to be published)
Bibtex Entry:
@inproceedings{geipel_learning_2006,
 author = {M Geipel and M Beetz},
 title = {Learning to shoot goals, Analysing the Learning Process and the Resulting
	Policies},
 booktitle = {{RoboCup-2006:} Robot Soccer World Cup X},
 year = {2006},
 editor = {Lakemeyer, Gerhard and Sklar, Elizabeth and Sorenti, Domenico and
	Takahashi, Tomoichi},
 publisher = {Springer Verlag, Berlin},
 abstract = {Reinforcement learning is a very general unsupervised learning mechanism.
	Due to its generality, reinforcement learning does not scale well
	to tasks that involve inferring subtasks, in particular when the
	subtasks change dynamically and the environment is adversarial.
	One of the most challenging reinforcement learning tasks so far has
	been the 3-versus-2 keepaway task in the {RoboCup} simulation league.
	In this paper we apply reinforcement learning to an even more challenging
	task: attacking the opponent's goal. The main contribution of this
	paper is the empirical analysis of a portfolio of mechanisms for
	scaling reinforcement learning towards learning attack policies in
	simulated robot soccer.},
}