Learning Situation Dependent Success Rates Of Actions In A RoboCup Scenario
by S Buck and M Riedmiller
Abstract:
A quickly changing, unpredictable environment complicates autonomous decision making in a system of mobile robots. To simplify action selection, we suggest a suitable reduction of the decision space by restricting the number of executable actions the agent can choose from. We use supervised neural learning to automatically learn success rates of actions and thereby facilitate decision making. To determine probabilities of success, each agent relies on its own sensory data. We show that with our approach it is possible to compute probabilities of success close to the real success rates of actions, and we further report results from games of a RoboCup simulation team based on this approach.
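The abstract's core idea, learning a mapping from an agent's sensory situation to an action's probability of success with a supervised learner, can be illustrated with a minimal sketch. The feature names, the synthetic ground-truth success model, and the use of a logistic-regression model trained by gradient descent are all illustrative assumptions, not the paper's actual network or features:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical situation features (invented for illustration):
# distance to ball, angle to goal, distance of nearest opponent.
n = 2000
X = rng.uniform(0.0, 1.0, size=(n, 3))

# Synthetic "true" success rate of an action, e.g. a pass succeeds more
# often when the opponent is far and the ball is close (invented ground truth).
true_p = 1.0 / (1.0 + np.exp(-(3.0 * X[:, 2] - 4.0 * X[:, 0] + 0.5)))
y = (rng.uniform(size=n) < true_p).astype(float)  # observed success/failure labels

# A single-layer logistic model trained by full-batch gradient descent,
# standing in for the paper's supervised neural learning.
w = np.zeros(3)
b = 0.0
lr = 0.5
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= lr * (X.T @ (p - y)) / n
    b -= lr * np.mean(p - y)

# The learned success probabilities should track the true rates closely.
p_hat = 1.0 / (1.0 + np.exp(-(X @ w + b)))
mae = np.mean(np.abs(p_hat - true_p))
print(f"mean abs error vs. true success rate: {mae:.3f}")
```

An agent could then rank its executable actions by predicted success probability and restrict its choice to the most promising ones, which is the decision-space reduction the abstract describes.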
Reference:
Learning Situation Dependent Success Rates Of Actions In A RoboCup Scenario (S Buck and M Riedmiller), In Pacific Rim International Conference on Artificial Intelligence, 2000.
Bibtex Entry:
@inproceedings{buck_learning_2000,
  author    = {S Buck and M Riedmiller},
  title     = {Learning Situation Dependent Success Rates Of Actions In A {RoboCup} Scenario},
  booktitle = {Pacific Rim International Conference on Artificial Intelligence},
  year      = {2000},
  pages     = {809},
  abstract  = {A quickly changing, unpredictable environment complicates autonomous decision making in a system of mobile robots. To simplify action selection, we suggest a suitable reduction of the decision space by restricting the number of executable actions the agent can choose from. We use supervised neural learning to automatically learn success rates of actions and thereby facilitate decision making. To determine probabilities of success, each agent relies on its own sensory data. We show that with our approach it is possible to compute probabilities of success close to the real success rates of actions, and we further report results from games of a {RoboCup} simulation team based on this approach.},
}