Machine Control Using Radial Basis Value Functions and Inverse State Projection (bibtex)
by S. Buck, F. Stulp, M. Beetz and T. Schmitt
Abstract:
Typical real-world machine control tasks share characteristics that make them difficult to solve: their state spaces are high-dimensional and continuous, and it may be impossible to reach a satisfactory target state by exploration or human control. To overcome these problems, in this paper, we propose (1) the use of radial basis functions for value function approximation in continuous-space reinforcement learning and (2) the use of learned inverse projection functions for state space exploration. We apply our approach to path planning in dynamic environments and to an aircraft autolanding simulation, and evaluate its performance.
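Illustrative Example:
The sketch below (not taken from the paper) illustrates the first technique named in the abstract: a value function approximated as a linear combination of Gaussian radial basis features over a continuous state space, trained with a one-step temporal-difference update. The centers, width, learning rate, and grid layout are illustrative assumptions, not the configuration used by Buck et al.

import numpy as np

class RBFValueFunction:
    """Linear value function over Gaussian radial basis features (illustrative sketch)."""

    def __init__(self, centers, width, lr=0.1):
        self.centers = np.asarray(centers, dtype=float)  # (n_basis, state_dim) basis centers
        self.width = width                               # shared Gaussian width
        self.weights = np.zeros(len(self.centers))       # linear weights, one per basis function
        self.lr = lr

    def features(self, state):
        # Gaussian activation of each basis function for a continuous state
        d2 = np.sum((self.centers - np.asarray(state, dtype=float)) ** 2, axis=1)
        return np.exp(-d2 / (2.0 * self.width ** 2))

    def value(self, state):
        return float(self.features(state) @ self.weights)

    def td_update(self, state, reward, next_state, gamma=0.99, terminal=False):
        # One-step temporal-difference update of the linear weights
        target = reward if terminal else reward + gamma * self.value(next_state)
        error = target - self.value(state)
        self.weights += self.lr * error * self.features(state)
        return error

# Usage: a 2-D continuous state space covered by a 5x5 grid of basis centers
grid = np.linspace(0.0, 1.0, 5)
centers = [[x, y] for x in grid for y in grid]
vf = RBFValueFunction(centers, width=0.25)
vf.td_update(state=[0.1, 0.2], reward=-1.0, next_state=[0.15, 0.25])

Because each Gaussian basis function responds to a neighborhood of states, the approximated value generalizes smoothly between nearby continuous states, which is what makes this representation attractive for high-dimensional, continuous control problems of the kind the abstract describes.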
Reference:
Machine Control Using Radial Basis Value Functions and Inverse State Projection (S. Buck, F. Stulp, M. Beetz and T. Schmitt), In Proc. of the IEEE Intl. Conf. on Automation, Robotics, Control, and Vision, 2002.
Bibtex Entry:
@inproceedings{buck_machine_2002,
 author = {S. Buck and F. Stulp and M. Beetz and T. Schmitt},
 title = {Machine Control Using Radial Basis Value Functions and Inverse State
	Projection},
 booktitle = {Proc. of the {IEEE} Intl. Conf. on Automation, Robotics, Control,
	and Vision},
 year = {2002},
 abstract = {Typical real-world machine control tasks share characteristics
	that make them difficult to solve: their state spaces are high-dimensional
	and continuous, and it may be impossible to reach a satisfactory target
	state by exploration or human control. To overcome these problems,
	in this paper, we propose (1) the use of radial basis functions for
	value function approximation in continuous-space reinforcement learning
	and (2) the use of learned inverse projection functions for state
	space exploration. We apply our approach to path planning in dynamic
	environments and to an aircraft autolanding simulation, and evaluate
	its performance.},
}