KnowRob – Knowledge Processing for Autonomous Personal Robots
by M. Tenorth and M. Beetz
Abstract:
Mobile household robots need much knowledge about objects, places and actions when performing more and more complex tasks. They must be able to recognize objects, know what they are and how they can be used. We present a practical approach to robot knowledge representation that combines description logics knowledge bases with a rich environment model, data mining and (self-) observation modules. The robot observes itself and humans while executing actions and uses the collected experiences to learn models of action-related concepts grounded in its perception and action system. We demonstrate our approach by learning places that are involved in mobile robot manipulation actions, by locating objects based on their function and by supplying knowledge required for understanding underspecified task descriptions as commonly given by humans.
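Illustration: the approach summarized above combines a description-logics knowledge base with an environment model so that, for instance, objects can be located based on their function. The short Python sketch below is not the authors' system (the KnowRob system described in the paper is built on Prolog and OWL description logics, not Python); it is a hypothetical toy model in which all concept names, the IS_A and AFFORDS tables, and the objects_for_function helper are invented purely to illustrate the kind of "find objects that afford this action" query the abstract refers to.

# Hypothetical sketch, not part of KnowRob: a minimal "is-a" taxonomy plus
# function assertions, queried to locate objects that afford a given action.

IS_A = {
    "Cup": "Container",
    "Bowl": "Container",
    "Container": "HouseholdObject",
    "Refrigerator": "StorageFurniture",
    "StorageFurniture": "HouseholdObject",
}

AFFORDS = {
    "Container": {"HoldLiquid"},
    "StorageFurniture": {"StorePerishables"},
}

def ancestors(concept):
    """Yield a concept and all of its superconcepts along the is-a relation."""
    while concept is not None:
        yield concept
        concept = IS_A.get(concept)

def objects_for_function(function, instances):
    """Return instances whose concept (or any superconcept) affords the given function."""
    return [
        name for name, concept in instances.items()
        if any(function in AFFORDS.get(c, set()) for c in ancestors(concept))
    ]

if __name__ == "__main__":
    # Hypothetical perceived instances from the robot's environment model.
    scene = {"cup1": "Cup", "bowl2": "Bowl", "fridge1": "Refrigerator"}
    print(objects_for_function("HoldLiquid", scene))         # ['cup1', 'bowl2']
    print(objects_for_function("StorePerishables", scene))   # ['fridge1']

In the paper, analogous queries are answered by description-logics reasoning over concepts learned from observation and grounded in the robot's perception and action system, rather than by hand-written tables as in this sketch.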
Reference:
KnowRob – Knowledge Processing for Autonomous Personal Robots (M. Tenorth and M. Beetz), In IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 4261–4266, 2009.
Bibtex Entry:
@inproceedings{tenorth_knowrob_2009,
 author = {M. Tenorth and M. Beetz},
 title = {{KnowRob} – Knowledge Processing for Autonomous Personal Robots},
 booktitle = {{IEEE/RSJ} International Conference on Intelligent Robots and Systems},
 year = {2009},
 pages = {4261--4266},
 abstract = {Mobile household robots need much knowledge about objects, places
	and actions when performing more and more complex tasks. They must
	be able to recognize objects, know what they are and how they can
	be used. We present a practical approach to robot knowledge representation
	that combines description logics knowledge bases with a rich environment
	model, data mining and (self-) observation modules. The robot observes
	itself and humans while executing actions and uses the collected
	experiences to learn models of action-related concepts grounded in
	its perception and action system. We demonstrate our approach by
	learning places that are involved in mobile robot manipulation actions,
	by locating objects based on their function and by supplying knowledge
	required for understanding underspecified task descriptions as commonly
	given by humans.},
}