Facial Expression Recognition for Human-robot Interaction – A Prototype (bibtex)
by M. Wimmer, B. A. MacDonald, D. Jayamuni and A. Yadav
Abstract:
To be effective in the human world, robots must respond to human emotional states. This paper focuses on the recognition of the six universal human facial expressions. In the last decade there has been successful research on facial expression recognition (FER) in controlled conditions suitable for human-computer interaction. However, the human-robot scenario presents additional challenges, including a lack of control over lighting conditions and over the relative poses and separation of the robot and human, the inherent mobility of robots, and stricter real-time computational requirements dictated by the need for robots to respond in a timely fashion. Our approach imposes lower computational requirements by specifically adapting model-based techniques to the FER scenario. It contains adaptive skin color extraction, localization of the entire face and facial components, and specifically learned objective functions for fitting a deformable face model. Experimental evaluation reports a recognition rate of 70% on the Cohn-Kanade facial expression database and 67% in a robot scenario, which compare well to other FER systems.
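The abstract outlines a pipeline of skin-color extraction, face localization, deformable-model fitting, and expression classification. Below is a minimal, hypothetical Python/OpenCV sketch of such a pipeline for illustration only: the fixed YCrCb thresholds, the Haar-cascade localization, and the stubbed fit_face_model / classify_expression functions are assumptions and do not reproduce the authors' adaptive skin-color model or learned objective functions.

# Hypothetical sketch of the FER pipeline described in the abstract:
# skin-color extraction -> face localization -> model fitting -> classification.
# The skin thresholds, the cascade-based localization, and the two stub
# functions are illustrative assumptions, not the paper's method.
import cv2
import numpy as np

def extract_skin_mask(bgr):
    # Rough skin segmentation in YCrCb space (fixed thresholds, not adaptive).
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
    lower = np.array([0, 133, 77], dtype=np.uint8)
    upper = np.array([255, 173, 127], dtype=np.uint8)
    return cv2.inRange(ycrcb, lower, upper)

def localize_face(gray):
    # Locate the largest face with OpenCV's bundled frontal-face Haar cascade.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return max(faces, key=lambda f: f[2] * f[3]) if len(faces) else None

def fit_face_model(gray, face_box):
    # Placeholder for fitting a deformable face model via a learned
    # objective function (the paper's core step); would return model parameters.
    raise NotImplementedError

def classify_expression(model_params):
    # Placeholder for mapping model parameters to one of the six
    # universal facial expressions.
    raise NotImplementedError

def recognize_expression(bgr):
    skin = extract_skin_mask(bgr)
    gray = cv2.cvtColor(cv2.bitwise_and(bgr, bgr, mask=skin), cv2.COLOR_BGR2GRAY)
    box = localize_face(gray)
    if box is None:
        return None
    return classify_expression(fit_face_model(gray, box))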
Reference:
Facial Expression Recognition for Human-robot Interaction – A Prototype (M. Wimmer, B. A. MacDonald, D. Jayamuni and A. Yadav), In 2nd Workshop Robot Vision, Lecture Notes in Computer Science (R. Klette, G. Sommer, eds.), Springer, volume 4931, pages 139–152, 2008.
Bibtex Entry:
@inproceedings{wimmer_facial_2008,
 author = {Wimmer, M. and MacDonald, B. A. and Jayamuni, D. and Yadav, A.},
 title = {Facial Expression Recognition for Human-robot Interaction – A Prototype},
 booktitle = {2nd Workshop Robot Vision},
 series = {Lecture Notes in Computer Science},
 year = {2008},
 editor = {Klette, Reinhard and Sommer, Gerald},
 volume = {4931},
 pages = {139--152},
 address = {Auckland, New Zealand},
 month = feb,
 publisher = {Springer},
 abstract = {To be effective in the human world, robots must respond to human emotional
	states. This paper focuses on the recognition of the six universal
	human facial expressions. In the last decade there has been successful
	research on facial expression recognition ({FER}) in controlled conditions
	suitable for human-computer interaction. However, the human-robot
	scenario presents additional challenges, including a lack of control
	over lighting conditions and over the relative poses and separation
	of the robot and human, the inherent mobility of robots, and stricter
	real-time computational requirements dictated by the need for robots
	to respond in a timely fashion. Our approach imposes lower computational
	requirements by specifically adapting model-based techniques to the
	{FER} scenario. It contains adaptive skin color extraction, localization
	of the entire face and facial components, and specifically learned
	objective functions for fitting a deformable face model. Experimental
	evaluation reports a recognition rate of 70\% on the Cohn-Kanade
	facial expression database and 67\% in a robot scenario, which compare
	well to other {FER} systems.},
 keywords = {facial expressions},
}