Image Understanding and Knowledge-Based Systems
TUM School of Computation, Information and Technology
Technical University of Munich

Informatik IX

Image Understanding and Knowledge-Based Systems

Boltzmannstrasse 3
85748 Garching

info@iuks.in.tum.de




Human Action Recognition using Global Point Feature Histograms and Action Shapes
by RB Rusu, J Bandouch, F Meier, I Essa and M Beetz
Abstract:
This article investigates the recognition of human actions from 3D point clouds that encode the motions of people acting in sensor-distributed indoor environments. Data streams are time-sequences of silhouettes extracted from cameras in the environment. From the 2D silhouette contours we generate space-time streams by continuously aligning and stacking the contours along the time axis as the third spatial dimension. The space-time stream of an observation sequence is segmented into parts corresponding to subactions using a pattern matching technique based on suffix trees and interval scheduling. Then, the segmented space-time shapes are processed by treating the shapes as 3D point clouds and estimating global point feature histograms for them. The resultant models are clustered using statistical analysis, and our experimental results indicate that the presented methods robustly derive different action classes. This holds despite large intra-class variance in the recorded datasets due to performances by different persons at different time intervals.
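Illustrative sketch:
The pipeline described in the abstract can be pictured with a minimal Python sketch (not the authors' implementation): 2D silhouette contours are stacked along the time axis so that time acts as the third spatial dimension, and a simple normalized histogram of pairwise point distances stands in for the global point feature histograms estimated in the paper. All function and variable names (stack_contours, global_distance_histogram) are hypothetical.

import numpy as np

def stack_contours(contours, dt=1.0):
    # Stack 2D silhouette contours along the time axis so that time acts
    # as the third spatial dimension; the result is an (N, 3) point cloud.
    points = []
    for t, contour in enumerate(contours):
        z = np.full((contour.shape[0], 1), t * dt)
        points.append(np.hstack([contour, z]))
    return np.vstack(points)

def global_distance_histogram(cloud, bins=16, sample=500, seed=0):
    # Crude global shape descriptor: normalized histogram of pairwise
    # distances over a random subsample of the cloud. This only stands in
    # for the global point feature histograms used in the paper.
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(cloud), size=min(sample, len(cloud)), replace=False)
    sub = cloud[idx]
    d = np.linalg.norm(sub[:, None, :] - sub[None, :, :], axis=-1)
    d = d[np.triu_indices_from(d, k=1)]
    hist, _ = np.histogram(d, bins=bins, range=(0.0, float(d.max())))
    return hist / hist.sum()

# Toy usage: a synthetic "action" of 30 frames with 200 contour points each.
contours = [np.random.rand(200, 2) for _ in range(30)]
cloud = stack_contours(contours)           # (6000, 3) space-time point cloud
descriptor = global_distance_histogram(cloud)
print(descriptor.shape)                    # (16,)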
Reference:
Human Action Recognition using Global Point Feature Histograms and Action Shapes (RB Rusu, J Bandouch, F Meier, I Essa and M Beetz), In Advanced Robotics, Robotics Society of Japan (RSJ), 2009.
Bibtex Entry:
@article{rusu_human_2009,
 author = {RB Rusu and J Bandouch and F Meier and I Essa and M Beetz},
 title = {Human Action Recognition using Global Point Feature Histograms and
	Action Shapes},
 journal = {Advanced Robotics, Robotics Society of Japan ({RSJ})},
 year = {2009},
 abstract = {This article investigates the recognition of human actions from {3D}
	point clouds that encode the motions of people acting in sensor-distributed
	indoor environments. Data streams are time-sequences of silhouettes
	extracted from cameras in the environment. From the {2D} silhouette
	contours we generate space-time streams by continuously aligning
	and stacking the contours along the time axis as third spatial dimension.
	The space-time stream of an observation sequence is segmented into
	parts corresponding to subactions using a pattern matching technique
	based on suffix trees and interval scheduling. Then, the segmented
	space-time shapes are processed by treating the shapes as {3D} point
	clouds and estimating global point feature histograms for them. The
	resultant models are clustered using statistical analysis, and our
	experimental results indicate that the presented methods robustly
	derive different action classes. This holds despite large intra-class
	variance in the recorded datasets due to performances from different
	persons at different time intervals.},
}