Li Junfeng, Zhang Feiyan. Human behavior recognition based on directional weighting local space-time features[J]. Journal of Image and Graphics, 2015, 20(3): 320-331. DOI: 10.11834/jig.20150303.
Human action recognition aims to detect and analyze human behavior intelligently on the basis of information captured by cameras. Applications of this technology include surveillance, video content retrieval, robotics, and human-computer interfaces. Describing human behavior is a key problem in behavior recognition. To utilize the training data fully and to obtain a highly descriptive feature descriptor of behavior, a new human activity recognition method is proposed in this study. First, the brightness gradient is decomposed into three directions to describe the behavior from different perspectives.
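The abstract does not name the three directions; as a minimal sketch, the code below assumes they are the components of the spatio-temporal brightness gradient along the x, y, and t axes of the video volume. The function name directional_gradients and the random sample clip are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def directional_gradients(video):
    """Split the brightness gradient of a grayscale video volume into three
    directional components (assumed here to be along t, y, and x; the
    abstract does not name the actual directions)."""
    # video: float array of shape (frames, height, width)
    g_t, g_y, g_x = np.gradient(video.astype(np.float64))
    return g_t, g_y, g_x

# Example with a random clip standing in for a real grayscale video
clip = np.random.rand(30, 120, 160)
g_t, g_y, g_x = directional_gradients(clip)
print(g_t.shape, g_y.shape, g_x.shape)  # each component keeps the shape (30, 120, 160)
```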
Second, standard visual vocabulary codebooks of the three directions are obtained for each behavior by directly constructing a visual vocabulary.
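The abstract does not state how the per-direction codebooks are constructed; the sketch below assumes k-means clustering of local space-time descriptors, with one codebook per direction for a given behavior class. The descriptor dimensionality, the vocabulary size, and the helper build_codebooks are hypothetical choices for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans

def build_codebooks(descriptors_by_dir, n_words=200, seed=0):
    """Cluster the local space-time descriptors of one behavior class into a
    visual-word codebook for each gradient direction (k-means is an assumed
    choice; the abstract does not name the clustering algorithm)."""
    codebooks = {}
    for direction, descs in descriptors_by_dir.items():
        km = KMeans(n_clusters=n_words, n_init=10, random_state=seed)
        km.fit(descs)  # descs: (n_samples, descriptor_dim) array for this direction
        codebooks[direction] = km.cluster_centers_
    return codebooks

# Hypothetical descriptors of one behavior class, one set per direction
descriptors = {d: np.random.rand(5000, 72) for d in ("x", "y", "t")}
codebooks = build_codebooks(descriptors, n_words=200)
```

Clustering each direction separately keeps the individual clustering problems small, which is consistent with the reduced clustering time reported later in the abstract.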
Then, the standard visual vocabulary codebooks of the three directions for each behavior serve as bases for calculating the corresponding vocabulary distributions of the test video separately. The behavior in the test video is recognized through a weighted similarity measure between the standard vocabulary distribution of each behavior and the vocabulary distribution of the test video. The performance was evaluated on the KTH and Weizmann action datasets, yielding average recognition rates of 96.04% on the Weizmann dataset and 96.93% on the KTH dataset. The proposed method generates a comprehensive and effective representation of action videos.
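Neither the similarity measure nor the direction weights are specified in the abstract; the sketch below uses histogram intersection as a stand-in similarity and treats the weights as given inputs. The helpers word_histogram and classify, and the class_models layout, are hypothetical.

```python
import numpy as np
from scipy.spatial.distance import cdist

def word_histogram(descs, codebook):
    """Quantize descriptors to their nearest codewords and return the
    normalized visual-word histogram (the vocabulary distribution)."""
    nearest = cdist(descs, codebook).argmin(axis=1)
    hist = np.bincount(nearest, minlength=len(codebook)).astype(float)
    return hist / max(hist.sum(), 1.0)

def classify(test_descs_by_dir, class_models, weights):
    """Label the test video with the behavior whose standard distributions are
    most similar to the test video's distributions, combining the three
    directions with the given weights (histogram intersection is an assumed
    similarity; the paper's measure and weights are not given in the abstract)."""
    scores = {}
    for label, model in class_models.items():  # model: {direction: (codebook, standard_histogram)}
        score = 0.0
        for direction, (codebook, std_hist) in model.items():
            h = word_histogram(test_descs_by_dir[direction], codebook)
            score += weights[direction] * np.minimum(h, std_hist).sum()
        scores[label] = score
    return max(scores, key=scores.get)
```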
Furthermore, the approach reduces clustering time because the codebooks of each direction are produced separately. Experimental results show that the proposed method significantly improves action recognition performance and outperforms existing recognition methods.