
Deep Convolutional Neural Networks for Human Action Recognition Using Depth Maps and Postures

Kamel, A, Sheng, B, Yang, P, Li, P, Shen, R and Feng, DD (2018) Deep Convolutional Neural Networks for Human Action Recognition Using Depth Maps and Postures. IEEE Transactions on Systems, Man, and Cybernetics: Systems. ISSN 2168-2216

Text: Revised-Paper.pdf - Accepted Version (2MB)

Abstract

In this paper, we present a method (Action-Fusion) for human action recognition from depth maps and posture data using convolutional neural networks (CNNs). Two input descriptors are used for action representation. The first is a depth motion image (DMI) that accumulates consecutive depth maps of a human action, whilst the second is a proposed moving joints descriptor that represents the motion of body joints over time. To maximize feature extraction for accurate action classification, three CNN channels are trained with different inputs: the first with DMIs, the second with both DMIs and moving joints descriptors, and the third with moving joints descriptors only. The action predictions generated by the three CNN channels are fused for the final action classification, and we propose several score fusion operations to maximize the score of the correct action. The experiments show that fusing the outputs of all three channels yields better results than using one channel or fusing only two. The proposed method was evaluated on three public datasets: 1) the Microsoft action 3-D dataset (MSRAction3D); 2) the University of Texas at Dallas multimodal human action dataset (UTD-MHAD); and 3) the multimodal action dataset (MAD). The testing results indicate that the proposed approach outperforms most existing state-of-the-art methods, such as histogram of oriented 4-D normals and Actionlet on MSRAction3D. Although the MAD dataset contains a large number of actions (35) compared to existing RGB-D action datasets, the proposed method surpasses a state-of-the-art method on that dataset by 6.84%.
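For a concrete picture of the pipeline described in the abstract, the minimal Python sketch below illustrates two of its steps: accumulating consecutive depth frames into a single motion image, and fusing per-channel class scores. The function names, the absolute-difference accumulation, and the product/average fusion rules are illustrative assumptions, not the authors' exact formulations (the paper proposes several fusion score operations).

import numpy as np

def depth_motion_image(depth_frames):
    # Accumulate absolute differences of consecutive depth maps into one
    # image (illustrative DMI construction; the paper's exact rule may differ).
    frames = np.asarray(depth_frames, dtype=np.float64)
    return np.abs(np.diff(frames, axis=0)).sum(axis=0)

def softmax(logits):
    # Convert raw class scores to probabilities.
    e = np.exp(logits - logits.max())
    return e / e.sum()

# Hypothetical per-channel class scores for one test clip over 20 action
# classes: channel 1 sees DMIs, channel 2 sees DMIs plus the moving joints
# descriptor, channel 3 sees the moving joints descriptor only.
rng = np.random.default_rng(0)
channel_probs = [softmax(rng.standard_normal(20)) for _ in range(3)]

# Two common score-fusion rules standing in for the paper's operations:
fused_product = np.prod(channel_probs, axis=0)  # multiplication fusion
fused_average = np.mean(channel_probs, axis=0)  # average fusion

predicted_action = int(np.argmax(fused_product))

Multiplication fusion rewards actions on which all three channels agree, which is consistent with the abstract's observation that fusing all three channels beats any single channel or any pair.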

Item Type: Article
Additional Information: © 2018 IEEE
Subjects: Q Science > QA Mathematics > QA75 Electronic computers. Computer science
Divisions: Computer Science & Mathematics
Publisher: IEEE
Date Deposited: 10 Oct 2018 08:04
Last Modified: 04 Sep 2021 10:02
DOI or ID number: 10.1109/TSMC.2018.2850149
URI: https://researchonline.ljmu.ac.uk/id/eprint/9438