FERCaps: A Capsule-Based Method for Face Expression Recognition from Frontal Face Images

QI-DI HU, QIAN SHU, MING-ZE BAI, XIAO-MING YAO, KUN-XIAN SHU

Abstract


This paper presents a novel method named FERCaps (a capsule-based method for facial expression recognition). Input images containing faces are preprocessed and segmented into grayscale images containing only the frontal face, which are used to train the model. The model consists of three convolutional neural network (CNN) layers, two capsule layers, and a decoder composed of one fully connected layer and four deconvolution layers. The ReLU activation function is used in the convolutional layers to speed up training, and the deconvolution layers form a decoder suited to reconstructing facial expressions. The primary capsules form the bottom layer of multidimensional entities; through an iterative routing process, each primary capsule selects a corresponding high-level capsule in the layer above, and the instantiation parameters of the high-level capsules characterize facial expressions. We verified the effectiveness of the model on the public benchmark datasets JAFFE and extended Cohn-Kanade (CK+), achieving an accuracy of 98.18% on CK+ and 88.33% on JAFFE.
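The iterative routing process mentioned in the abstract can be sketched as follows. This is a minimal NumPy implementation of dynamic routing-by-agreement with the standard squash nonlinearity, not the authors' code; the capsule counts (32 primary capsules, 7 expression classes, 16-dimensional output capsules) are illustrative assumptions, not figures from the paper.

```python
import numpy as np

def squash(s, axis=-1, eps=1e-8):
    # Squash nonlinearity: shrinks each vector's length into [0, 1)
    # while preserving its direction.
    sq_norm = np.sum(s ** 2, axis=axis, keepdims=True)
    return (sq_norm / (1.0 + sq_norm)) * s / np.sqrt(sq_norm + eps)

def dynamic_routing(u_hat, num_iters=3):
    # u_hat: (num_lower, num_upper, dim) prediction vectors from each
    # primary capsule for each high-level (expression) capsule.
    num_lower, num_upper, _ = u_hat.shape
    b = np.zeros((num_lower, num_upper))           # routing logits
    for _ in range(num_iters):
        # Coupling coefficients: softmax over the high-level capsules,
        # so each primary capsule distributes its output among them.
        c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)
        s = np.einsum('ij,ijd->jd', c, u_hat)      # weighted sum per upper capsule
        v = squash(s)                              # (num_upper, dim)
        # Agreement update: predictions aligned with v get larger logits,
        # so each primary capsule "selects" its high-level capsule.
        b = b + np.einsum('ijd,jd->ij', u_hat, v)
    return v

# Illustrative shapes: 32 primary capsules routed to 7 expression
# capsules with 16-dimensional instantiation parameters.
rng = np.random.default_rng(0)
v = dynamic_routing(rng.standard_normal((32, 7, 16)))
```

The lengths of the output vectors `v` stay below 1 by construction, which is what lets a capsule's length be read as the probability that the corresponding expression is present.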

Keywords


Facial expression recognition, convolutional neural networks, capsule, deconvolution.


DOI
10.12783/dteees/peems2019/34027
