Dynamic Emotional Faces Generalise Better to a New Expression but not to a New View
Liu, Chang Hong (1); Chen, Wenfeng (2); Ward, James (3); Takahashi, Nozomi (4)
2016-08-08
Source Publication: SCIENTIFIC REPORTS
Correspondent Email: liuc@bournemouth.ac.uk ; chenwf@psych.ac.cn
ISSN: 2045-2322
Subtype: Article
Volume: 6  Issue: 0  Pages: 1-8
Abstract: Prior research based on static images has found limited improvement for recognising previously learnt faces in a new expression after several different facial expressions of these faces had been shown during the learning session. We investigated whether non-rigid motion of facial expression facilitates the learning process. In Experiment 1, participants remembered faces that were either presented in short video clips or still images. To assess the effect of exposure to expression variation, each face was either learnt through a single expression or three different expressions. Experiment 2 examined whether learning faces in video clips could generalise more effectively to a new view. The results show that faces learnt from video clips generalised effectively to a new expression with exposure to a single expression, whereas faces learnt from stills showed poorer generalisation with exposure to either single or three expressions. However, although superior recognition performance was demonstrated for faces learnt through video clips, dynamic facial expression did not create better transfer of learning to faces tested in a new view. The data thus fail to support the hypothesis that non-rigid motion enhances viewpoint invariance. These findings reveal both benefits and limitations of exposures to moving expressions for expression-invariant face recognition.
DOI: 10.1038/srep31001
Indexed By: SCI ; SSCI
Language: English
Funding Organization: British Academy ; National Natural Science Foundation of China (31371031)
WOS Research Area: Science & Technology - Other Topics
WOS Subject: Multidisciplinary Sciences
WOS ID: WOS:000381009600002
WOS Headings: Science & Technology
WOS Keyword: RECOGNIZING MOVING FACES ; RECOGNITION ; MOTION ; IDENTITY ; INFORMATION
Document Type: Journal Article
Identifier: http://ir.psych.ac.cn/handle/311026/20517
Collection: State Key Laboratory of Brain and Cognitive Science
Corresponding Author: Liu, Chang Hong ; Chen, Wenfeng
Affiliation:
1. Bournemouth Univ, Fac Sci & Technol, Dept Psychol, Talbot Campus Fern Barrow Poole, Poole BH12 5BB, Dorset, England
2. Chinese Acad Sci, Inst Psychol, State Key Lab Brain & Cognit Sci, 16 Lincui Rd, Beijing 100101, Peoples R China
3. Univ Hull, Dept Comp Sci, Cottingham Rd, Kingston Upon Hull HU6 7RX, N Humberside, England
4. Nihon Univ, Grad Sch Literature & Social Sci, Dept Psychol, 3-25-40 Setagaya Ku, Tokyo 1568550, Japan
Recommended Citation
GB/T 7714: Liu, Chang Hong, Chen, Wenfeng, Ward, James, et al. Dynamic Emotional Faces Generalise Better to a New Expression but not to a New View[J]. SCIENTIFIC REPORTS, 2016, 6(0): 1-8.
APA: Liu, Chang Hong, Chen, Wenfeng, Ward, James, & Takahashi, Nozomi. (2016). Dynamic Emotional Faces Generalise Better to a New Expression but not to a New View. SCIENTIFIC REPORTS, 6(0), 1-8.
MLA: Liu, Chang Hong, et al. "Dynamic Emotional Faces Generalise Better to a New Expression but not to a New View". SCIENTIFIC REPORTS 6.0 (2016): 1-8.
Files in This Item:
File Name/Size: Dynamic Emotional Faces Generalise Better to a New Expression but not to a New View.pdf (521KB)
Format: Adobe PDF
DocType: Journal Article
Version: Author's Accepted Manuscript
Access: Restricted Access
License: CC BY-NC-SA
 

Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.