• Facial expression recognition -- the JAFFE dataset (2018-04-21)
  • Facial expression recognition datasets (2018-05-16)
  • Micro-expression datasets
  • These datasets require signing a license agreement and emailing the authors to obtain a download, so for copyright reasons only the download addresses and methods are listed here, not the data itself:


    SMIC: http://www.cse.oulu.fi/SMICDatabase


    SAMM: ...


    CASME, CASME2, CAS(ME)^2: http://fu.psych.ac.cn/CASME/casme.php (the page lists all three datasets in the column on the right)


    MEGC2019: https://facial-micro-expressiongc.github.io/MEGC2019/


  • Expression recognition datasets (2018-12-27)
  • Kaggle facial expression recognition: the FER2013 dataset
  • A survey of expression recognition datasets

    2015-10-28 15:53:24
    1. CK and CK+
      It contains 97 subjects, who posed in a lab situation for the six universal expressions and the neutral expression. Its extension CK+ contains 123 subjects, with the new videos shot in a similar environment.
      Reference: P. Lucey, J. F. Cohn, T. Kanade, J. Saragih, Z. Ambadar, and I. Matthews, “The Extended Cohn-Kanade Dataset (CK+): A complete dataset for action unit and emotion-specified expression,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition Workshops CVPR4HB’10, 2010, pp. 94–101.
      Website: http://www.pitt.edu/~emotion/ck-spread.htm
      Modalities: Visual
      Note: CK contains only static images, while CK+ also includes videos. Expression labels are discrete categories.

    2. JAFFE
      It contains 213 images of 10 Japanese females. However, it has a limited number of samples and subjects, and was created in a lab-controlled environment.
      Website: http://www.kasrl.org/jaffe.html
      Modalities: visual
      Note: only 213 expression images in total; expression labels are discrete categories.

    3. HUMAINE Database
      Data files containing emotion labels, gesture labels, speech labels and FAPs, all readable in ANVIL (the label files must be opened with the ANVIL annotation tool)
      Modalities: Audio+visual + gesture
      Website: http://emotion-research.net/download/pilot-db/
      Note: the downloaded archive contains only the videos, without the labels or other annotation files.

    4. Recola database
      34 subjects in total: 14 male, 20 female
      Reference: F. Ringeval, A. Sonderegger, J. Sauer, and D. Lalanne, "Introducing the RECOLA multimodal corpus of collaborative and affective interactions," in 10th IEEE Int'l Conf. and Workshops on Automatic Face and Gesture Recognition. Shanghai, CN: IEEE Press, 2013: 1-8.
      Website: http://diuf.unifr.ch/diva/recola/index.html
      Modalities: Audio + visual + EDA, ECG (physiological modalities)
      Note: 34 videos in total; labels are continuous Arousal-Valence values, stored in CSV files.

    5. MMI
      The database consists of over 2900 videos and high-resolution still images of 75 subjects. It is fully annotated for the presence of AUs in videos (event coding), and partially coded on frame-level, indicating for each frame whether an AU is in either the neutral, onset, apex or offset phase. A small part was annotated for audio-visual laughters. The database is freely available to the scientific community.
      a) Induced Disgust, Happiness and Surprise: an Addition to the MMI Facial Expression Database
      M. F. Valstar, M. Pantic. Proceedings of Int’l Conf. Language Resources and Evaluation, Workshop on EMOTION. Malta, pp. 65 - 70, May 2010.
      b) Web-based database for facial expression analysis,M. Pantic, M. F. Valstar, R. Rademaker, L. Maat. Proceedings of IEEE Int’l Conf. Multimedia and Expo (ICME’05). Amsterdam, The Netherlands, pp. 317 - 321, July 2005.
      Modalities: visual (video)
      Website: http://mmifacedb.eu/
      Note: a large dataset, 2900 videos in all; labels are mainly AU annotations, stored in XML files.

    6. NVIE (a dataset collected by USTC)
      The USTC NVIE dataset contains both a spontaneous-expression library and a posed-expression library; the experiments here use the spontaneous part. The spontaneous expressions were elicited by selected video clips and captured under three illumination conditions (frontal, left and right lighting): 103 subjects under frontal lighting, 99 under left lighting and 103 under right lighting. Under each illumination, every subject shows at least three of the six expressions (happiness, anger, sadness, fear, disgust, surprise), and for each expression the neutral frame and the apex frame have already been selected.
      Reference: WANG Shangfei, LIU Zhilei, LV Siliang, LV Yanpeng, et al. A Natural Visible and Infrared Facial Expression Database for Expression Recognition and Emotion Inference[J]. IEEE Transactions on Multimedia, 2010, 12(7): 682-691.
      Website: http://nvie.ustc.edu.cn/
      Modalities: visual (images)
      Note: labels are provided as Excel files and include expression-intensity values (e.g. the intensity of disgust) as well as Arousal-Valence labels.

    7. RU-FACS database
      This database consists of spontaneous facial expressions from multiple views, with ground truth FACS codes provided by two facial expression experts.
      We have collected data from 100 subjects, 2.5 minutes each. This database constitutes a significant contribution towards the 400-800 minute database recommended in the feasibility study for fully automating FACS. To date we have human FACS coded the upper faces of 20% of the subjects.
      Reference: M. S. Bartlett, G. Littlewort, M. G. Frank, C. Lainscsek, I. R. Fasel, and J. R. Movellan, "Automatic recognition of facial actions in spontaneous expressions," Journal of Multimedia, vol. 1, no. 6, pp. 22-35, 2006.
      Website: http://mplab.ucsd.edu/grants/project1/research/rufacs1-dataset.html
      Note: labels are FACS codes (only some of the videos are annotated); the dataset has not yet been released to researchers.

    8. Belfast naturalistic database
      The Belfast database consists of a combination of studio recordings and TV programme grabs labelled with particular expressions. The number of TV clips in this database is sparse.
      Modalities: Audio-visual (video)
      Reference: E. Douglas-Cowie, R. Cowie, and M. Schröder, "A New Emotion Database: Considerations, Sources and Scope," in ISCA ITRW on Speech and Emotion, 2000, pp. 39-44.
      Website: http://sspnet.eu/2010/02/belfast-naturalistic/
      Note: the dataset consists of videos and also covers emotion recognition from speech.

    9. GEMEP Corpus
      The GEneva Multimodal Emotion Portrayals (GEMEP) is a collection of audio and video recordings featuring 10 actors portraying 18 affective states, with different verbal contents and different modes of expression.
      Modalities: Audio-visual
      Reference: T. Bänziger and K. Scherer, "Introducing the Geneva Multimodal Emotion Portrayal (GEMEP) Corpus," in Blueprint for affective computing: A sourcebook, K. Scherer, T. Bänziger, and E. Roesch, Eds. Oxford, England: Oxford University Press, 2010
      Website: http://www.affective-sciences.org/gemep
      Note: the FERA 2011 challenge used this dataset; labels are mainly categorical.

    10. Paleari
      Reference: M. Paleari, R. Chellali, and B. Huet, “Bimodal emotion recognition,” in Proceeding of the Second International Conference on Social Robotics ICSR’10, 2010, pp. 305–314.

    11. VAM corpus
      The VAM corpus consists of 12 hours of recordings of the German TV talk-show "Vera am Mittag" (Vera at Noon). They are segmented into broadcasts, dialogue acts and utterances, respectively. This audio-visual speech corpus contains spontaneous and very emotional speech recorded from unscripted, authentic discussions between the guests of the talk-show.
      Modalities: Audio-visual
      Reference: M. Grimm, K. Kroschel, and S. Narayanan, “The Vera am Mittag German audio-visual emotional speech database,” in IEEE International Confernce on Multimedia and Expo ICME’08, 2008, pp. 865–868
      Website: http://emotion-research.net/download/vam
      Note: the dataset consists mainly of speech videos; labels are continuous values along three dimensions: valence (negative vs. positive), activation (calm vs. excited) and dominance (weak vs. strong).

    12. SSPNet Conflict Corpus (strictly speaking, not an expression recognition dataset)
      The "SSPNet Conflict Corpus" includes 1430 clips (30 seconds each) extracted from 45 political debates televised in Switzerland. The clips are in French.
      Modalities: Audio-visual
      Reference: S.Kim, M.Filippone, F.Valente and A.Vinciarelli “Predicting the Conflict Level in Television Political Debates: an Approach Based on Crowdsourcing, Nonverbal Communication and Gaussian Processes“ Proceedings of ACM International Conference on Multimedia, pp. 793-796, 2012.
      Website: http://www.dcs.gla.ac.uk/vincia/?p=270
      Note: videos of political debates; the label is a conflict level.

    13. Semaine database
      The database contains approximately 240 character conversations, and recording is still ongoing. Currently approximately 80 conversations have been fully annotated for a number of dimensions in a fully continuous way using FeelTrace.
      Website: http://semaine-db.eu/
      Modalities: Audio-visual
      Reference: The SEMAINE database: Annotated multimodal records of emotionally coloured conversations between a person and a limited agent G. Mckeown, M. F. Valstar, R. Cowie, M. Pantic, M. Schroeder. IEEE Transactions on Affective Computing. 3: pp. 5 - 17, Issue 1. April 2012.
      Note: videos elicited through human-computer conversations; labels are continuous emotion-dimension values rather than categories.

    14. AFEW database(Acted Facial Expressions In The Wild)
      Acted Facial Expressions In The Wild (AFEW) is a dynamic temporal facial expressions data corpus consisting of close to real world environment extracted from movies.
      Reference: Abhinav Dhall, Roland Goecke, Simon Lucey, Tom Gedeon, Collecting Large, Richly Annotated Facial-Expression Databases from Movies, IEEE Multimedia 2012.
      Website: https://cs.anu.edu.au/few/AFEW.html
      Modalities: Audio-visual (movie clips)
      Note: the dataset consists of expression-bearing video clips cut from movies; expression labels are the six basic expressions plus neutral, and the annotation information is stored in XML files.
      AFEW is the dataset used in the Emotion Recognition In The Wild Challenge (EmotiW) series, held annually since 2013.

    15. SFEW database(Static Facial Expressions in the Wild)
      Static Facial Expressions in the Wild (SFEW) has been developed by selecting frames from AFEW.
      Reference: Abhinav Dhall, Roland Goecke, Simon Lucey, and Tom Gedeon. Static Facial Expressions in Tough Conditions: Data, Evaluation Protocol And Benchmark, First IEEE International Workshop on Benchmarking Facial Image Analysis Technologies BeFIT, IEEE International Conference on Computer Vision ICCV2011, Barcelona, Spain, 6-13 November 2011
      Website: https://cs.anu.edu.au/few/AFEW.html
      Modalities: Visual
      Note: static expressive frames extracted from the AFEW dataset; expression labels are the six basic expressions plus neutral, and the annotation information is stored in XML files.

    16. The AVEC dataset series
      AVEC is an expression recognition challenge held annually since 2011; its expression models mainly use continuous (dimensional) emotion representations. AVEC 2012 used the dimensions Arousal, Valence, Expectancy and Power; AVEC 2013 used Valence and Arousal; AVEC 2014 used Valence, Arousal and Dominance.
      AVEC 2013 and AVEC 2014 also introduced depression recognition.
      Modalities: Audio-visual
      Reference: Michel Valstar , Björn W. Schuller , Jarek Krajewski , Roddy Cowie , Maja Pantic, AVEC 2014: the 4th international audio/visual emotion challenge and workshop, Proceedings of the ACM International Conference on Multimedia, November 03-07, 2014, Orlando, Florida, USA

    17. The LIRIS-ACCEDE dataset
      Discrete LIRIS-ACCEDE - Induced valence and arousal rankings for 9800 short video excerpts extracted from 160 movies. Estimated affective scores are also available.
      Continuous LIRIS-ACCEDE - Continuous induced valence and arousal self-assessments for 30 movies. Post-processed GSR measurements are also available.
      MediaEval 2015 affective impact of movies task - Violence annotations and affective classes for the 9800 excerpts of the discrete LIRIS-ACCEDE part, plus for additional 1100 excerpts used to extend the test set for the MediaEval 2015 affective impact of movies task.
      Modalities: Audio-visual
      Y. Baveye, E. Dellandrea, C. Chamaret, and L. Chen, “LIRIS-ACCEDE: A Video Database for Affective Content Analysis,” in IEEE Transactions on Affective Computing, 2015.
      Y. Baveye, E. Dellandrea, C. Chamaret, and L. Chen, “Deep Learning vs. Kernel Methods: Performance for Emotion Prediction in Videos,” in 2015 Humaine Association Conference on Affective Computing and Intelligent Interaction (ACII), 2015
      M. Sjöberg, Y. Baveye, H. Wang, V. L. Quang, B. Ionescu, E. Dellandréa, M. Schedl, C.-H. Demarty, and L. Chen, “The mediaeval 2015 affective impact of movies task,” in MediaEval 2015 Workshop, 2015
      Note: the dataset provides both discrete (categorical) and dimensional emotion annotations.



  • Expression recognition: the fer2013 dataset

    2017-12-29 15:29:38
    The fer2013 expression dataset; the images are extracted from a single file into individual image files. It covers the following expressions: 0 anger, 1 disgust, 2 fear, 3 happy, 4 sad, 5 surprised, 6 normal (neutral).
  • As the title says: visit the address in the document to download the CASME2 micro-expression dataset
  • A dataset of 8 dynamic colour facial expressions, with 123 subjects and 593 image sequences, each... This is one of the more popular datasets in facial expression recognition, used for testing in many papers. The resource includes the compressed dataset, a dataset description, and the published paper.
  • Facial expression recognition datasets: JAFFE. The JAFFE dataset has 213 images in total: 10 Japanese female students were selected, each posing 7 expressions: Angry, Disgust, Fear, Happy, Sad, Surprise, Neutral, ...
  • The ExpW expression dataset: 1. introduction; 2. processing; 3. download

    1. Introduction

    The survey paper "Deep Facial Expression Recognition: A Survey" describes it as follows:

    ExpW [47]: The Expression in-the-Wild Database (ExpW) contains 91,793 faces downloaded using Google image search. Each of the face images was manually annotated as one of the seven basic expression categories. Non-face images were removed in the annotation process.


    Class distribution (91,793 faces in total):
    angry   disgust   fear   happy   sad     surprise   neutral
    3671    3995      1088   30537   10559   7060       34883







    label.lst: each line indicates an image as follows:
    image_name face_id_in_image face_box_top face_box_left face_box_right face_box_bottom face_box_confidence expression_label
    for expression label:
    "0" "angry"
    "1" "disgust"
    "2" "fear"
    "3" "happy"
    "4" "sad"
    "5" "surprise"
    "6" "neutral"
     @article{zhang2016expw,
       author  = {Zhanpeng Zhang and Ping Luo and Chen Change Loy and Xiaoou Tang},
       title   = {From Facial Expression Recognition to Interpersonal Relation Prediction},
       journal = {arXiv:1609.06426v2},
       month   = {September},
       year    = {2016}
     }
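For illustration, a line of label.lst in the format above can be split into named fields with a short sketch (a minimal sketch: the sample line and file name are invented, not copied from the real label.lst):

```python
# Minimal sketch of parsing one line of ExpW's label.lst into named fields.
# Field order follows the listing above; the sample line is made up.

EXPRESSIONS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]

def parse_label_line(line):
    (image_name, face_id, top, left, right, bottom,
     confidence, label) = line.split()
    return {
        "image_name": image_name,
        "face_id": int(face_id),
        # (top, left, right, bottom), as listed above
        "box": (int(top), int(left), int(right), int(bottom)),
        "confidence": float(confidence),
        "expression": EXPRESSIONS[int(label)],
    }

rec = parse_label_line("example_0001.jpg 0 28 34 124 118 64.3 2")
print(rec["expression"], rec["box"])  # fear (28, 34, 124, 118)
```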



    • 1. Tilted faces
      [figures: original image | face cropped per the label | face after my alignment]
    • 2. Irrelevant data
      There is always some data that has nothing to do with faces, even though "Non-face images were removed in the annotation process."
      [figures: original image | face cropped per the label]


      [figures: original image | face cropped per the label]


    Class distribution after cleaning (87,305 faces):
    angry   disgust   fear   happy   sad     surprise   neutral
    3585    3861      1053   29243   10039   6882       32642


    • 1. Crop the face from the image using the label fields
      img_face = image[face_box_top:face_box_bottom, face_box_left:face_box_right, :]
    • 2. Run facial landmark detection on the face crop
    • 3. Align the face using the landmarks
    • 4. Save the aligned faces into per-class directories
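The four steps above can be sketched as follows (a minimal sketch: the landmark-detection and alignment step is only stubbed out, since it needs an external detector such as dlib's 68-point predictor; clamping the box coordinates is my own guard against boxes that fall outside the image, as in the problem cases shown above):

```python
import os
import numpy as np

def crop_face(image, box):
    """Step 1: cut out the face region given (top, left, right, bottom)."""
    top, left, right, bottom = box
    h, w = image.shape[:2]
    # Clamp to the image bounds -- some boxes extend past the frame.
    top, bottom = max(0, top), min(h, bottom)
    left, right = max(0, left), min(w, right)
    return image[top:bottom, left:right, :]

def align_face(face):
    """Steps 2-3 (stub): detect landmarks and rotate the face upright.
    A real implementation would plug in e.g. dlib here."""
    return face

def save_by_class(face, expression, out_dir):
    """Step 4: write the face into a per-class directory (layout is illustrative)."""
    class_dir = os.path.join(out_dir, expression)
    os.makedirs(class_dir, exist_ok=True)
    # np.save keeps this sketch dependency-free; cv2.imwrite would be typical.
    np.save(os.path.join(class_dir, "face_0000.npy"), face)

image = np.zeros((200, 200, 3), dtype=np.uint8)  # stand-in for a loaded photo
face = align_face(crop_face(image, (28, 34, 124, 118)))
print(face.shape)  # (90, 90, 3)
```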








  • CK+ is one of the most widely used datasets in expression recognition. It covers 8 basic expressions (if neutral is counted). The database includes 123 subjects and 593 image sequences; the last frame of each image sequence carries an action-unit label, and among these 593 image ...
  • A seven-class expression dataset: ['angry', 'disgust', 'fear', 'happy', 'sad', 'surprise', 'neutral']
  • Difficulties of expression recognition datasets

    2019-05-22 22:23:32

    As the FER literature has shifted its focus to challenging in-the-wild conditions, many researchers have turned to deep learning to cope with difficulties such as illumination variation, occlusion, non-frontal head poses, identity bias, and the recognition of low-intensity expressions. Since FER is a data-driven task, and training a sufficiently deep network to capture the subtle deformations associated with expressions requires a large amount of data, the main challenge facing deep FER systems is the lack of training data, in terms of both quantity and quality.

    Because people of different ages, cultures and genders display and interpret facial expressions differently, an ideal facial expression dataset would include abundant sample images with precise labels for facial attributes beyond expression, such as age, gender and ethnicity. This would support research on cross-age, cross-gender and cross-cultural FER using deep learning techniques such as multi-task deep networks and transfer learning. In addition, although occlusion and multi-pose problems have received relatively broad attention in deep face recognition, occlusion-robustness and pose-invariance have received less attention in deep FER. One major reason is the lack of large-scale facial expression datasets annotated with occlusion types and head poses.

    On the other hand, accurately annotating large volumes of image data, with all the variation and complexity of natural scenes, is an obvious obstacle to building expression datasets. A reasonable approach is crowdsourcing under the guidance of expert annotators [44], [46], [249]. Alternatively, fully automated labelling tools [43] refined by experts can provide approximate but efficient annotations. In both cases, a subsequent reliable estimation or label-learning process is needed to filter out noisy annotations. In particular, relatively large-scale datasets that consider real-world scenarios and contain a wide range of facial expressions have only recently become publicly available, namely EmotioNet [43], RAF-DB [44], [45] and AffectNet [46]. We can expect that, as technology advances and the Internet spreads, more complementary facial expression datasets will be built to advance the development of deep FER.

  • Hello everyone and welcome to our column on facial expression recognition. This is the first article of the column; today we discuss the basic concepts of expression recognition and its datasets. ...
  • A dataset usable in TensorFlow for training an expression recognition model based on VGGNet
  • A summary of expression recognition datasets

    2019-02-25 16:55:50
    Reference: Deep Facial Expression Recognition: A Survey. URL: https://arxiv.org/pdf/1804.08348.pdf CK+: http://www.pitt.edu/~emotion/ck-spread.htm MMI: https://mmifacedb.eu/ JAFFE: ...
  • This is the dataset from a 2013 Kaggle competition; the competition page is https://www.kaggle.com/c/challenges-in-representation-learning-facial-expression-recognition-challenge/data. The data comprises three files: fer2013.bib, fer2013.csv...
  • Expression recognition datasets (Jaffe, CK+, Fer2013)
  • FER: a facial expression recognition model based on the FER2013 Kaggle dataset. The current model reaches roughly 67% accuracy. More training data is being added to improve generalisation; some adjustments to the model architecture may also improve accuracy.
  • Facial expression recognition (1): downloading the fer2013 dataset and processing the data (attachment resource)
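Since several of the entries above revolve around fer2013.csv, here is a sketch of unpacking it (a minimal sketch, assuming the usual layout of one row per image: an `emotion` label, 2304 space-separated pixel values forming a 48x48 grayscale image, and a `Usage` split column; the sample row is synthetic):

```python
import csv
import io
import numpy as np

def load_fer2013(fileobj):
    """Yield (label, 48x48 uint8 image, usage split) for each CSV row."""
    for row in csv.DictReader(fileobj):
        img = np.array(row["pixels"].split(), dtype=np.uint8).reshape(48, 48)
        yield int(row["emotion"]), img, row["Usage"]

# Synthetic stand-in for fer2013.csv: a header plus one mid-gray image.
sample = "emotion,pixels,Usage\n3," + " ".join(["128"] * 48 * 48) + ",Training\n"

label, img, usage = next(load_fer2013(io.StringIO(sample)))
print(label, img.shape, usage)  # 3 (48, 48) Training
```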


