  • OpenCV-Based Human Pose Estimation (Human Keypoint Detection) with OpenPose

    2019-08-04 11:53:03
    OpenCV-based human pose estimation (human keypoint detection) with OpenPose. OpenPose is an open-source human pose recognition library developed at Carnegie Mellon University (CMU), built on convolutional neural networks and supervised learning, with Caffe as its framework. It can estimate body motion, facial expressions, finger movement and other poses...

    OpenCV-Based Human Pose Estimation (Human Keypoint Detection) with OpenPose


     

    OpenPose is an open-source human pose recognition library developed at Carnegie Mellon University (CMU), built on convolutional neural networks and supervised learning, with Caffe as its framework. It can estimate body motion, facial expressions, finger movement and other poses, works for single or multiple people, and is highly robust. It was the world's first real-time, deep-learning-based multi-person 2D pose estimation application, and applications built on it have sprung up rapidly.

    Its theoretical basis is Realtime Multi-Person 2D Pose Estimation using Part Affinity Fields, a CVPR 2017 paper by Zhe Cao (http://people.eecs.berkeley.edu/~zhecao/#top), Tomas Simon, Shih-En Wei and Yaser Sheikh of the CMU Perceptual Computing Lab.

    Human pose estimation has broad application prospects in sports and fitness, motion capture, 3D virtual fitting, public-opinion monitoring and other areas; the application most people know is Douyin's (TikTok's) dance-battle feature.

    OpenPose's results are not all that good. I strongly recommend "2D Pose real-time human keypoint detection (Python/Android/C++ Demo)" (2D Pose人体关键点实时检测(Python/Android /C++ Demo)_pan_jinquan的博客-CSDN博客), which provides C++ inference code and an Android demo.

    Human keypoint detection also depends on person detection; see my other post: 2D Pose人体关键点实时检测(Python/Android /C++ Demo)_pan_jinquan的博客-CSDN博客

    OpenPose project on GitHub: https://github.com/CMU-Perceptual-Computing-Lab/openpose

    OpenCV demo link: https://github.com/PanJinquan/opencv-learning-tutorials/blob/master/opencv_dnn_pro/openpose-opencv/openpose_for_image_test.py


    1. Implementation Principle

    An input image is first passed through a convolutional network to extract a set of feature maps; the network then splits into two branches, which use CNNs to predict the Part Confidence Maps and the Part Affinity Fields respectively;


    With these two outputs, bipartite matching from graph theory is used to solve the part association problem, connecting the keypoints that belong to the same person; because PAFs are vector fields, the resulting matches are highly reliable, and the matched parts are finally merged into a whole-body skeleton for each person;
    Finally, multi-person parsing is computed on top of the PAFs: the multi-person parsing problem is converted into a graph problem and solved with the Hungarian algorithm.
    (The Hungarian algorithm is the most common algorithm for bipartite-graph matching; its core is the search for augmenting paths, and it uses augmenting paths to find a maximum matching of a bipartite graph.)
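
    To make the matching step concrete, here is a minimal sketch (not the author's code) that uses scipy's Hungarian-algorithm implementation to associate candidate keypoints of two part types; the score matrix is a hypothetical stand-in for PAF-based association scores:

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    # score[i][j]: hypothetical association score between the i-th "neck"
    # candidate and the j-th "right shoulder" candidate (higher is better).
    score = np.array([[0.9, 0.1],
                      [0.2, 0.8]])

    # linear_sum_assignment minimizes total cost, so negate the scores to
    # obtain the assignment that maximizes the total association score.
    rows, cols = linear_sum_assignment(-score)
    for i, j in zip(rows, cols):
        print("neck %d -> shoulder %d (score %.2f)" % (i, j, score[i, j]))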


    2. Network Architecture

    Stage 1: the first 10 layers of VGGNet are used to create feature maps for the input image.

    Stage 2: a two-branch, multi-stage CNN is used. The first branch predicts a set of 2D confidence maps (S) of body-part locations (e.g. elbows, knees). The figure below shows the confidence map and the affinity map for one keypoint, the left shoulder.

    The second branch predicts a set of 2D vector fields (L) of part affinities, which encode the degree of association between body parts. The figure below shows the part affinity between the neck and the left shoulder.

    Stage 3: the confidence maps and affinity fields are parsed by greedy inference to produce the 2D keypoints of all people in the image.
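
    A minimal sketch, assuming the TensorFlow MobileNet OpenPose model (graph_opt.pb) used in the code below: its output blob has shape [1, 57, H, W], where the first 19 channels are taken as the part confidence maps (S) and, under a common convention for this model, the remaining 38 channels hold the part affinity fields (L), two (x, y) channels per limb type:

    import cv2 as cv

    net = cv.dnn.readNetFromTensorflow("graph_opt.pb")
    frame = cv.imread("test.jpg")
    net.setInput(cv.dnn.blobFromImage(frame, 1.0, (368, 368),
                                      (127.5, 127.5, 127.5), swapRB=True, crop=False))
    out = net.forward()              # shape: [1, 57, H, W]
    heatmaps = out[0, :19, :, :]     # branch 1: part confidence maps S
    pafs = out[0, 19:, :, :]         # branch 2 (assumed layout): PAFs L

    # The peak of each heatmap is a candidate location for that part.
    _, conf, _, peak = cv.minMaxLoc(heatmaps[0])  # channel 0 = nose
    print("nose peak:", peak, "confidence:", conf)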


    3. OpenCV OpenPose Inference Code

    # -*-coding: utf-8 -*-
    """
        @Project: python-learning-notes
        @File   : openpose_for_image_test.py
        @Author : panjq
        @E-mail : pan_jinquan@163.com
        @Date   : 2019-07-29 21:50:17
    """
    
    import cv2 as cv
    import os
    import glob
    
    BODY_PARTS = {"Nose": 0, "Neck": 1, "RShoulder": 2, "RElbow": 3, "RWrist": 4,
                  "LShoulder": 5, "LElbow": 6, "LWrist": 7, "RHip": 8, "RKnee": 9,
                  "RAnkle": 10, "LHip": 11, "LKnee": 12, "LAnkle": 13, "REye": 14,
                  "LEye": 15, "REar": 16, "LEar": 17, "Background": 18}
    
    POSE_PAIRS = [["Neck", "RShoulder"], ["Neck", "LShoulder"], ["RShoulder", "RElbow"],
                  ["RElbow", "RWrist"], ["LShoulder", "LElbow"], ["LElbow", "LWrist"],
                  ["Neck", "RHip"], ["RHip", "RKnee"], ["RKnee", "RAnkle"], ["Neck", "LHip"],
                  ["LHip", "LKnee"], ["LKnee", "LAnkle"], ["Neck", "Nose"], ["Nose", "REye"],
                  ["REye", "REar"], ["Nose", "LEye"], ["LEye", "LEar"]]
    
    
    def detect_key_point(model_path, image_path, out_dir, inWidth=368, inHeight=368, threshold=0.2):
        net = cv.dnn.readNetFromTensorflow(model_path)
        frame = cv.imread(image_path)
        frameWidth = frame.shape[1]
        frameHeight = frame.shape[0]
        scalefactor = 2.0
        net.setInput(
            cv.dnn.blobFromImage(frame, scalefactor, (inWidth, inHeight), (127.5, 127.5, 127.5), swapRB=True, crop=False))
        out = net.forward()
        out = out[:, :19, :, :]  # MobileNet output [1, 57, -1, -1], we only need the first 19 elements
        assert (len(BODY_PARTS) == out.shape[1])
        points = []
        for i in range(len(BODY_PARTS)):
            # Slice heatmap of the corresponding body part.
            heatMap = out[0, i, :, :]
            # Originally, we try to find all the local maximums. To simplify a sample
            # we just find a global one. However only a single pose at the same time
            # could be detected this way.
            _, conf, _, point = cv.minMaxLoc(heatMap)
            x = (frameWidth * point[0]) / out.shape[3]
            y = (frameHeight * point[1]) / out.shape[2]
            # Add the point if its confidence is higher than the threshold.
            points.append((int(x), int(y)) if conf > threshold else None)
        for pair in POSE_PAIRS:
            partFrom = pair[0]
            partTo = pair[1]
            assert (partFrom in BODY_PARTS)
            assert (partTo in BODY_PARTS)
    
            idFrom = BODY_PARTS[partFrom]
            idTo = BODY_PARTS[partTo]
    
            if points[idFrom] and points[idTo]:
                cv.line(frame, points[idFrom], points[idTo], (0, 255, 0), 3)
                cv.ellipse(frame, points[idFrom], (3, 3), 0, 0, 360, (0, 0, 255), cv.FILLED)
                cv.ellipse(frame, points[idTo], (3, 3), 0, 0, 360, (0, 0, 255), cv.FILLED)
    
        t, _ = net.getPerfProfile()
        freq = cv.getTickFrequency() / 1000
        cv.putText(frame, '%.2fms' % (t / freq), (10, 20), cv.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 0))
    
        os.makedirs(out_dir, exist_ok=True)  # make sure the output directory exists
        cv.imwrite(os.path.join(out_dir, os.path.basename(image_path)), frame)
        cv.imshow('OpenPose using OpenCV', frame)
        cv.waitKey(0)
    
    
    def detect_image_list_key_point(model_path, image_dir, out_dir, inWidth=480, inHeight=480, threshold=0.3):
        image_list = glob.glob(image_dir)
        for image_path in image_list:
            detect_key_point(model_path, image_path, out_dir, inWidth, inHeight, threshold)
    
    
    if __name__ == "__main__":
        model_path = "pb/graph_opt.pb"
        # image_dir = "body/*.jpg"
        out_dir = "result"
        # detect_image_list_key_point(model_path, image_dir, out_dir)
        image_path = "./test.jpg"
        detect_key_point(model_path, image_path, out_dir, inWidth=368, inHeight=368, threshold=0.05)

    References:

    [1] Python+OpenCV+OpenPose实现人体姿态估计(人体关键点检测), 不脱发的程序猿, CSDN blog

  • Human pose estimation (Human Posture Estimation) estimates the body's pose by correctly associating the human keypoints already detected in an image. Keypoints usually correspond to joints with some degree of freedom, such as the neck, shoulders, elbows, wrists, waist, knees and ankles, as shown in the figure below. ...

    Contents

    1. Overview of Human Pose Estimation

    2. Human Pose Estimation Datasets

    3. The OpenPose Library

    4. Implementation Principle

    5. Network Architecture

    6. Implementation Code


    1. Overview of Human Pose Estimation

    Human pose estimation (Human Posture Estimation) estimates the body's pose by correctly associating the human keypoints already detected in an image.

    Keypoints usually correspond to joints of the body that have some degree of freedom, such as the neck, shoulders, elbows, wrists, waist, knees and ankles, as shown in the figure below.

     

    By computing the relative positions of these keypoints in three-dimensional space, the body's current pose can be estimated.

    Going further, adding the time dimension and observing how the keypoints move over a period allows pose to be detected more accurately, future poses to be predicted, and more abstract behavior analysis to be performed, such as judging whether a person is making a phone call.
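
    As a toy illustration (not from the original post) of turning keypoints into such judgments, the angle at a joint can be computed from three keypoint coordinates; rules or classifiers over such angles, tracked across frames, are the basis of simple behavior analysis:

    import numpy as np

    def joint_angle(a, b, c):
        """Angle at point b, in degrees, between segments b->a and b->c."""
        v1 = np.asarray(a, float) - np.asarray(b, float)
        v2 = np.asarray(c, float) - np.asarray(b, float)
        cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-8)
        return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

    # Hypothetical (x, y) keypoints: a tightly bent elbow with the wrist
    # raised toward the ear is a crude single-frame "phone call" cue.
    shoulder, elbow, wrist = (120, 200), (160, 260), (130, 190)
    print("elbow angle: %.1f degrees" % joint_angle(shoulder, elbow, wrist))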

    Challenges in human pose detection:

    1. The number of people in an image is unknown.
    2. Interactions between people, such as contact and occlusion, are complex, which makes it hard to associate limbs, i.e. to determine which parts belong to which person.
    3. The more people in the image, the higher the computational cost (computation scales with the number of people), which makes real-time detection difficult.

    2. Human Pose Estimation Datasets

    Progress in human pose estimation was long slowed by the lack of high-quality datasets. In recent years, several challenging datasets have been released, enabling researchers to advance the field. Commonly used human pose estimation datasets:

    3. The OpenPose Library

    OpenPose is an open-source human pose recognition library developed at Carnegie Mellon University (CMU), built on convolutional neural networks and supervised learning, with Caffe as its framework. It can estimate body motion, facial expressions, finger movement and other poses, works for single or multiple people, and is highly robust. It was the world's first real-time, deep-learning-based multi-person 2D pose estimation application, and applications built on it have sprung up rapidly.

    Its theoretical basis is Realtime Multi-Person 2D Pose Estimation using Part Affinity Fields, a CVPR 2017 paper by Zhe Cao (http://people.eecs.berkeley.edu/~zhecao/#top), Tomas Simon, Shih-En Wei and Yaser Sheikh of the CMU Perceptual Computing Lab.

    Human pose estimation has broad application prospects in sports and fitness, motion capture, 3D virtual fitting, public-opinion monitoring and other areas; the application most people know is Douyin's (TikTok's) dance-battle feature.

    OpenPose project on GitHub: https://github.com/CMU-Perceptual-Computing-Lab/openpose

    4. Implementation Principle

    1. An input image is first passed through a convolutional network to extract a set of feature maps; the network then splits into two branches, which use CNNs to predict the Part Confidence Maps and the Part Affinity Fields respectively;
    2. With these two outputs, bipartite matching from graph theory is used to solve the part association problem, connecting the keypoints that belong to the same person; because PAFs are vector fields, the resulting matches are highly reliable, and the matched parts are finally merged into a whole-body skeleton for each person;
    3. Finally, multi-person parsing is computed on top of the PAFs: the multi-person parsing problem is converted into a graph problem and solved with the Hungarian algorithm. A sketch of how a PAF scores one candidate limb follows this list.

    (The Hungarian algorithm is the most common algorithm for bipartite-graph matching; its core is the search for augmenting paths, and it uses augmenting paths to find a maximum matching of a bipartite graph.)
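
    A minimal sketch (not the paper's code) of the PAF scoring idea: sample points along the segment between two candidate keypoints and average the dot product between the PAF vector there and the limb's unit direction. paf_x and paf_y stand for the two assumed PAF channels of one limb type:

    import numpy as np

    def paf_score(p1, p2, paf_x, paf_y, num_samples=10):
        """Average alignment of the PAF with the segment p1 -> p2."""
        p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
        d = p2 - p1
        norm = np.linalg.norm(d)
        if norm < 1e-8:
            return 0.0
        d /= norm  # unit vector along the candidate limb
        total = 0.0
        for t in np.linspace(0.0, 1.0, num_samples):
            x, y = (p1 + t * (p2 - p1)).astype(int)
            total += paf_x[y, x] * d[0] + paf_y[y, x] * d[1]
        return total / num_samples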

    5. Network Architecture

    Stage 1: the first 10 layers of VGGNet are used to create feature maps for the input image.

    Stage 2: a two-branch, multi-stage CNN is used. The first branch predicts a set of 2D confidence maps (S) of body-part locations (e.g. elbows, knees). The figure below shows the confidence map and the affinity map for one keypoint, the left shoulder.

    The second branch predicts a set of 2D vector fields (L) of part affinities, which encode the degree of association between body parts. The figure below shows the part affinity between the neck and the left shoulder.

    Stage 3: the confidence maps and affinity fields are parsed by greedy inference to produce the 2D keypoints of all people in the image.
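
    A minimal sketch of that greedy parsing step, with hypothetical inputs rather than the author's code: given PAF scores for the candidate pairs of one limb type, accept pairs in descending score order, never reusing an endpoint:

    def greedy_connect(pair_scores):
        """pair_scores: list of (score, from_idx, to_idx); returns accepted limbs."""
        used_from, used_to, limbs = set(), set(), []
        for score, i, j in sorted(pair_scores, reverse=True):
            if i not in used_from and j not in used_to:
                used_from.add(i)
                used_to.add(j)
                limbs.append((i, j, score))
        return limbs

    print(greedy_connect([(0.9, 0, 0), (0.8, 1, 1), (0.4, 0, 1)]))
    # -> [(0, 0, 0.9), (1, 1, 0.8)]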

    6. Implementation Code

    import cv2 as cv
    import argparse
    
    parser = argparse.ArgumentParser()
    parser.add_argument('--input', help='Path to image or video. Skip to capture frames from camera')
    parser.add_argument('--thr', default=0.2, type=float, help='Threshold value for pose parts heat map')
    parser.add_argument('--width', default=368, type=int, help='Resize input to specific width.')
    parser.add_argument('--height', default=368, type=int, help='Resize input to specific height.')
    
    args = parser.parse_args()
    
    BODY_PARTS = { "Nose": 0, "Neck": 1, "RShoulder": 2, "RElbow": 3, "RWrist": 4,
                   "LShoulder": 5, "LElbow": 6, "LWrist": 7, "RHip": 8, "RKnee": 9,
                   "RAnkle": 10, "LHip": 11, "LKnee": 12, "LAnkle": 13, "REye": 14,
                   "LEye": 15, "REar": 16, "LEar": 17, "Background": 18 }
    
    POSE_PAIRS = [ ["Neck", "RShoulder"], ["Neck", "LShoulder"], ["RShoulder", "RElbow"],
                   ["RElbow", "RWrist"], ["LShoulder", "LElbow"], ["LElbow", "LWrist"],
                   ["Neck", "RHip"], ["RHip", "RKnee"], ["RKnee", "RAnkle"], ["Neck", "LHip"],
                   ["LHip", "LKnee"], ["LKnee", "LAnkle"], ["Neck", "Nose"], ["Nose", "REye"],
                   ["REye", "REar"], ["Nose", "LEye"], ["LEye", "LEar"] ]
    
    inWidth = args.width
    inHeight = args.height
    
    net = cv.dnn.readNetFromTensorflow("graph_opt.pb")
    
    cap = cv.VideoCapture(args.input if args.input else 0)
    
    while cv.waitKey(1) < 0:
        hasFrame, frame = cap.read()
        if not hasFrame:
            cv.waitKey()
            break
    
        frameWidth = frame.shape[1]
        frameHeight = frame.shape[0]
        
        net.setInput(cv.dnn.blobFromImage(frame, 1.0, (inWidth, inHeight), (127.5, 127.5, 127.5), swapRB=True, crop=False))
        out = net.forward()
        out = out[:, :19, :, :]  # MobileNet output [1, 57, -1, -1], we only need the first 19 elements
    
        assert(len(BODY_PARTS) == out.shape[1])
    
        points = []
        for i in range(len(BODY_PARTS)):
            # Slice heatmap of the corresponding body part.
            heatMap = out[0, i, :, :]
    
            # Originally, we try to find all the local maximums. To simplify a sample
            # we just find a global one. However only a single pose at the same time
            # could be detected this way.
            _, conf, _, point = cv.minMaxLoc(heatMap)
            x = (frameWidth * point[0]) / out.shape[3]
            y = (frameHeight * point[1]) / out.shape[2]
            # Add the point if its confidence is higher than the threshold.
            points.append((int(x), int(y)) if conf > args.thr else None)
    
        for pair in POSE_PAIRS:
            partFrom = pair[0]
            partTo = pair[1]
            assert(partFrom in BODY_PARTS)
            assert(partTo in BODY_PARTS)
    
            idFrom = BODY_PARTS[partFrom]
            idTo = BODY_PARTS[partTo]
    
            if points[idFrom] and points[idTo]:
                cv.line(frame, points[idFrom], points[idTo], (0, 255, 0), 3)
                cv.ellipse(frame, points[idFrom], (3, 3), 0, 0, 360, (0, 0, 255), cv.FILLED)
                cv.ellipse(frame, points[idTo], (3, 3), 0, 0, 360, (0, 0, 255), cv.FILLED)
    
        t, _ = net.getPerfProfile()
        freq = cv.getTickFrequency() / 1000
        cv.putText(frame, '%.2fms' % (t / freq), (10, 20), cv.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 0))
    
        cv.imshow('OpenPose using OpenCV', frame)
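
    Assuming the script above is saved as openpose.py next to graph_opt.pb (the filename is an assumption), it can be run, for example, with: python openpose.py --input video.mp4 --thr 0.2. Omitting --input makes it capture frames from the camera instead.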

    The code and model for this project are available at: https://download.csdn.net/download/m0_38106923/11265524


  • Unhealthy sitting-posture detection based on deep-learning object detection and human keypoint detection (partial code)
  • "2D Pose real-time human keypoint detection (Python/Android/C++ Demo)": https://panjinquan.blog.csdn.net/article/details/115765863
  • To improve the running speed and real-time performance of human pose estimation on mobile devices, an improved human keypoint detection algorithm is proposed: a lightweight MobileNetV2 backbone is combined with depthwise-separable convolution modules to speed up feature extraction, and a refinement network performs multi-scale keypoint prediction...
  • Starting from zero, the course covers data annotation, dataset construction, model training, model testing, model optimization, environment setup and more, so that learners can quickly pick up the key techniques of keypoint detection in AI imaging and apply them in real work.
  • Human Keypoint Detection Datasets

    2020-07-20 11:06:27
    The COCO keypoint track is currently one of the authoritative public competitions for human keypoint detection. The COCO dataset represents the body with 17 keypoints: the nose, left and right eyes, ears, shoulders, elbows, wrists, hips, knees and ankles. The human keypoint detection task is to...

    Contents

    2D

    COCO

    MPII(MPII Human Pose Dataset)

    LSP(Leeds Sports Pose Dataset)-- Sport

    FLIC/FLIC-full(Frames Labeled In Cinema)-- Hollywood movies(CVPR2013)

    FLIC-plus Dataset(NIPS2014)

    AI Challenger

    PoseTrack

    3D

    human3.6M

    HumanEva

    MPI-INF-3DHP

    ALL

    Unite The People(Closing the Loop Between 3D and 2D Human Representations) -- Sport


    2D

    COCO

    https://cocodataset.org/#download

    The COCO keypoint track is currently one of the authoritative public competitions for human keypoint detection.

    The COCO dataset represents the body with 17 keypoints: the nose, left and right eyes, ears, shoulders, elbows, wrists, hips, knees and ankles. The human keypoint detection task is to detect the people in an input image together with their keypoint locations. At most 17 full-body keypoints are annotated per person; an image contains 2 people on average, and at most 13.

    MS COCO has more than 300K samples; it is the main, mainstream dataset for multi-person keypoint detection.
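
    A minimal sketch of reading these annotations with pycocotools (the annotation path is an assumption). Each person annotation stores the 17 keypoints as a flat [x1, y1, v1, ..., x17, y17, v17] list, where v = 0 means not labeled, 1 labeled but not visible, and 2 labeled and visible:

    from pycocotools.coco import COCO

    coco = COCO("annotations/person_keypoints_val2017.json")
    img_ids = coco.getImgIds(catIds=coco.getCatIds(catNms=["person"]))
    anns = coco.loadAnns(coco.getAnnIds(imgIds=img_ids[0], iscrowd=None))
    for ann in anns:
        kps = ann["keypoints"]
        vs = kps[2::3]  # visibility flags, one per keypoint
        print(ann["num_keypoints"], "labeled,", sum(v == 2 for v in vs), "visible")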

     

    MPII(MPII Human Pose Dataset)

    http://human-pose.mpi-inf.mpg.de/#results

    A single- and multi-person keypoint dataset with 16 keypoint coordinates plus a visibility flag for each, and about 25K samples; it is the main dataset for single-person keypoint detection. Annotations are stored as a MATLAB struct (.mat). The fields useful for keypoint detection include the person box, which is annotated with a center and a scale; the person scale is expressed relative to a height of 200 pixels (i.e. the height has already been divided by 200).
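
    A minimal sketch, under the convention just described, of recovering a pixel box from an MPII center/scale annotation (the square crop is a common simplification, not part of the annotation itself):

    def mpii_center_scale_to_box(center, scale, pixel_std=200):
        """Convert an MPII (center, scale) pair into (x1, y1, x2, y2)."""
        h = scale * pixel_std  # person height in pixels
        w = h                  # assume a square crop
        x1, y1 = center[0] - w / 2, center[1] - h / 2
        return x1, y1, x1 + w, y1 + h

    print(mpii_center_scale_to_box((320, 240), 1.5))  # (170.0, 90.0, 470.0, 390.0)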

    For the preprocessing of these two datasets (COCO and MPII), see:

    1>https://github.com/microsoft/human-pose-estimation.pytorch/tree/master/lib/dataset

    2>https://arxiv.org/abs/1804.06208

    3>https://github.com/leoxiaobin/deep-high-resolution-net.pytorch

     

    LSP(Leeds Sports Pose Dataset)-- Sport

    https://sam.johnson.io/research/lsp.html

    A single-person keypoint dataset with 14 keypoints and 2K samples; in current research it serves as a secondary dataset.

    FLIC/FLIC-full(Frames Labeled In Cinema)-- Hollywood movies(CVPR2013)

    https://bensapp.github.io/flic-dataset.html

    A single-person keypoint dataset with 9 keypoints and 20K samples; in current research it serves as a secondary dataset.

     

    FLIC-plus Dataset(NIPS2014)

    https://jonathantompson.github.io/flic_plus.htm

    A subset of FLIC-full.

    AI Challenger

    A multi-person keypoint dataset with 14 keypoints and about 380K samples; a competition dataset.

    PoseTrack

    The newest human skeleton keypoint dataset: a multi-person keypoint tracking dataset covering three tasks (single-frame keypoint detection, multi-frame keypoint detection, and multi-person keypoint tracking), with more than 500 video sequences, over 20K frames, and 15 keypoints.

    3D

    human3.6M

    http://vision.imar.ro/human3.6m/description.php

    The largest dataset for 3D human pose estimation, consisting of 3.6 million poses and the corresponding video frames, in which 11 actors perform 15 daily activities captured from 4 camera viewpoints. The dataset is nearly 100 GB.

    HumanEva

    http://humaneva.is.tue.mpg.de/

    The HumanEva-I dataset contains 7 calibrated video sequences (4 grayscale and 3 color) that are synchronized with 3D body poses obtained from a motion capture system. The database contains 4 subjects performing 6 common actions (e.g. walking, jogging, gesturing). Error metrics for computing error in 2D and 3D pose are provided to participants. The dataset contains training, validation and testing (with withheld ground truth) sets.

    MPI-INF-3DHP

    http://gvv.mpi-inf.mpg.de/3dhp-dataset/

    ALL

    Unite The People(Closing the Loop Between 3D and 2D Human Representations) -- Sport

    http://files.is.tuebingen.mpg.de/classner/up/

  • Unhealthy sitting-posture detection based on deep-learning object detection and human keypoint detection. Partial code download link: 0. Experimental results 1. Definition of a standard sitting posture There are lots of literatures discussing what kind of standards are considered as healthy sitting...

    Unhealthy Sitting-Posture Detection Based on Deep-Learning Object Detection and Human Keypoint Detection

    Code download link: download page

    0. Experimental Results
    (result figures omitted)

    1. Definition of a Standard Sitting Posture
    There is a substantial literature discussing which standards count as healthy sitting postures. McAtamney et al. [30] proposed that a lumbar-spine or cervical-spine angle greater than 20° should be judged an unhealthy sitting posture. Burgess-Limerick et al. [31] stated that a healthy viewing distance is about 40-70 cm. Springer et al. [32] showed that the best angle for a visual screen is 15°-30° below horizontal sight. Based on ergonomics [33, 34], our method comprehensively extracts features that are strongly correlated with sitting-posture health from human body joints, persons and scenes [29], as illustrated in Figure 4.
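
    A minimal sketch of these thresholds written as plain rules over hypothetical measurements; the paper's actual method fuses skeletal and scene features with semantic analysis rather than hard-coding thresholds:

    def is_unhealthy(lumbar_deg, cervical_deg, view_dist_cm, gaze_deg):
        if lumbar_deg > 20 or cervical_deg > 20:  # spine angles, McAtamney et al. [30]
            return True
        if not 40 <= view_dist_cm <= 70:          # viewing distance, Burgess-Limerick et al. [31]
            return True
        if not 15 <= gaze_deg <= 30:              # gaze below horizontal, Springer et al. [32]
            return True
        return False

    print(is_unhealthy(25, 10, 55, 20))  # True: lumbar angle exceeds 20 degrees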
    (Figure 4 omitted)
    2. Principle of Unhealthy Sitting-Posture Detection
    Human keypoint information is combined with scene information from an object detector to decide whether a sitting posture is healthy or not.
    Abstract: Sitting with an unhealthy posture for a long time seriously harms human health and can even lead to lumbar disease, cervical disease and myopia. Automatic vision-based detection of unhealthy sitting posture has become a hot research topic. However, the existing methods only focus on extracting features of humans themselves, and lack an understanding of the relevancies among objects in the scene; hence they fail to recognize some types of unhealthy sitting postures in complicated environments. To alleviate these problems, a scene recognition and semantic analysis approach to unhealthy sitting posture detection in screen-reading is proposed in this paper. The key skeletal points of the human body are detected and tracked with a Microsoft Kinect sensor. Meanwhile, a deep learning method, i.e. Faster R-CNN, is used in the scene recognition part of our method to accurately detect objects and extract relevant features. Then our method performs semantic analysis through Gaussian-mixture behavioral clustering for scene understanding. The relevant features in the scene and the skeletal features extracted from the human are fused into semantic features to discriminate various types of sitting postures. Experimental results demonstrate that our method accurately and effectively detects various types of unhealthy sitting postures in screen-reading and avoids erroneous detection in complicated environments. Compared with the existing methods, our method detects more types of unhealthy sitting postures, including ones the existing methods could not detect. It can potentially be applied and integrated as a medical assistance in health care or in robotic systems in the workplace.
    Keywords: unhealthy sitting posture detection; deep learning; scene recognition; semantic analysis; behavioral clustering

    3. Decision Flow Diagram
    (figure omitted)
    4. References

    1. Hoogendoorn, W.E.; Bongers, P.M.; Vet, H.C. Flexion and Rotation of the Trunk and Lifting at Work are Risk Factors for Low Back Pain: Results of a Prospective Cohort Study. Spine. 2014, 25, 3087-3092.
    2. Chandna, S.; Wang, W. Bootstrap Averaging for Model-Based Source Separation in Reverberant Conditions. IEEE Trans. Audio. Speech. 2018, 26, 806-819.
    3. Lis, A.M.; Black, K.; Korn, H.; Nordin, M. Association between Sitting and Occupational LBP. Eur. Spine. J. 2007, 16, 283-298.
    4. O'Sullivan, P.B.; Grahamslaw, K.M.; Lapenskie, S.C. The Effect of Different Standing and Sitting Posture on Trunk Muscle Activity in a Pain-free Population. Spine. 2002, 27, 1238-1124.
    5. Straker, L.; Mekhora, K. An Evaluation of Visual Display Unit Placement by Electromyography, Posture, Discomfort and Preference. Int. J. Ind. Ergonom. 2000, 26, 389-398.
    6. Grandjean, E.; Hünting, W. Ergonomics of Posture - Review of Various Problems of Standing and Sitting Posture. Appl. Ergon. 1977, 8, 135-140.
    7. Meyer, J.; Arnrich, B.; Schumm, J.; Troster, G. Design and Modeling of a Textile Pressure Sensor for Sitting Posture Classification. IEEE Sens. J. 2010, 10, 1391-1398.
    8. Mattmann, C.; Amft, O.; Harms, H.; Troster, G.; Clemens, F. Recognizing Upper Body Postures using Textile Strain Sensors. In Proceedings of the 11th IEEE International Symposium on Wearable Computers, Boston, MA, 2007, 29-36.
    9. Ma, S.; Cho, W.H.; Quan, C.H.; Lee, S. A Sitting Posture Recognition System Based on 3 Axis Accelerometer. In Proceedings of the IEEE CIBCB, Chiang Mai, 2016, 1-3.
    10. Foubert, N.; McKee, A.M.; Goubran, R.A.; Knoefel, F. Lying and Sitting Posture Recognition and Transition Detection Using a Pressure Sensor Array. In Proceedings of the 2012 IEEE International Symposium on Medical Measurements and Applications, Budapest, 2012, 1-6.
    11. Liang, G.; Cao, J.; Liu, X. Smart Cushion: A Practical System for Fine-grained Sitting Posture Recognition. In Proceedings of the IEEE PerCom Workshops, Kona, HI, 2017, 419-424.
    12. Huang, Y.R.; Ouyang, X.F. Sitting Posture Detection and Recognition Using Force Sensor. In Proceedings of the 5th International Conference on BioMedical Engineering and Informatics, Chongqing, 2012, 1117-1121.
    13. Song-Lin, W.; Rong-Yi, C. Human Behavior Recognition Based on Sitting Postures. In Proceedings of 3CA, Tainan, 2010, 138-141.
    14. Mu, L.; Li, K.; Wu, C. A Sitting Posture Surveillance System Based on Image Processing Technology. In Proceedings of the 2nd International Conference on Computer Engineering and Technology, Chengdu, 2010, V1-692-V1-695.
    15. Zhang, B.C.; Gao, Y.; Zhao, S. Local Derivative Pattern Versus Local Binary Pattern: Face Recognition With High-Order Local Pattern Descriptor. IEEE Trans. Image Processing. 2009, 19, 533-544.
    16. Zhang, B.C.; Yang, Y.; Chen, C.; Yang, L. Action Recognition Using 3D Histograms of Texture and a Multi-Class Boosting Classifier. IEEE Trans. Image Processing. 2017, 26, 4648-4660.
    17. Wang, W.J.; Chang, J.W.; Huang, S.F. Human Posture Recognition Based on Images Captured by the Kinect Sensor. Int. J. Adv. Robot. Syst. 2016, 13, 1.
    18. Yao, L.; Min, W.; Cui, H. A New Kinect Approach to Judge Unhealthy Sitting Posture Based on Neck Angle and Torso Angle. In Proceedings of the ICIG, Springer, Cham, 2017, 340-350.
    19. Zhang, B.C.; Perina, A.; Li, Z. Bounding Multiple Gaussians Uncertainty with Application to Object Tracking. Int. J. Comput. Vision. 2018, 27, 4357-4366.
    20. Ponglangka, W.; Theera-Umpon, N.; Auephanwiriyakul, S. Eye-gaze Distance Estimation Based on Gray-level Intensity of Image Patch. In Proceedings of the IEEE ISPACS, Chiang Mai, 2011, 1-5.
    21. Zhang, B.C.; Luan, S.; Chen, C. Latent Constrained Correlation Filter. IEEE Trans. Image Processing. 2018, 27, 1038-1048.
    22. Girshick, R.; Donahue, J.; Darrell, T.; Malik, J. Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation. In Proceedings of the IEEE CVPR, Columbus, OH, 2014, 580-587.
    23. He, K.; Zhang, X.; Ren, S.; Sun, J. Spatial Pyramid Pooling in Deep Convolutional Networks for Visual Recognition. IEEE Trans. PAMI. 2015, 37, 1904-1916.
    24. Girshick, R. Fast R-CNN. In Proceedings of the IEEE ICCV, Santiago, 2015, 1440-1448.
    25. Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. IEEE Trans. PAMI. 2017, 39, 1137-1149.
    26. He, K.; Zhang, X.; Ren, S. Deep Residual Learning for Image Recognition. In Proceedings of the IEEE CVPR, 2016, 770-778.
    27. Luan, S.; Zhang, B.C.; Zhou, S. Gabor Convolutional Networks. IEEE Trans. Image Processing. 2018, 27, 4357-4366.
    28. Shelhamer, E.; Long, J.; Darrell, T. Fully Convolutional Networks for Semantic Segmentation. IEEE Trans. PAMI. 2017, 39, 640-651.
    29. Min, W.D.; Cui, H.; Rao, H.; Li, Z.; Yao, L. Detection of Human Falls on Furniture Using Scene Analysis Based on Deep Learning and Activity Characteristics. IEEE Access. 2018, 6, 9324-9335.
    30. McAtamney, L.; Corlett, E.N. RULA: A Survey Method for the Investigation of Work-related Upper Limb Disorders. Appl. Ergon. 1993, 24, 91-99.
    31. Burgess-Limerick, R.; Plooy, A.; Ankrum, D.R. The Effect of Imposed and Self-selected Computer Monitor Height on Posture and Gaze Angle. Clin. Biomech. 1998, 13, 584-592.
    32. Springer, T.J. VDT Workstations: A Comparative Evaluation of Alternatives. Appl. Ergon. 1982, 13, 211-212.
    33. Shikdar, A.A.; Al-Kindi, M.A. Office Ergonomics: Deficiencies in Computer Workstation Design. International Journal of Occupational Safety & Ergonomics, 2007, 13(2), 215-223.
    34. Wasenmüller, O.; Stricker, D. Comparison of Kinect V1 and V2 Depth Images in Terms of Accuracy and Precision. In Proceedings of the Asian Conference on Computer Vision, Springer, Cham, 2016, 34-45.
    
  • Human keypoint detection is a highly challenging research area in computer vision, with applications in action recognition, abnormal-behavior detection, security and more. This article proposes a deep-learning model that addresses many of the problems in human keypoint detection and improves detection quality. The main remaining problems in this task are...
  • A Survey of Human Keypoint Detection

    2020-01-13 14:44:46
    Reposted with a link to the original article ... Human pose estimation (Human Pose Estimation) is also called human keypoint detection (Human Keypoints Detection). Research on human pose estimation can be roughly categorized as follows. 1. RGB vs RGBD: the latter has an extra Depth...
  • MS COCO object detection and human keypoint detection evaluation metrics: https://blog.csdn.net/bryant_meng/article/details/108325287
  • A Comprehensive Survey of Human Keypoint Detection (2016-2020)

    2020-07-17 14:05:28
    1. The COCO keypoint track is currently one of the authoritative public competitions for human keypoint detection. The COCO dataset represents the body with 17 keypoints: the nose, left and right eyes, ears, shoulders, elbows, wrists, hips, knees and ankles. The human keypoint detection task is to...
  • Introduction to Human Keypoint Detection Datasets

    2019-05-22 17:13:39
    1. The COCO dataset ... Overall statistics of the training and validation sets ... the distribution of the number of labeled keypoints per person: people with 11-15 labeled keypoints are the most numerous at nearly 70,000, followed by 6-10 with over 40,000, then 16-17, 2-5, and 1. ...
  • Deploying Parallel Human Keypoint Detection Models with OpenVINO

    2020-07-17 16:33:01
    Deploying parallel human keypoint detection models with OpenVINO: an introduction to the OpenVINO toolkit; using the OpenVINO inference engine; converting a model to OpenVINO with the Model Optimizer; listing the devices available to the inference engine; single-threaded inference with the Python API; single-threaded inference on an Intel integrated GPU; using Intel's second...
  • Python OpenCV OpenPose: implementing human pose estimation
  • Human pose estimation (Human Pose Estimation) is also called human keypoint detection (Human Keypoints Detection). Research on human pose estimation can be roughly categorized as follows. 1. RGB vs RGBD: the latter has an extra depth channel and is often used in research on 3D human pose estimation. 2...
  • Human Keypoint Detection: Joint Points

    2020-03-24 15:54:50
    https://blog.csdn.net/qq_21033779/article/details/84840307?depth_1-utm_source=distribute.pc_relevant.none-task&utm_source=distribute.... Human skeleton keypoint detection is already very mature; from Zhe Cao's paper one can see...
  • Works on images, video, or a live camera: OpenCV-based human pose estimation (human keypoint detection). OpenPose is an open-source human pose recognition library developed at Carnegie Mellon University (CMU), built on convolutional neural networks and supervised learning with Caffe as its framework. It can estimate body motion, facial...
  • Caffe: https://github.com/CMU-Perceptual-Computing-Lab/openpose (this is the original version of OpenPose, and the best one to use). Keras version: ... also runs very smoothly; compared with the Caffe...
  • Pose Estimation (Human Keypoint Detection): CPN

    2020-03-20 15:44:41
    CPN: Cascaded Pyramid Network for Multi-... This is a top-down method that depends on a separate person-detection model: people are detected first, and then this model detects their keypoints. The model has two parts: GlobalNet, built on a ResNet backbone, produces initial keypoint...
  • Human Keypoint Detection | Survey (1)

    2018-07-18 10:22:30
    Top-down: detect all the people first, then estimate each person's pose (i.e. detect individual people, then run single-person pose estimation on each); a person detector combined with a single-person pose estimator implements this. Pros: the approach is intuitive and natural...
  • 1. PCK (Percentage of Correct Keypoints): the percentage of detected keypoints whose normalized distance to the corresponding ground truth falls within a set threshold (the percentage of detections that fall within a normalized distance of the ground truth). ...
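
    A minimal sketch of the metric, assuming (N, K, 2) arrays of predicted and ground-truth keypoints and a per-sample normalization length (e.g. the head-segment length in MPII's PCKh):

    import numpy as np

    def pck(pred, gt, norm_len, thr=0.5):
        """Fraction of keypoints whose normalized distance to GT is <= thr."""
        dist = np.linalg.norm(pred - gt, axis=-1)      # (N, K) pixel distances
        return float(np.mean(dist / norm_len[:, None] <= thr))

    pred = np.array([[[10., 10.], [52., 40.]]])
    gt = np.array([[[12., 11.], [50., 42.]]])
    print(pck(pred, gt, norm_len=np.array([20.0])))    # -> 1.0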
