  • Calling the ArcSoft SDK from Android Studio for face detection; just a small demo UI, with documentation included in the archive

    I first wrote this post in March 2018. More than a year has passed since then and the ArcSoft SDK has changed a great deal, so I am rewriting the post here.

    Thanks to everyone who has followed this post over the past year.

     

    1. Account registration


    First, register on the ArcSoft developer site (sign up with a mobile phone number):

    http://www.arcsoft.com.cn/ai/arcface.html

     

    2. Verification

    Using version 2.0 of the face recognition SDK (ArcFace) requires either personal or enterprise verification. Version 1.2 needs no verification, but it is too old and much of its code no longer works, so everything below is about version 2.0.

    Personal verification only requires uploading photos of the front and back of your ID card, which is simple. (Fortunately you do not have to take a photo of yourself holding the card, which would be awkward.)

    Review time: officially within 3 days; mine was approved in about an hour.

     

    3. Create an application

    First create an application, then add an SDK; make sure to choose ArcFace, version 2.0.

     

    After adding the application, download the SDK (71.2 MB, quite large).

    Then click "View activation code" to see the APP_ID and SDK_KEY; these are essential.

     

    4. Download and run the demo

    Download the demo from this page:

    https://github.com/ArcsoftEscErd/ArcfaceDemo_Android

    Running the demo as-is will fail, because the SDK we downloaded has not been wired in yet:

    Create a libs folder and put arcsoft_face.jar from the SDK package into it.

    Create jniLibs/armeabi-v7a and put libarcsoft_face.so and libarcsoft_face_engine.so from the SDK package into it.

    Fill the APP_ID and SDK_KEY obtained earlier into Common->Constants.java.

     

    5. Results

    For example, with "face attribute detection (image)" I get:

    face[0]: 20 (it says I am 20; I am not that young)

    face[0]: male (it says I am male)

    face[0]: ALIVE (it says I am a live person)

    I will not explain these further; the English labels are fairly self-explanatory.

     

    Next, try "face comparison 1:N (image vs. image)".

    Select a main image and add comparison images (more than one can be added).

    Each comparison image is shown with a value such as similar=0.98, which is the face similarity score.

     

    6. Conclusion

    Anything deeper is left for everyone to explore together.

     

     

     


    Preface:

    ArcSoft SDK 3.0 is the most convenient, best-performing, free offline face recognition SDK I have used so far.
    It is not offered for Python, but someone has already wrapped the C++ interface for calling from Python;
    on that basis I completed a few features and can now do high-accuracy, multi-face, real-time face recognition,
    with age/gender estimation, liveness detection, face 3D angles, and more.

    I used ArcSoft SDK 1.0 back in 2018 and wrote a post about it (reading link).

    Reference code

    This builds on someone else's code; I developed the real-time face recognition on top of it.
    Original links:
    https://my.oschina.net/u/4584428/blog/4712244
    https://gitee.com/shellcoder/ArcFace-python

    The tutorial below only walks through the key parts;
    the rest, and the overall flow, can be understood from the code and is not complicated.
    The official documentation is also fairly detailed.
    How to use

    1. Put the dynamic libraries from the downloaded SDK 3.0 into the lib folder.
    2. Add your own APP_ID and SDK_key in the demo_cam file.
    3. Registration images go in the asserts folder; the filename is that person's identity (a gallery-loading sketch follows below).
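
    As a rough illustration of that registration convention, a gallery could be loaded as below. This is only a sketch: it assumes the wrapper objects introduced later in this post (LoadImg, face_engine, MOK, ASF_SingleFaceInfo) and that the multi-face result exposes a faceNum field.

      import os

      def load_gallery(folder="asserts"):
          gallery = {}  # person name (taken from the filename) -> face feature
          for filename in os.listdir(folder):
              name = os.path.splitext(filename)[0]
              img = LoadImg(os.path.join(folder, filename))
              res, faces = face_engine.ASFDetectFaces(img)
              if res != MOK or faces.faceNum < 1:
                  continue  # skip images where no face was found
              single = ASF_SingleFaceInfo()
              single.faceRect = faces.faceRect[0]
              single.faceOrient = faces.faceOrient[0]
              res, feature = face_engine.ASFFaceFeatureExtract(img, single)
              if res == MOK:
                  gallery[name] = feature
          return gallery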

    My improved project is on GitHub (link in the original post).

    Getting the SDK

    1. Go to the ArcSoft official site (link) and log in or register an account.
    2. Go to the download page (link) and download ArcSoft SDK 3.0.
    3. Click "Get for free" and choose the platform, version, language, and application (create one yourself).
    4. After confirming, download the corresponding SDK.
    5. About APP_ID and SDK_key:
      The APP_ID and SDK_key above must be written into the code; they are used for activation.
      The first run needs a network connection because activation is performed; after the first successful activation an ArcFace64.dat file is produced, and once this file exists the activation code no longer needs to run.
      For personally verified users, free activation is limited to 100 devices per year.
    6. SDK directory layout (see the screenshot in the original post)

    Face detection and face recognition

    The C++ interfaces are exposed for use from Python.

    1. Activation: call ASFOnlineActivation
      # Activation interface; the first activation requires a network connection
      res = ASFOnlineActivation(APPID, SDKKey)

      The returned res is a status code; 0 or 90114 means activation succeeded.
      # Get activation file information
      res,activeFileInfo = ASFGetActiveFileInfo()

      This returns the SDK version, local machine information, etc.; again res is a status code and 0 means success (a compact activation sketch follows below).
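
      A compact sketch of this activation flow, combining it with the ArcFace64.dat note from the SDK section above (wrapper names as used in this post):

      import os

      if not os.path.exists("ArcFace64.dat"):         # activation file left by a previous successful run
          res = ASFOnlineActivation(APPID, SDKKey)    # first run needs a network connection
          if res not in (0, 90114):                   # 0 = OK, 90114 = already activated
              raise RuntimeError("ArcFace activation failed, error code: {}".format(res))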

    2. Initialization
      First obtain a face recognition engine:
      ArcFace() creates an engine object.

      Initialization interface:
      face_engine.ASFInitEngine(ASF_DETECT_MODE_IMAGE,ASF_OP_0_ONLY,30,10,5)

      • detectMode: VIDEO mode or IMAGE mode. VIDEO mode processes consecutive frames; IMAGE mode processes single images.
      • detectFaceOrientPriority: face detection orientation; detecting a single orientation is recommended. IMAGE mode does not support full-orientation (ASF_OP_0_HIGHER_EXT) detection.
      • detectFaceScaleVal: minimum detectable face scale (ratio of the image's long side to the face box's long side). Range [2,32] in VIDEO mode, recommended 16; range [2,32] in IMAGE mode, recommended 30.
      • detectFaceMaxNum: maximum number of faces to detect, range [1,50].
      • combinedMask: the combination of features to enable (multiple allowed). Face detection is 1, face feature is 4, age is 8, gender is 16, RGB liveness is 128, IR liveness is 1024; to enable several, simply add the values. (For face recognition above I need detection plus feature, i.e. 1 + 4 = 5; see the sketch below.)
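
      As a sketch, the mask can be composed from the bit values listed above instead of writing a bare 5. The constant names here are local aliases for illustration only; the wrapper may already define equivalents.

      ASF_FACE_DETECT     = 0x1    # face detection
      ASF_FACERECOGNITION = 0x4    # face feature / recognition
      ASF_AGE             = 0x8
      ASF_GENDER          = 0x10
      ASF_LIVENESS        = 0x80

      mask = ASF_FACE_DETECT | ASF_FACERECOGNITION   # 1 + 4 = 5, as used above
      face_engine.ASFInitEngine(ASF_DETECT_MODE_IMAGE, ASF_OP_0_ONLY, 30, 10, mask)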
    3. Preprocessing
      Note that the input image width must be a multiple of 4,
      so the input image size has to be adjusted first, otherwise an error is raised.

      def LoadImg(imagePath):
          """
            Resize the input image so that its width and height are multiples of 4, as required
          """
          img = cv2.imdecode(np.fromfile(imagePath,dtype=np.uint8),-1)  # handles image paths containing Chinese characters
          #img = cv2.imread(imagePath)
          sp = img.shape
          img = cv2.resize(img, (sp[1]//4*4, sp[0]//4*4))
          return img
      

      The input is the image path; the output is the loaded, preprocessed image.

    4. Detect faces and extract face features

      img1 = cv2.imread ("asserts/1.jpg")
      # Detect the faces in the first image
      res,detectedFaces1 = face_engine.ASFDetectFaces(img1)
      #print(detectedFaces1)  # face information in the image
      if res==MOK:
          single_detected_face1 = ASF_SingleFaceInfo()
          single_detected_face1.faceRect = detectedFaces1.faceRect[0]
          single_detected_face1.faceOrient = detectedFaces1.faceOrient[0]
          res ,face_feature1= face_engine.ASFFaceFeatureExtract(img1,single_detected_face1)
          if (res!=MOK):
              print ("ASFFaceFeatureExtract 1 fail: {}".format(res))
      else:
          print("ASFDetectFaces 1 fail: {}".format(res))
      

      face_engine.ASFDetectFaces() detects faces
      and returns the face rectangles (list), face orientations (list), and face count (int) for the image.

      ASFFaceFeatureExtract() extracts a face feature,
      which is then used for face comparison.

    5. Face comparison

    res,score = face_engine.ASFFaceFeatureCompare(face_feature1,face_feature2)
    The inputs are the two face features; the first return value is a status code and the second is the similarity.
    For ordinary photos a threshold of 0.8 is sufficient (see the sketch below).
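
    A minimal sketch of turning the similarity into a same-person decision, using the 0.8 threshold mentioned above and the wrapper names from this post:

    SAME_PERSON_THRESHOLD = 0.8  # recommended for ordinary (life) photos
    res, score = face_engine.ASFFaceFeatureCompare(face_feature1, face_feature2)
    if res == MOK and score >= SAME_PERSON_THRESHOLD:
        print("same person, similarity = {:.2f}".format(score))
    else:
        print("different person or compare failed, res = {}".format(res))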

    6. Age, gender, and other attributes
      These attributes must be among the features enabled when the engine was initialized, i.e. add the corresponding flags to the mask.
      Getting the age:
    res = face_engine.ASFProcess(img1,detectedFaces1,processMask)
    print(processMask)
    if res == MOK:
        # get the age
        res,ageInfo = face_engine.ASFGetAge()
    
    ageInfo is the detected age.
    
    The other attributes are retrieved in a similar way; see the code (a short gender sketch follows below).
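
    For example, gender could be read the same way, assuming the wrapper exposes ASFGetGender analogously to ASFGetAge (it mirrors the C API; check the wrapper for the exact name and signature):

    res, genderInfo = face_engine.ASFGetGender()
    if res == MOK:
        print("gender:", genderInfo)  # in the C API: 0 = male, 1 = female, -1 = unknown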
    

    Real-time face recognition

    This uses the faceID mentioned in the official docs: while a detected face is judged to be the same person, feature extraction and recognition are not repeated,
    which reduces the running time.

    (A screenshot of one run is shown in the original post.)

    However, if the first recognition for a faceID is wrong, every following frame for that faceID will also be wrong. To guard against this, the code re-runs recognition once a faceID has persisted for 5 consecutive frames; a caching sketch follows below.
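
    A rough sketch of that caching idea (the names here are made up for illustration; the actual logic lives in the project code):

    RECHECK_EVERY = 5
    track_cache = {}  # faceID -> {"name": ..., "frames": ...}

    def identity_for(face_id, extract_and_match):
        entry = track_cache.get(face_id)
        if entry is None or entry["frames"] >= RECHECK_EVERY:
            # expensive path: feature extraction + gallery comparison
            track_cache[face_id] = {"name": extract_and_match(), "frames": 0}
        else:
            entry["frames"] += 1  # reuse the cached identity for this frame
        return track_cache[face_id]["name"]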

    Other notes

    I have not yet figured out multithreaded face recognition and have not tried implementing it.

    If this article helped you, please follow, like, and bookmark.

    Other recommended face/image posts of mine

    1. Calling big-vendor APIs/SDKs
      Face detection and recognition with Python and Face++
      Face detection and recognition with Python and ArcSoft
      Face detection and recognition with Python 3 and Baidu AI
    2. Using existing libraries
      Face detection and recognition with Python 3 and face_recognition
      Face detection with Python and dlib
      Face detection with Python and OpenCV
    3. Complete open-source projects
      Face recognition with Hikvision cameras
      Python face recognition, speech synthesis, and smart sign-in system
      Face recognition and speech recognition system
      Mask detection on a Raspberry Pi

    For the next while I am taking on face/image algorithm and IoT related graduation projects, competitions, etc. If interested, send me a private message or add QQ: 1639206518.


    1. Preface
    Since ArcSoft released its SDK, its free-to-use strategy has successfully lowered the cost of adopting face recognition for small and medium-sized businesses. However, for .NET developers ArcSoft does not provide a C# SDK that can be called directly (why does Java get one?!); instead it recommends wrapping the C++ version. Older C-family programmers generally started with C, but for younger developers pointer manipulation can be tricky, which makes applying the ArcSoft SDK harder, especially in multi-person scenarios where pointer arithmetic is needed to read out all the face data. This article implements a client/server multi-person real-time recognition example on .NET 5, in the hope that it is a useful reference for .NET developers using the ArcSoft SDK.

    2. Project structure
    1. Development environment: .NET 5 (release) or .NET Core 3.1 (for both client and server!)
    2. Client (WPF)
    3. Server (gRPC)
    4. Tools / platform: VS2019 / Win10

    3. Project dependencies (NuGet)
    The client and server package references are shown as screenshots in the original post.
    4. Main workflow
    Step 1. The client monitors the camera and extracts face features from the captured frames.
    Step 2. The client packs the face features into the request stream and sends them to the server.
    Step 3. The server reads the face features from the request stream one by one and compares them for identification.
    Step 4. The server writes the results into the response stream.
    Step 5. The client reads the response stream one by one and displays the results.

    5. Core code walkthrough

    1. Wrapping the C++ DLL
    It is advisable to wrap the ArcSoft DLLs into a .NET Core class library so that both the client and the server can call it.

    using System;
    using System.Runtime.InteropServices;
    
    namespace ArcSoft
    {
        public class Arcsoft_Face_3_0
        {
            public const string Dll_PATH = "libarcsoft_face_engine.dll";
    
            /// <summary>
            /// 获取激活文件信息。
            /// </summary>
            /// <param name="activeFileInfo">激活文件信息</param>
            /// <returns></returns>
            [DllImport(Dll_PATH, CallingConvention = CallingConvention.Cdecl)]
            public static extern int ASFGetActiveFileInfo(IntPtr activeFileInfo);
    
            /// <summary>
            /// 用于在线激活SDK。
            /// </summary>
            /// <param name="appId">官网获取的APPID</param>
            /// <param name="sdkKey">官网获取的SDKKEY</param>
            /// <returns></returns>
            [DllImport(Dll_PATH, CallingConvention = CallingConvention.Cdecl)]
            public static extern int ASFOnlineActivation(string appId, string sdkKey);
    
            /// <summary>
            /// 激活人脸识别SDK引擎函数,ASFActivation 接口与ASFOnlineActivation 功能一致,用于兼容老用户。
            /// </summary>
            /// <param name="appId">SDK对应的AppID</param>
            /// <param name="sdkKey">SDK对应的SDKKey</param>
            /// <returns>调用结果</returns>
            [DllImport(Dll_PATH, CallingConvention = CallingConvention.Cdecl)]
            public static extern int ASFActivation(string appId, string sdkKey);
    
            /// <summary>
            /// 初始化引擎
            /// </summary>
            /// <param name="detectMode">AF_DETECT_MODE_VIDEO 视频模式 | AF_DETECT_MODE_IMAGE 图片模式</param>
            /// <param name="detectFaceOrientPriority">检测脸部的角度优先值,推荐:ASF_OrientPriority.ASF_OP_0_HIGHER_EXT</param>
            /// <param name="detectFaceScaleVal">用于数值化表示的最小人脸尺寸</param>
            /// <param name="detectFaceMaxNum">最大需要检测的人脸个数</param>
            /// <param name="combinedMask">用户选择需要检测的功能组合,可单个或多个</param>
            /// <param name="hEngine">初始化返回的引擎handle</param>
            /// <returns>调用结果</returns>
            [DllImport(Dll_PATH, CallingConvention = CallingConvention.Cdecl)]
            public static extern int ASFInitEngine(uint detectMode, int detectFaceOrientPriority, int detectFaceScaleVal, int detectFaceMaxNum, int combinedMask, ref IntPtr hEngine);
    
            /// <summary>
            /// 人脸检测
            /// </summary>
            /// <param name="hEngine">引擎handle</param>
            /// <param name="width">图像宽度</param>
            /// <param name="height">图像高度</param>
            /// <param name="format">图像颜色空间</param>
            /// <param name="imgData">图像数据</param>
            /// <param name="detectedFaces">人脸检测结果</param>
            /// <param name="detectModel">预留字段,当前版本使用默认参数即可</param>
            /// <returns>调用结果</returns>
            [DllImport(Dll_PATH, CallingConvention = CallingConvention.Cdecl)]
            public static extern int ASFDetectFaces(IntPtr hEngine, int width, int height, int format, IntPtr imgData, IntPtr detectedFaces, int detectModel);
    
            /// <summary>
            /// 检测人脸信息。
            /// </summary>
            /// <param name="hEngine">引擎句柄</param>
            /// <param name="ImgData">图像数据</param>
            /// <param name="detectedFaces">检测到的人脸信息</param>
            /// <param name="detectModel">预留字段,当前版本使用默认参数即可</param>
            /// <returns>人脸信息</returns>
            [DllImport(Dll_PATH, CallingConvention = CallingConvention.Cdecl)]
            public static extern int ASFDetectFacesEx(IntPtr hEngine, IntPtr ImgData, out IntPtr detectedFaces, int detectModel);
    
            /// <summary>
            /// 单人脸特征提取
            /// </summary>
            /// <param name="hEngine">引擎handle</param>
            /// <param name="width">图像宽度,为4的倍数</param>
            /// <param name="height">图像高度,YUYV/I420/NV21/NV12格式为2的倍数;BGR24/GRAY/DEPTH_U16格式无限制</param>
            /// <param name="format">图像颜色空间</param>
            /// <param name="imgData">图像数据</param>
            /// <param name="faceInfo">单人脸信息(人脸框、人脸角度)</param>
            /// <param name="faceFeature">提取到的人脸特征信息</param>
            /// <returns>人脸特征信息</returns>
            [DllImport(Dll_PATH, CallingConvention = CallingConvention.Cdecl)]
            public static extern int ASFFaceFeatureExtract(IntPtr hEngine, int width, int height, int format, IntPtr imgData, IntPtr faceInfo, IntPtr faceFeature);
    
            /// <summary>
            /// 单人特征提取。
            /// </summary>
            /// <param name="hEngine">引擎句柄</param>
            /// <param name="imgData">图像数据</param>
            /// <param name="faceInfo">单人脸信息(人脸框、人脸角度)</param>
            /// <param name="feature">提取到的人脸特征信息</param>
            /// <returns></returns>
            [DllImport(Dll_PATH, CallingConvention = CallingConvention.Cdecl)]
            public static extern int ASFFaceFeatureExtractEx(IntPtr hEngine, IntPtr imgData, IntPtr faceInfo, IntPtr feature);
    
            /// <summary>
            /// 人脸特征比对,输出比对相似度。
            /// </summary>
            /// <param name="hEngine">引擎句柄</param>
            /// <param name="feature1">人脸特征</param>
            /// <param name="feature2">人脸特征</param>
            /// <param name="confidenceLevel">比对相似度</param>
            /// <param name="compareModel">选择人脸特征比对模型,默认为ASF_LIFE_PHOTO。
            /// 1. ASF_LIFE_PHOTO:用于生活照之间的特征比对,推荐阈值0.80;
            /// 2. ASF_ID_PHOTO:用于证件照或证件照和生活照之间的特征比对,推荐阈值0.82;</param>
            /// <returns>比对相似度</returns>
            [DllImport(Dll_PATH, CallingConvention = CallingConvention.Cdecl)]
            public static extern int ASFFaceFeatureCompare(IntPtr hEngine, IntPtr feature1, IntPtr feature2, ref float confidenceLevel, int compareModel);
    
            /// <summary>
            /// 设置RGB/IR活体阈值,若不设置内部默认RGB:0.5 IR:0.7。
            /// </summary>
            /// <param name="hEngine">引擎句柄</param>
            /// <param name="threshold">活体阈值,推荐RGB:0.5 IR:0.7</param>
            /// <returns>设置状态</returns>
            [DllImport(Dll_PATH, CallingConvention = CallingConvention.Cdecl)]
            public static extern int ASFSetLivenessParam(IntPtr hEngine, IntPtr threshold);
    
            /// <summary>
            /// 人脸属性检测
            /// </summary>
            /// <param name="hEngine">引擎句柄</param>
            /// <param name="width">图片宽度,为4的倍数</param>
            /// <param name="height">图片高度,YUYV/I420/NV21/NV12格式为2的倍数;BGR24格式无限制;</param>
            /// <param name="format">支持YUYV/I420/NV21/NV12/BGR24</param>
            /// <param name="imgData">图像数据</param>
            /// <param name="detectedFaces">多人脸信息</param>
            /// <param name="combinedMask">1.检测的属性(ASF_AGE、ASF_GENDER、 ASF_FACE3DANGLE、ASF_LIVENESS),支持多选
            /// 2.检测的属性须在引擎初始化接口的combinedMask参数中启用</param>
            /// <returns>检测状态</returns>
            [DllImport(Dll_PATH, CallingConvention = CallingConvention.Cdecl)]
            public static extern int ASFProcess(IntPtr hEngine, int width, int height, int format, IntPtr imgData, IntPtr detectedFaces, int combinedMask);
    
            /// <summary>
            /// 人脸信息检测(年龄/性别/人脸3D角度),最多支持4张人脸信息检测,超过部分返回未知(活体仅支持单张人脸检测,超出返回未知),接口不支持IR图像检测。
            /// </summary>
            /// <param name="hEngine">引擎句柄</param>
            /// <param name="imgData">图像数据</param>
            /// <param name="detectedFaces">多人脸信息</param>
            /// <param name="combinedMask">1.检测的属性(ASF_AGE、ASF_GENDER、 ASF_FACE3DANGLE、ASF_LIVENESS),支持多选
            /// 2.检测的属性须在引擎初始化接口的combinedMask参数中启用</param>
            /// <returns>检测状态</returns>
            [DllImport(Dll_PATH, CallingConvention = CallingConvention.Cdecl)]
            public static extern int ASFProcessEx(IntPtr hEngine, IntPtr imgData, IntPtr detectedFaces, int combinedMask);
    
            /// <summary>
            /// 获取年龄信息
            /// </summary>
            /// <param name="hEngine">引擎handle</param>
            /// <param name="ageInfo">检测到的年龄信息</param>
            /// <returns>调用结果</returns>
            [DllImport(Dll_PATH, CallingConvention = CallingConvention.Cdecl)]
            public static extern int ASFGetAge(IntPtr hEngine, IntPtr ageInfo);
    
            /// <summary>
            /// 获取性别信息
            /// </summary>
            /// <param name="hEngine">引擎handle</param>
            /// <param name="genderInfo">检测到的性别信息</param>
            /// <returns>调用结果</returns>
            [DllImport(Dll_PATH, CallingConvention = CallingConvention.Cdecl)]
            public static extern int ASFGetGender(IntPtr hEngine, IntPtr genderInfo);
    
            /// <summary>
            /// 获取3D角度信息
            /// </summary>
            /// <param name="hEngine">引擎handle</param>
            /// <param name="p3DAngleInfo">检测到脸部3D角度信息</param>
            /// <returns>调用结果</returns>
            [DllImport(Dll_PATH, CallingConvention = CallingConvention.Cdecl)]
            public static extern int ASFGetFace3DAngle(IntPtr hEngine, IntPtr p3DAngleInfo);
    
            /// <summary>
            /// 获取RGB活体信息。
            /// </summary>
            /// <param name="hEngine">引擎句柄</param>
            /// <param name="livenessInfo">检测到的活体信息</param>
            /// <returns>调用结果</returns>
            [DllImport(Dll_PATH, CallingConvention = CallingConvention.Cdecl)]
            public static extern int ASFGetLivenessScore(IntPtr hEngine, IntPtr livenessInfo);
    
            /// <summary>
            /// 该接口仅支持单人脸IR 活体检测,超出返回未知。
            /// </summary>
            /// <param name="hEngine">引擎句柄</param>
            /// <param name="width">图片宽度,为4的倍数</param>
            /// <param name="height">图片高度</param>
            /// <param name="format">图像颜色格式</param>
            /// <param name="imgData">图像数据</param>
            /// <param name="detectedFaces">多人脸信息</param>
            /// <param name="combinedMask">目前仅支持ASF_IR_LIVENESS</param>
            /// <returns>调用结果</returns>
            [DllImport(Dll_PATH, CallingConvention = CallingConvention.Cdecl)]
            public static extern int ASFProcess_IR(IntPtr hEngine, int width, int height, int format, IntPtr imgData, IntPtr detectedFaces, int combinedMask);
    
            /// <summary>
            /// 该接口仅支持单人脸IR 活体检测,超出返回未知。
            /// </summary>
            /// <param name="hEngine">引擎句柄</param>
            /// <param name="imgData">图像数据</param>
            /// <param name="detectedFaces">多人脸信息</param>
            /// <param name="combinedMask">目前仅支持ASF_IR_LIVENESS</param>
            /// <returns>调用结果</returns>
            [DllImport(Dll_PATH, CallingConvention = CallingConvention.Cdecl)]
            public static extern int ASFProcessEx_IR(IntPtr hEngine, IntPtr imgData, IntPtr detectedFaces, int combinedMask);
    
            /// <summary>
            /// 获取IR活体信息。
            /// </summary>
            /// <param name="hEngine">引擎句柄</param>
            /// <param name="livenessInfo">检测到的IR活体信息</param>
            /// <returns></returns>
            [DllImport(Dll_PATH, CallingConvention = CallingConvention.Cdecl)]
            public static extern int ASFGetLivenessScore_IR(IntPtr hEngine, IntPtr livenessInfo);
    
            /// <summary>
            /// 获取SDK版本信息。
            /// </summary>
            /// <returns>成功返回版本信息,失败返回Null。</returns>
            [DllImport(Dll_PATH, CallingConvention = CallingConvention.Cdecl)]
            public static extern ASF_VERSION ASFGetVersion();
    
            /// <summary>
            /// 销毁SDK引擎。
            /// </summary>
            /// <param name="pEngine">引擎handle</param>
            /// <returns>调用结果</returns>
            [DllImport(Dll_PATH, CallingConvention = CallingConvention.Cdecl)]
            public static extern int ASFUninitEngine(IntPtr pEngine);
        }
    
    /* 参数枚举 */
        /// <summary>
        /// 检测模式
        /// </summary>
        public struct ASF_DetectMode
        {
            /// <summary>
            /// Video模式,一般用于多帧连续检测
            /// </summary>
            public const uint ASF_DETECT_MODE_VIDEO = 0x00000000;
    
            /// <summary>
            /// Image模式,一般用于静态图的单次检测
            /// </summary>
            public const uint ASF_DETECT_MODE_IMAGE = 0xFFFFFFFF;
        }
    
        /// <summary>
        /// 人脸检测方向
        /// </summary>
        public struct ArcSoftFace_OrientPriority
        {
            /// <summary>
            /// 常规预览下正方向
            /// </summary>
            public const int ASF_OP_0_ONLY = 0x1;
    
            /// <summary>
            /// 基于0°逆时针旋转90°的方向
            /// </summary>
            public const int ASF_OP_90_ONLY = 0x2;
    
            /// <summary>
            /// 基于0°逆时针旋转270°的方向
            /// </summary>
            public const int ASF_OP_270_ONLY = 0x3;
    
            /// <summary>
            /// 基于0°旋转180°的方向(逆时针、顺时针效果一样)
            /// </summary>
            public const int ASF_OP_180_ONLY = 0x4;
    
            /// <summary>
            /// 全角度
            /// </summary>
            public const int ASF_OP_0_HIGHER_EXT = 0x5;
        }
    
        /// <summary>
        /// 检测到的人脸角度
        /// </summary>
        public struct ArcSoftFace_OrientCode
        {
            public const int ASF_OC_0 = 0x1; // 0度
            public const int ASF_OC_90 = 0x2; // 90度
            public const int ASF_OC_270 = 0x3; // 270度
            public const int ASF_OC_180 = 0x4; // 180度
            public const int ASF_OC_30 = 0x5; // 30度
            public const int ASF_OC_60 = 0x6; // 60度
            public const int ASF_OC_120 = 0x7; // 120度
            public const int ASF_OC_150 = 0x8; // 150度
            public const int ASF_OC_210 = 0x9; // 210度
            public const int ASF_OC_240 = 0xa; // 240度
            public const int ASF_OC_300 = 0xb; // 300度
            public const int ASF_OC_330 = 0xc; // 330度
        }
    
        /// <summary>
        /// 检测模型
        /// </summary>
        public struct ASF_DetectModel
        {
            public const int ASF_DETECT_MODEL_RGB = 0x1; //RGB图像检测模型
            //预留扩展其他检测模型
        }
    
        /// <summary>
        /// 人脸比对可选的模型
        /// </summary>
        public struct ASF_CompareModel
        {
            public const int ASF_LIFE_PHOTO = 0x1;  //用于生活照之间的特征比对,推荐阈值0.80
            public const int ASF_ID_PHOTO = 0x2;    //用于证件照或生活照与证件照之间的特征比对,推荐阈值0.82
        }
    
        /// <summary>
        /// 支持的颜色空间颜色格式
        /// </summary>
        public struct ASF_ImagePixelFormat
        {
            //8-bit Y 通道,8-bit 2x2 采样 V 与 U 分量交织通道
            public const int ASVL_PAF_NV21 = 2050;
            //8-bit Y 通道,8-bit 2x2 采样 U 与 V 分量交织通道
            public const int ASVL_PAF_NV12 = 2049;
            //RGB 分量交织,按 B, G, R, B 字节序排布
            public const int ASVL_PAF_RGB24_B8G8R8 = 513;
            //8-bit Y 通道, 8-bit 2x2 采样 U 通道, 8-bit 2x2 采样 V 通道
            public const int ASVL_PAF_I420 = 1537;
            //YUV 分量交织, V 与 U 分量 2x1 采样,按 Y0, U0, Y1, V0 字节序排布
            public const int ASVL_PAF_YUYV = 1289;
            //8-bit IR图像
            public const int ASVL_PAF_GRAY = 1793;
            //16-bit IR图像,ASVL_PAF_DEPTH_U16 只是预留。
            public const int ASVL_PAF_DEPTH_U16 = 3074;
        }
    
        /// <summary>
        /// 算法功能常量值
        /// </summary>
        public struct FaceEngineMask
        {
            //人脸检测
            public const int ASF_FACE_DETECT = 0x00000001;
            //人脸特征
            public const int ASF_FACERECOGNITION = 0x00000004;
            //年龄
            public const int ASF_AGE = 0x00000008;
            //性别
            public const int ASF_GENDER = 0x00000010;
            //3D角度
            public const int ASF_FACE3DANGLE = 0x00000020;
            //RGB活体
            public const int ASF_LIVENESS = 0x00000080;
            //IR活体
            public const int ASF_IR_LIVENESS = 0x00000400;
        }
    
    /* 数据结构 */
        /// <summary>
        /// SDK版本信息。
        /// </summary>
        [StructLayout(LayoutKind.Sequential, CharSet = CharSet.Ansi)]
        public struct ASF_VERSION
        {
            //版本号
            public IntPtr Version;
            //构建日期
            public IntPtr BuildDate;
            //版权说明
            public IntPtr CopyRight;
        }
    
        /// <summary>
        /// 激活文件信息。
        /// </summary>
        [StructLayout(LayoutKind.Sequential, CharSet = CharSet.Ansi)]
        public struct ASF_ActiveFileInfo
        {
            /// <summary>
            /// 开始时间
            /// </summary>
            public IntPtr startTime;
    
            /// <summary>
            /// 截止时间
            /// </summary>
            public IntPtr endTime;
    
            /// <summary>
            /// 平台
            /// </summary>
            public IntPtr platform;
    
            /// <summary>
            /// sdk类型
            /// </summary>
            public IntPtr sdkType;
    
            /// <summary>
            /// APPID
            /// </summary>
            public IntPtr appId;
    
            /// <summary>
            /// SDKKEY
            /// </summary>
            public IntPtr sdkKey;
    
            /// <summary>
            /// SDK版本号
            /// </summary>
            public IntPtr sdkVersion;
    
            /// <summary>
            /// 激活文件版本号
            /// </summary>
            public IntPtr fileVersion;
        }
    
        /// <summary>
        /// 人脸框信息。
        /// </summary>
        public struct MRECT
        {
            public int left;
            public int top;
            public int right;
            public int bottom;
        }
    
        /// <summary>
        /// 单人脸信息。
        /// </summary>
        public struct ASF_SingleFaceInfo
        {
            // 人脸框
            public MRECT faceRect;
            //人脸角度
            public int faceOrient;
        }
    
        /// <summary>
        /// 多人脸信息。
        /// </summary>
        public struct ASF_MultiFaceInfo
        {
            // 人脸框数组
            public IntPtr faceRects;
            // 人脸角度数组
            public IntPtr faceOrients;
            // 检测到的人脸数
            public int faceNum;
            // 一张人脸从进入画面直到离开画面,faceID不变。在VIDEO模式下有效,IMAGE模式下为空。
            public IntPtr faceID;
        }
    
        /// <summary>
        /// 人脸特征。
        /// </summary>
        public struct ASF_FaceFeature
        {
            // 人脸特征
            public IntPtr feature;
            // 人脸特征长度
            public int featureSize;
        }
    
        /// <summary>
        /// 年龄信息。
        /// </summary>
        public struct ASF_AgeInfo
        {
            //0:未知; >0:年龄
            IntPtr ageArray;
            //检测的人脸数
            int num;
        }
    
        /// <summary>
        /// 性别信息。
        /// </summary>
        public struct ASF_GenderInfo
        {
            //0:男性; 1:女性; -1:未知
            IntPtr genderArray;
            //检测的人脸数
            int num;
        }
    
        /// <summary>
        /// 3D角度信息。
        /// </summary>
        public struct ASF_Face3DAngle
        {
            //横滚角
            public IntPtr roll;
            //偏航角
            public IntPtr yaw;
            //俯仰角
            public IntPtr pitch;
            //0:正常; 非0:异常
            public IntPtr status;
            //检测的人脸个数
            public IntPtr num;
        }
    
        /// <summary>
        /// 活体置信度。
        /// </summary>
        public struct ASF_LivenessThreshold
        {
            // BGR活体检测阈值设置,默认值0.5
            float thresholdmodel_BGR;
            // IR活体检测阈值设置,默认值0.7
            float thresholdmodel_IR;
        }
    
        /// <summary>
        /// 活体信息。
        /// </summary>
        public struct ASF_LivenessInfo
        {
            //0:非真人; 1:真人;-1:不确定; -2:传入人脸数 > 1;-3: 人脸过小;-4: 角度过大;-5: 人脸超出边界
            public IntPtr isLive;
            //检测的人脸个数
            public int num;
        }
    
        /// <summary>
        /// 图像数据信息。
        /// </summary>
        public struct ASVLOFFSCREEN
        {
            public uint u32PixelArrayFormat;
            public int i32Width;
            public int i32Height;
            [MarshalAs(UnmanagedType.ByValArray, SizeConst = 4, ArraySubType = UnmanagedType.SysUInt)]
            public IntPtr[] ppu8Plane;
            [MarshalAs(UnmanagedType.ByValArray, SizeConst = 4, ArraySubType = UnmanagedType.I4)]
            public int[] pi32Pitch;
        }
    }
    
    using ArcSoft.Utilities;
    using System;
    using System.Collections.Concurrent;
    using System.Collections.Generic;
    using System.Drawing;
    using System.Drawing.Imaging;
    using System.IO;
    using System.Runtime.InteropServices;
    
    
    namespace ArcSoft
    {
        public class Arcsoft_Face_Action : Arcsoft_Face_3_0, IEnginePoor
        {
            public string AppID { get; }
            public string AppKey { get; }
            public int FaceEngineNums { get; set; }
            public int IDEngineNums { get; set; }
            public int AIEngineNums { get; set; }
            public ConcurrentQueue<IntPtr> FaceEnginePoor { get; set; }
            public ConcurrentQueue<IntPtr> IDEnginePoor { get; set; }
            public ConcurrentQueue<IntPtr> AIEnginePoor { get; set; }
    
            public Arcsoft_Face_Action()
            {
    
            }
    
            public Arcsoft_Face_Action(string appId, string appKey)
            {
                int retCode = -1;
                try
                {
                    retCode = ASFOnlineActivation(appId, appKey);
                    if (retCode == 0)
                    {
                    }
                    else if (retCode == 90114)
                    {
                    }
                    else
                    {
                        throw new Exception("SDK激活失败,错误码:" + retCode);
                    }
                    AppID = appId;
                    AppKey = appKey;
                }
                catch (Exception ex)
                {
                    throw new Exception($"Arcsoft_Face_Action 初始化失败,异常:{ex.Message}");
                }
            }
    
            public IntPtr InitASFEnginePtr(int faceMask, bool isImageMode = true)
            {
                IntPtr pEngines = IntPtr.Zero;
                int retCode = -1;
                try
                {
                    if (isImageMode)
                    {
                        retCode = ASFInitEngine(ASF_DetectMode.ASF_DETECT_MODE_IMAGE, ArcSoftFace_OrientPriority.ASF_OP_0_HIGHER_EXT, ParmsBestPractice.detectFaceScaleVal_Image, ParmsBestPractice.detectFaceMaxNum, faceMask, ref pEngines);
                    }
                    else
                    {
                        retCode = ASFInitEngine(ASF_DetectMode.ASF_DETECT_MODE_VIDEO, ArcSoftFace_OrientPriority.ASF_OP_0_HIGHER_EXT, ParmsBestPractice.detectFaceScaleVal_Video, ParmsBestPractice.detectFaceMaxNum, faceMask, ref pEngines);
                    }
                    if (retCode == 0)
                    {
                    }
                    else
                    {
                        throw new Exception("SDK初始化失败,错误码:" + retCode);
                    }
                    return pEngines;
                }
                catch (Exception ex)
                {
                    throw new Exception("ASFFunctions->ASFFunctions, generate exception as: " + ex);
                }
            }
    
            public static ASF_MultiFaceInfo DetectMultipleFace(IntPtr pEngine, ImageInfo imageInfo)
            {
                ASF_MultiFaceInfo multiFaceInfo = new ASF_MultiFaceInfo();
                IntPtr pMultiFaceInfo = Marshal.AllocHGlobal(Marshal.SizeOf<ASF_MultiFaceInfo>());
                try
                {
                    int retCode = ASFDetectFaces(pEngine, imageInfo.width, imageInfo.height, imageInfo.format, imageInfo.imgData, pMultiFaceInfo, ASF_DetectModel.ASF_DETECT_MODEL_RGB);
                    multiFaceInfo = Marshal.PtrToStructure<ASF_MultiFaceInfo>(pMultiFaceInfo);
                    return multiFaceInfo;
                }
                catch
                {
                    return multiFaceInfo;
                }
                finally
                {
                    Marshal.FreeHGlobal(pMultiFaceInfo);
                }
            }
    
            public static List<MarkFaceInfor> DetectMultipleFaceAllInformation(IntPtr pEngine, ImageInfo imageInfo, bool extractFaceData = false)
            {
                List<MarkFaceInfor> infors = new List<MarkFaceInfor>();
                ASF_MultiFaceInfo multiFaceInfo = new ASF_MultiFaceInfo();
                IntPtr pMultiFaceInfo = Marshal.AllocHGlobal(Marshal.SizeOf<ASF_MultiFaceInfo>());
                try
                {
                    int retCode = ASFDetectFaces(pEngine, imageInfo.width, imageInfo.height, imageInfo.format, imageInfo.imgData, pMultiFaceInfo, ASF_DetectModel.ASF_DETECT_MODEL_RGB);
                    multiFaceInfo = Marshal.PtrToStructure<ASF_MultiFaceInfo>(pMultiFaceInfo);
                    for (int faceIndex = 0; faceIndex < multiFaceInfo.faceNum; faceIndex++)
                    {
                        ASF_SingleFaceInfo singleFaceInfo = new ASF_SingleFaceInfo();
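                        // Walk the unmanaged arrays by pointer offset: face i's rectangle lives at faceRects + i * sizeof(MRECT)
                        // and its orientation at faceOrients + i * sizeof(int) -- the "pointer movement" mentioned in the preface.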
                        singleFaceInfo.faceRect = Marshal.PtrToStructure<MRECT>(multiFaceInfo.faceRects + Marshal.SizeOf<MRECT>() * faceIndex);
                        singleFaceInfo.faceOrient = Marshal.PtrToStructure<int>(multiFaceInfo.faceOrients + Marshal.SizeOf<int>() * faceIndex);
                        MarkFaceInfor markFaceInfor = new MarkFaceInfor(singleFaceInfo.faceRect.left, singleFaceInfo.faceRect.top, singleFaceInfo.faceRect.right - singleFaceInfo.faceRect.left, singleFaceInfo.faceRect.bottom - singleFaceInfo.faceRect.top);
                        markFaceInfor.faceID = Marshal.PtrToStructure<int>(multiFaceInfo.faceID + Marshal.SizeOf<int>() * faceIndex);
                        if (extractFaceData)
                        {
                            markFaceInfor.faceFeatureData = ExtractSingleFaceFeature(pEngine, imageInfo, singleFaceInfo.faceRect, singleFaceInfo.faceOrient);
                        }
                        infors.Add(markFaceInfor);
                    }
                    return infors;
                }
                catch (Exception ex)
                {
                    throw new Exception($"Arcsoft_Face_Action-->DetectMultipleFaceAllInformation 异常,异常信息:{ex.Message}");
                }
                finally
                {
                    Marshal.FreeHGlobal(pMultiFaceInfo);
                }
            }
    
            public static bool ExtractFeaturesFromMemoryStream(Stream ms, IntPtr engine, out List<byte[]> facesFeature, out string errorString)
            {
                facesFeature = new List<byte[]>();
                errorString = null;
                try
                {
                    ImageInfo imageInfo = new ImageInfo();
                    ASF_MultiFaceInfo facesInfo = new ASF_MultiFaceInfo();
                    imageInfo = ImageHelper.ReadBMPFormStream(ms);
                    facesInfo = DetectMultipleFace(engine, imageInfo);
                    if (facesInfo.faceNum == 0)
                    {
                        errorString = "检测到人脸数量为0,请免冠正对镜头重新识别!";
                        return false;
                    }
                    if (facesInfo.faceNum > 1)
                    {
                        errorString = "检测到多张人脸,请多余人员退出识别区,再重新识别!";
                        return false;
                    }
                    facesFeature = ExtractAllFeatures(engine, imageInfo, facesInfo);
                    return true;
                }
                catch
                {
                    errorString = "算法错误,请检查输入后重试!";
                    return false;
                }
                finally
                {
                    GC.Collect();
                }
            }
    
            private static byte[] ExtractSingleFaceFeature(IntPtr pEngine, ImageInfo imageInfo, MRECT rect, int faceOrient)
            {
                var singleFaceInfo = new ASF_SingleFaceInfo();
                singleFaceInfo.faceRect = rect;
                singleFaceInfo.faceOrient = faceOrient;
                IntPtr pSingleFaceInfo = Marshal.AllocHGlobal(Marshal.SizeOf<ASF_SingleFaceInfo>());
                Marshal.StructureToPtr(singleFaceInfo, pSingleFaceInfo, false);
                IntPtr pFaceFeature = Marshal.AllocHGlobal(Marshal.SizeOf<ASF_FaceFeature>());
                try
                {
                    int retCode = ASFFaceFeatureExtract(pEngine, imageInfo.width, imageInfo.height, imageInfo.format, imageInfo.imgData, pSingleFaceInfo, pFaceFeature);
                    if (retCode == 0)
                    {
                        ASF_FaceFeature faceFeature = Marshal.PtrToStructure<ASF_FaceFeature>(pFaceFeature);
                        byte[] feature = new byte[faceFeature.featureSize];
                        Marshal.Copy(faceFeature.feature, feature, 0, faceFeature.featureSize);
                        return feature;
                    }
                    if (retCode == 81925)
                    {
                        throw new Exception("人脸特征检测结果置信度低!");
                    }
                    else
                    {
                        return null;
                    }
                }
                catch (Exception ex)
                {
                    throw new Exception($"Arcsoft_Face_Action-->ExtractSingleFaceFeature exception: {ex.Message}");
                }
                finally
                {
                    Marshal.FreeHGlobal(pSingleFaceInfo);
                    Marshal.FreeHGlobal(pFaceFeature);
                }
            }
    
            public static List<byte[]> ExtractAllFeatures(IntPtr pEngine, ImageInfo imageInfo, ASF_MultiFaceInfo multiFaceInfo)
            {
                try
                {
                    ASF_SingleFaceInfo singleFaceInfo = new ASF_SingleFaceInfo();
                    List<byte[]> results = new List<byte[]>();
                    for (int index = 0; index < multiFaceInfo.faceNum; index++)
                    {
                        singleFaceInfo.faceRect = Marshal.PtrToStructure<MRECT>(multiFaceInfo.faceRects + Marshal.SizeOf<MRECT>() * index);
                        singleFaceInfo.faceOrient = Marshal.PtrToStructure<int>(multiFaceInfo.faceOrients + Marshal.SizeOf<int>() * index);
                        byte[] singleFaceFeature = ExtractSingleFaceFeature(pEngine, imageInfo, singleFaceInfo.faceRect, singleFaceInfo.faceOrient);
                        if (singleFaceFeature != null)
                        {
                            results.Add(singleFaceFeature);
                        }
                    }
                    return results;
                }
                catch (Exception ex)
                {
                    throw new Exception("Arcsoft_Face_Action-->ExtractAllFeatures exception " + ex);
                }
                finally
                {
                    Marshal.FreeHGlobal(imageInfo.imgData);
                }
            }
    
            public static IntPtr GetBMP_Ptr(Bitmap image, out int width, out int height, out int pitch)
            {
                IntPtr imageDataPtr = IntPtr.Zero;
                try
                {
                    width = -1;
                    height = -1;
                    pitch = -1;
                    byte[] imageData = ReadBMP(image, ref width, ref height, ref pitch);
                    imageDataPtr = Marshal.AllocHGlobal(imageData.Length);
                    Marshal.Copy(imageData, 0, imageDataPtr, imageData.Length);
                    return imageDataPtr;
                }
                catch (Exception ex)
                {
                    Marshal.FreeHGlobal(imageDataPtr);
                    throw new Exception($"Arcsoft_Face_Action-->GetBMP_Ptr exception as:{ex.Message}");
                }
            }
    
            public static byte[] ReadBMP(Bitmap image, ref int width, ref int height, ref int pitch)
            {
                //将Bitmap锁定到系统内存中,获得BitmapData
                BitmapData data = image.LockBits(new Rectangle(0, 0, image.Width, image.Height), ImageLockMode.ReadOnly, PixelFormat.Format24bppRgb);
                //位图中第一个像素数据的地址。它也可以看成是位图中的第一个扫描行
                IntPtr ptr = data.Scan0;
                //定义数组长度
                int soureBitArrayLength = data.Height * Math.Abs(data.Stride);
                byte[] sourceBitArray = new byte[soureBitArrayLength];
                //将bitmap中的内容拷贝到ptr_bgr数组中
                Marshal.Copy(ptr, sourceBitArray, 0, soureBitArrayLength); width = data.Width;
                height = data.Height;
                pitch = Math.Abs(data.Stride);
                int line = width * 3;
                int bgr_len = line * height;
                byte[] destBitArray = new byte[bgr_len];
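                // Copy row by row, dropping any stride padding so the buffer becomes tightly packed BGR24 (pitch == width * 3), the layout the SDK expects.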
                for (int i = 0; i < height; ++i)
                {
                    Array.Copy(sourceBitArray, i * pitch, destBitArray, i * line, line);
                }
                pitch = line;
                image.UnlockBits(data);
                return destBitArray;
            }
    
            public static ASVLOFFSCREEN ChangeMat2ASVLOFFSCREEN(Bitmap image)
            {
                int width = -1;
                int height = -1;
                int pitch = -1;
                IntPtr imagePtr = GetBMP_Ptr(image, out width, out height, out pitch);
                ASVLOFFSCREEN offInput = new ASVLOFFSCREEN();
                offInput.u32PixelArrayFormat = 513;
                offInput.ppu8Plane = new IntPtr[4];
                offInput.ppu8Plane[0] = imagePtr;
                offInput.i32Width = width;
                offInput.i32Height = height;
                offInput.pi32Pitch = new int[4];
                offInput.pi32Pitch[0] = pitch;
                return offInput;
            }
    
            public static IntPtr PutFeatureByteIntoFeatureIntPtr(byte[] data)
            {
                try
                {
                    if (data.Length > 0)
                    {
                        ASF_FaceFeature localFeature = new ASF_FaceFeature();
                        localFeature.featureSize = data.Length;
                        localFeature.feature = Marshal.AllocHGlobal(localFeature.featureSize);
                        Marshal.Copy(data, 0, localFeature.feature, data.Length);
                        IntPtr intPtrFeature = Marshal.AllocHGlobal(Marshal.SizeOf<ASF_FaceFeature>());
                        Marshal.StructureToPtr(localFeature, intPtrFeature, false);
                        return intPtrFeature;
                    }
                    else
                    {
                        return IntPtr.Zero;
                    }
                }
                catch
                {
                    return IntPtr.Zero;
                }
            }
    
            private int InitEnginePool()
            {
                try
                {
                    for (int index = 0; index < FaceEngineNums; index++)
                    {
                        IntPtr enginePtr = IntPtr.Zero;
                        Arcsoft_Face_Action faceAction = new Arcsoft_Face_Action(AppID, AppKey);
                        enginePtr = faceAction.InitASFEnginePtr(ParmsBestPractice.faceBaseMask);
                        PutEngine(FaceEnginePoor, enginePtr);
                        Console.WriteLine($"FaceEnginePoor add {enginePtr}");
                    }
                    for (int index = 0; index < IDEngineNums; index++)
                    {
                        IntPtr enginePtr = IntPtr.Zero;
                        Arcsoft_Face_Action faceAction = new Arcsoft_Face_Action(AppID, AppKey);
                        enginePtr = faceAction.InitASFEnginePtr(ParmsBestPractice.faceBaseMask);
                        PutEngine(IDEnginePoor, enginePtr);
                        Console.WriteLine($"IDEnginePoor add {enginePtr}");
                    }
                    for (int index = 0; index < AIEngineNums; index++)
                    {
                        IntPtr enginePtr = IntPtr.Zero;
                        int aiMask = FaceEngineMask.ASF_AGE | FaceEngineMask.ASF_GENDER | FaceEngineMask.ASF_FACE3DANGLE | FaceEngineMask.ASF_LIVENESS;
                        Arcsoft_Face_Action faceAction = new Arcsoft_Face_Action(AppID, AppKey);
                        enginePtr = faceAction.InitASFEnginePtr(ParmsBestPractice.faceBaseMask | aiMask);
                        PutEngine(AIEnginePoor, enginePtr);
                        Console.WriteLine($"AIEnginePoor add {enginePtr}");
                    }
                    return 0;
                }
                catch (Exception ex)
                {
                    throw new Exception($"InitEnginePool--> exception {ex}");
                }
            }
    
            public IntPtr GetEngine(ConcurrentQueue<IntPtr> queue)
            {
                IntPtr item = IntPtr.Zero;
                if (queue.TryDequeue(out item))
                {
                    return item;
                }
                else
                {
                    return IntPtr.Zero;
                }
            }
    
            public void PutEngine(ConcurrentQueue<IntPtr> queue, IntPtr item)
            {
                if (item != IntPtr.Zero)
                {
                    queue.Enqueue(item);
                }
            }
    
            public void Arcsoft_EnginePool(int faceEngineNums = 1, int idEngineNums = 0, int aiEngineNums = 0)
            {
                FaceEnginePoor = new ConcurrentQueue<IntPtr>();
                IDEnginePoor = new ConcurrentQueue<IntPtr>();
                AIEnginePoor = new ConcurrentQueue<IntPtr>();
                try
                {
                    FaceEngineNums = faceEngineNums;
                    IDEngineNums = idEngineNums;
                    AIEngineNums = aiEngineNums;
                    int status = InitEnginePool();
                    if (status != 0)
                    {
                        throw new Exception("引擎池初始化失败!");
                    }
                }
                catch (Exception ex)
                {
                    throw new Exception($"ArcSoft_EnginePool-->ArcSoft_EnginePool exception as: {ex}");
                }
            }
        }
    
        public struct ParmsBestPractice
        {
            //VIDEO模式取值范围[2,32],推荐值为16
            public const int detectFaceScaleVal_Video = 16;
    
        //IMAGE模式取值范围[2,32],推荐值为30
            public const int detectFaceScaleVal_Image = 32;
    
            //最大需要检测的人脸个数,取值范围[1,50]
            public const int detectFaceMaxNum = 50;
    
            //人脸识别最基本功能。
            public const int faceBaseMask = FaceEngineMask.ASF_FACE_DETECT | FaceEngineMask.ASF_FACERECOGNITION;
    
            //RGB活体检测
            public const int faceLivingMask = FaceEngineMask.ASF_FACE_DETECT | FaceEngineMask.ASF_FACERECOGNITION | FaceEngineMask.ASF_LIVENESS;
    
            //process可传入属性组合只有ASF_AGE 、ASF_LIVENESS 、ASF_AGE 和 ASF_LIVENESS
            public const int processSupportMask = FaceEngineMask.ASF_AGE | FaceEngineMask.ASF_GENDER | FaceEngineMask.ASF_FACE3DANGLE | FaceEngineMask.ASF_LIVENESS;
        }
    }
    

    2. Client-side multi-face feature extraction and streaming

    private async void DetectFaceTick(object sender, ElapsedEventArgs e)
            {
                Mat currentMat;
                lock (_obj)
                {
                    currentMat = mat;
                }
                List<MarkFaceInfor> markFaceInfors = ExtractFaceData(currentMat, _enginePool);
                if (markFaceInfors == null)
                {
                    return;
                }
                if (markFaceInfors.Count==0)
                {
                    return;
                }
                while(!_complete)
                {
                    Task.Delay(10).Wait();
                }
                _complete = false;
                var regFace = _client.RecongnizationByFace();
    
                //定义接收响应逻辑                       
                var regFaceResponseTask = Task.Run(async () =>
                {
                    WriteReceiveMsgAsync(string.Format("当前接收时间{0}", DateTime.Now.ToString("HH:mm:ss:fff")));
                    await foreach (var resp in regFace.ResponseStream.ReadAllAsync())
                    {                  
                        WriteReceiveMsgAsync($"姓名:{resp.PersonName},相似度:{resp.ConfidenceLevel}");
                    }
                });
    
                //开始调用           
                WriteSendMsgAsync(string.Format("开始发送时间{0}", DateTime.Now.ToString("HH:mm:ss:fff")));
                for (int index = 0; index < markFaceInfors.Count; index++)
                {
                    WriteSendMsgAsync($"发送编号:{index}");
                    await regFace.RequestStream.WriteAsync(new FaceRequest()
                    {
                        FaceFeature = Google.Protobuf.ByteString.CopyFrom(markFaceInfors[index].faceFeatureData)
                    });
                }           
                await regFace.RequestStream.CompleteAsync();
                
                //等待结果          
                await regFaceResponseTask;
                _complete = true;
            }
    

    3. Server-side multi-face matching and response

    public override async Task RecongnizationByFace(IAsyncStreamReader<FaceRequest> requestStream, IServerStreamWriter<FaceReply> responseStream, ServerCallContext context)
            {
                var faceQueue = new Queue<Google.Protobuf.ByteString>();
                IntPtr featurePoint = IntPtr.Zero;
                IntPtr engine = FaceProcess.GetEngine(FaceProcess.FaceEnginePoor);
                FaceReply faceReply = new FaceReply();
    
                while (await requestStream.MoveNext())
                {
                    //识别业务
                    byte[] featureByte = requestStream.Current.FaceFeature.ToByteArray();
                    if (featureByte.Length != 1032)
                    {
                        continue;
                    }
                    featurePoint = Arcsoft_Face_Action.PutFeatureByteIntoFeatureIntPtr(featureByte);
                    float maxScore = 0f;
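                    // Compare the incoming feature with every gallery entry, keeping the best score;
                    // a name is reported only when the score reaches the _faceMix threshold.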
    
                    while (engine == IntPtr.Zero)
                    {
                        Task.Delay(10).Wait();
                        engine = FaceProcess.GetEngine(FaceProcess.IDEnginePoor);
                    }
                    foreach (var f in StaticDataForTestUse.dbFaceInfor)
                    {
                        float result = 0;
                        int compareStatus = Arcsoft_Face_3_0.ASFFaceFeatureCompare(engine, featurePoint, f.Key, ref result, 1);
                        if (compareStatus == 0)
                        {
                            if (result >= maxScore)
                            {
                                maxScore = result;
                            }
                            if (result >= _faceMix && result >= maxScore)
                            {
                                faceReply.PersonName = f.Value;
                                faceReply.ConfidenceLevel = result;
                            }
                        }
                        else
                        {
                            faceReply.PersonName = $"对比异常 error code={compareStatus}";
                            faceReply.ConfidenceLevel = result;
                        }
                    }
                    if (maxScore < _faceMix)
                    {
                        faceReply.PersonName = $"未找到匹配者";
                        faceReply.ConfidenceLevel = maxScore;
                    }
                    Marshal.FreeHGlobal(featurePoint);
                    await responseStream.WriteAsync(faceReply);
                }
                FaceProcess.PutEngine(FaceProcess.FaceEnginePoor, engine);
            }
    

    6. Demo

    1. Client: (screenshot in the original post)
    2. Server: (screenshot in the original post)

    7. Source code and usage
    The source code is published on GitHub:
    https://github.com/18628271760/MultipleFacesProcess

    For detailed usage, see the project's ReadMe.docx (with plenty of screenshots).

    For more about face recognition products, visit the ArcSoft Visual Open Platform.

  • How to use the ArcSoft ArcFace SDK from Python: 1. Environment notes; 2. Wrapping the basic ArcFace SDK data structures; 3. Wrapping the ArcFace SDK interfaces; 4. Calling the wrapped interfaces


    Our company needs a face recognition SDK in a project and has very strict information security requirements. After looking closely at the mainstream face recognition SDKs on the market, ArcSoft's ArcFace SDK best matched our needs: it has a free edition and can run fully offline, which fits our security requirements. The one drawback is that our project is mainly written in Python and ArcSoft does not provide an official Python SDK, so I wrapped the ArcFace C++ SDK in Python myself for use in the project. The main steps are written up here for discussion.

    1. Environment notes

    a. Note that a Win64 Python must use the ArcFace C++ (Win64) SDK; if the platforms do not match, the following error may occur.

    OSError: [WinError 193] %1 不是有效的 Win32 应用程序
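
    A quick way to check the bitness of the running Python interpreter (it must match the SDK build):

    import struct
    print(struct.calcsize("P") * 8)  # prints 64 for a 64-bit Python, 32 for a 32-bit one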
    

    b. Because the SDK involves raw memory operations, this article uses the following helpers provided by ctypes and the C runtime loaded via cdll:

    c_ubyte_p = POINTER(c_ubyte)
    memcpy = cdll.msvcrt.memcpy
    malloc = cdll.msvcrt.malloc
    malloc.restype = c_void_p
    free = cdll.msvcrt.free
    

    2. Wrapping the basic ArcFace SDK data structures

    When wrapping the data structures, pay close attention to the field types, otherwise the program may misbehave.

    class MRECT(Structure):  # face rectangle
       _fields_ = [(u'left', c_int32),
                   (u'top', c_int32),
                   (u'right', c_int32),
                   (u'bottom', c_int32)]
    
    
    class ASFVersion(Structure):  # version info: version number, build date, copyright
       _fields_ = [
           ('Version', c_char_p),
           ('BuildDate', c_char_p),
           ('CopyRight', c_char_p)]
    
    
    class ASFSingleFaceInfo(Structure):  # single-face info: face rect, face orientation
       _fields_ = [
           ('faceRect', MRECT),
           ('faceOrient', c_int32)]
    
    
    class ASFMultiFaceInfo(Structure):  # multi-face info: face rect array, face orientation array, face count
       _fields_ = [
           (u'faceRect', POINTER(MRECT)),
           (u'faceOrient', POINTER(c_int32)),
           (u'faceNum', c_int32)]
    
    
    class ASFFaceFeature(Structure):  # face feature: feature buffer, feature length
       _fields_ = [
           ('feature', c_void_p),
           ('featureSize', c_int32)]
    
    
    class ASFFace3DAngle(Structure):  # face 3D angle info
       _fields_ = [
           ('roll', c_void_p),
           ('yaw', c_void_p),
           ('pitch', c_void_p),
           ('status', c_void_p),
           ('num', c_int32)]
    
    
    class ASFAgeInfo(Structure):  # age info
       _fields_ = [
           (u'ageArray', c_void_p),
           (u'num', c_int32)]
    
    
    class ASFGenderInfo(Structure):  # gender info
       _fields_ = [
           (u'genderArray', c_void_p),
           (u'num', c_int32)]
    
    
    class ASFLivenessThreshold(Structure):  # liveness thresholds
       _fields_ = [
           (u'thresholdmodel_BGR', c_float),
           (u'thresholdmodel_IR', c_float)]  # float, matching the C struct (both thresholds are floats)
    
    
    class ASFLivenessInfo(Structure):  # liveness info
       _fields_ = [
           (u'isLive', c_void_p),
           (u'num', c_int32)]
    

    3. Wrapping the ArcFace SDK interfaces

    a. Before wrapping the interfaces, the DLLs have to be loaded; every DLL shipped with the ArcFace SDK needs to be loaded.
    b. This article uses the ASVL_PAF_RGB24_B8G8R8 image format.
    c. Every interface needs its return type and argument types declared; some argument types depend on the data structures defined above.

    from arcsoft_face_struct import *
    from ctypes import *
    from enum import Enum
    
    face_dll = CDLL("libarcsoft_face.dll")
    face_engine_dll = CDLL("libarcsoft_face_engine.dll")
    
    ASF_DETECT_MODE_VIDEO = 0x00000000
    ASF_DETECT_MODE_IMAGE = 0xFFFFFFFF
    
    ASF_NONE = 0x00000000
    ASF_FACE_DETECT = 0x00000001
    ASF_FACE_RECOGNITION = 0x00000004
    ASF_AGE = 0x00000008
    ASF_GENDER = 0x00000010
    ASF_FACE3DANGLE = 0x00000020
    ASF_LIVENESS = 0x00000080
    ASF_IR_LIVENESS = 0x00000400
    
    ASVL_PAF_RGB24_B8G8R8 = 0x201
    
    
    class ArcSoftFaceOrientPriority(Enum):
        ASF_OP_0_ONLY = 0x1,
        ASF_OP_90_ONLY = 0x2,
        ASF_OP_270_ONLY = 0x3,
        ASF_OP_180_ONLY = 0x4,
        ASF_OP_0_HIGHER_EXT = 0x5,
    
    
    activate = face_engine_dll.ASFActivation
    activate.restype = c_int32
    activate.argtypes = (c_char_p, c_char_p)
    
    
    init_engine = face_engine_dll.ASFInitEngine
    init_engine.restype = c_int32
    init_engine.argtypes = (c_long, c_int32, c_int32, c_int32, c_int32, POINTER(c_void_p))
    
    
    detect_face = face_engine_dll.ASFDetectFaces
    detect_face.restype = c_int32
    detect_face.argtypes = (c_void_p, c_int32, c_int32, c_int32, POINTER(c_ubyte), POINTER(ASFMultiFaceInfo))
    
    
    extract_feature = face_engine_dll.ASFFaceFeatureExtract
    extract_feature.restype = c_int32
    extract_feature.argtypes = (c_void_p, c_int32, c_int32, c_int32, POINTER(c_ubyte),
                                POINTER(ASFSingleFaceInfo), POINTER(ASFFaceFeature))
    
    
    compare_feature = face_engine_dll.ASFFaceFeatureCompare
    compare_feature.restype = c_int32
    compare_feature.argtypes = (c_void_p, POINTER(ASFFaceFeature),
                                POINTER(ASFFaceFeature), POINTER(c_float))
    
    
    set_liveness_param = face_engine_dll.ASFSetLivenessParam
    set_liveness_param.restype = c_int32
    set_liveness_param.argtypes = (c_void_p, POINTER(ASFLivenessThreshold))
    
    
    process = face_engine_dll.ASFProcess
    process.restype = c_int32
    process.argtypes = (c_void_p, c_int32, c_int32, c_int32, POINTER(c_ubyte),
                        POINTER(ASFMultiFaceInfo), c_int32)
    
    
    get_age = face_engine_dll.ASFGetAge
    get_age.restype = c_int32
    get_age.argtypes = (c_void_p, POINTER(ASFAgeInfo))
    
    
    get_gender = face_engine_dll.ASFGetGender
    get_gender.restype = c_int32
    get_gender.argtypes = (c_void_p, POINTER(ASFGenderInfo))
    
    
    get_3d_angle = face_engine_dll.ASFGetFace3DAngle
    get_3d_angle.restype = c_int32
    get_3d_angle.argtypes = (c_void_p, POINTER(ASFFace3DAngle))
    
    
    get_liveness_info = face_engine_dll.ASFGetLivenessScore
    get_liveness_info.restype = c_int32
    get_liveness_info.argtypes = (c_void_p, POINTER(ASFLivenessInfo))
    
    

    4. Calling the wrapped interfaces

    The calls below follow the flow chart shown in the original post (drawn with Microsoft Visio 2016).

    The result image in the original post shows the output of this flow; for space reasons only age, gender, and liveness are displayed.

    a. Activation
    Note that app_id and sdk_key must be bytes.

        app_id = b""
        sdk_key = b""
        ret = arcsoft_face_func.activate(app_id, sdk_key)  # activate the SDK (the first run needs a network connection)
        if ret == 0 or ret == 90114:
            print("激活成功")
        else:
            print("激活失败:", ret)
    

    b. Initialization
    All of the features to be used later must be combined into a single mask and passed in at initialization; here face detection, feature extraction, age, gender, 3D angle and RGB liveness are all enabled.

        mask = arcsoft_face_func.ASF_FACE_DETECT | \
                arcsoft_face_func.ASF_FACE_RECOGNITION | \
                arcsoft_face_func.ASF_AGE | \
                arcsoft_face_func.ASF_GENDER | \
                arcsoft_face_func.ASF_FACE3DANGLE |\
                arcsoft_face_func.ASF_LIVENESS
    
        engine = c_void_p()
        ret = arcsoft_face_func.init_engine(arcsoft_face_func.ASF_DETECT_MODE_IMAGE,
                                            arcsoft_face_func.ArcSoftFaceOrientPriority.ASF_OP_0_ONLY.value[0],
                                       30, 10, mask, byref(engine))
        if ret == 0:
            print("初始化成功")
        else:
            print("初始化失败:", ret)
    

    c. Face detection
    OpenCV is used to read the image for better compatibility, and a small helper class records the image information. Note that the ArcFace C++ SDK requires the input image width to be a multiple of 4, so the width is adjusted below.

    import cv2  # OpenCV is used to read and resize the input images


    class Image:
        def __init__(self):
            self.width = 0
            self.height = 0
            self.imageData = None


    def load_image(file_path):
        img = cv2.imread(file_path)
        sp = img.shape
        img = cv2.resize(img, (sp[1] // 4 * 4, sp[0]))  # width must be a multiple of 4 (4-byte alignment)

        image = Image()
        image.width = img.shape[1]
        image.height = img.shape[0]
        image.imageData = img
        return image
    
    ###################### Face detection ##################################
    
        # c_ubyte_p is expected to come from arcsoft_face_struct; if it is not
        # exported there, define it as: c_ubyte_p = POINTER(c_ubyte)
        image1 = load_image(r"1.jpg")
        image_bytes = bytes(image1.imageData)
        image_ubytes = cast(image_bytes, c_ubyte_p)
    
        detect_faces = ASFMultiFaceInfo()
        ret = arcsoft_face_func.detect_face(
            engine,
            image1.width,
            image1.height,
            arcsoft_face_func.ASVL_PAF_RGB24_B8G8R8,
            image_ubytes,
            byref(detect_faces)
        )
    
        if ret == 0:
            print("检测人脸成功")
        else:
            print("检测人脸失败:", ret)
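    
    To visualize the detection result (as in the screenshot above), the returned rectangles can be drawn back onto the OpenCV image. A small sketch, assuming MRECT exposes left/top/right/bottom fields as in the C header:
    
        # Sketch: draw each detected face box on the image and display it.
        img_show = image1.imageData.copy()
        for i in range(detect_faces.faceNum):
            rect = detect_faces.faceRect[i]
            cv2.rectangle(img_show, (rect.left, rect.top), (rect.right, rect.bottom), (0, 255, 0), 2)
        cv2.imshow("faces", img_show)
        cv2.waitKey(0)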
    

    d. Feature extraction
    Feature extraction only accepts a single face, so one face is first copied out of the multi-face result. The SDK reuses the feature buffer on the next extraction, so the extracted feature must be copied out immediately or it will be overwritten.
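    
    Note that the copy below relies on malloc/memcpy/free, which ctypes does not provide by itself. In the full project they are assumed to be bound from the C runtime; if your helper module does not already export them, a Windows sketch would be:
    
    from ctypes import cdll, c_size_t
    
    _crt = cdll.msvcrt  # Windows C runtime; adjust for other platforms
    malloc = _crt.malloc
    malloc.restype = c_void_p
    malloc.argtypes = (c_size_t,)
    free = _crt.free
    free.restype = None
    free.argtypes = (c_void_p,)
    memcpy = _crt.memcpy
    memcpy.restype = c_void_p
    memcpy.argtypes = (c_void_p, c_void_p, c_size_t)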

        single_face1 = ASFSingleFaceInfo()
        single_face1.faceRect = detect_faces.faceRect[0]
        single_face1.faceOrient = detect_faces.faceOrient[0]
    
        face_feature = ASFFaceFeature()
        ret = arcsoft_face_func.extract_feature(
            engine,
            image1.width,
            image1.height,
            arcsoft_face_func.ASVL_PAF_RGB24_B8G8R8,
            image_ubytes,
            single_face1,
            byref(face_feature)
        )
    
        if ret == 0:
            print("提取特征1成功")
        else:
            print("提取特征1失败:", ret)
    
        feature1 = ASFFaceFeature()
        feature1.featureSize = face_feature.featureSize
        feature1.feature = malloc(feature1.featureSize)
        memcpy(c_void_p(feature1.feature),
               c_void_p(face_feature.feature),
               feature1.featureSize)
    

    e. Feature comparison
    Extract the feature of a second face image in the same way (giving feature2); the two features can then be compared as follows.

        compare_threshold = c_float()
        ret = arcsoft_face_func.compare_feature(
            engine, feature1, feature2, compare_threshold
        )
    
        free(c_void_p(feature1.feature))
        free(c_void_p(feature2.feature))
    
        if ret == 0:
            print("特征比对成功,相似度:", compare_threshold.value)
        else:
            print("特征比对失败:", ret)
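    
    The returned value is a similarity score. A typical follow-up is to compare it against a threshold chosen for your scene; 0.8 below is only an example starting point, check the SDK documentation for the recommended value:
    
        SIMILARITY_THRESHOLD = 0.8  # example value only, tune for your own scene
        if compare_threshold.value >= SIMILARITY_THRESHOLD:
            print("same person, similarity =", compare_threshold.value)
        else:
            print("different person, similarity =", compare_threshold.value)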
    

    f. Age, gender, 3D angle
    The process interface currently drives age, gender, 3D angle and liveness detection. Age, gender and 3D angle support multiple faces, while liveness only supports a single face, so the two cases are handled separately below.

        process_mask = arcsoft_face_func.ASF_AGE | \
                       arcsoft_face_func.ASF_GENDER | \
                       arcsoft_face_func.ASF_FACE3DANGLE
    
        ret = arcsoft_face_func.process(
            engine,
            image1.width,
            image1.height,
            arcsoft_face_func.ASVL_PAF_RGB24_B8G8R8,
            image_ubytes,
            byref(detect_faces),
            c_int32(process_mask)
        )
    
        if ret == 0:
            print("process成功")
        else:
            print("process失败:", ret)
    
    ######################## Age ################################
    
        age_info = ASFAgeInfo()
        ret = arcsoft_face_func.get_age(engine, byref(age_info))
    
        if ret == 0:
            print("get_age 成功")
            age_ptr = cast(age_info.ageArray, POINTER(c_int))
            for i in range(age_info.num):
                print("face", i, "age:", age_ptr[i])
        else:
            print("get_age 失败:", ret)
    
    ####################### Gender #################################
    
        gender_info = ASFGenderInfo()
        ret = arcsoft_face_func.get_gender(engine, byref(gender_info))
    
        if ret == 0:
            print("get_gender 成功")
            gender_ptr = cast(gender_info.genderArray, POINTER(c_int))
            for i in range(gender_info.num):
                print("face", i, "gender:",
                      "女性" if (gender_ptr[i] == 1) else (
                          "男性" if (gender_ptr[i] == 0) else "未知"
                      ))
        else:
            print("get_gender 失败:", ret)
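    
    Because the age and gender arrays are index-aligned with the detected faces, one convenient pattern (my own sketch, not an SDK feature) is to merge them into a single per-face result list:
    
        # Sketch: assumes the get_age and get_gender calls above both succeeded
        # and that MRECT exposes left/top/right/bottom int fields.
        face_results = []
        for i in range(detect_faces.faceNum):
            rect = detect_faces.faceRect[i]
            face_results.append({
                "rect": (rect.left, rect.top, rect.right, rect.bottom),
                "age": age_ptr[i],
                "gender": "female" if gender_ptr[i] == 1 else ("male" if gender_ptr[i] == 0 else "unknown"),
            })
        print(face_results)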
    
    ####################### 3D Angle #################################
    
        angle_info = ASFFace3DAngle()
        ret = arcsoft_face_func.get_3d_angle(engine, byref(angle_info))
    
        if ret == 0:
            print("get_3d_angle 成功")
            roll_ptr = cast(angle_info.roll, POINTER(c_float))
            yaw_ptr = cast(angle_info.yaw, POINTER(c_float))
            pitch_ptr = cast(angle_info.pitch, POINTER(c_float))
            status_ptr = cast(angle_info.status, POINTER(c_int32))
            for i in range(angle_info.num):
                print("face", i,
                      "roll:", roll_ptr[i],
                      "yaw:", yaw_ptr[i],
                      "pitch:", pitch_ptr[i],
                      "status:", "正常" if status_ptr[i] == 0 else "出错")
    
        else:
            print("get_3d_angle 失败:", ret)
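    
    A common use of the 3D angle (again just a sketch, not something the SDK requires) is to skip faces whose pose is too extreme before spending time on feature extraction:
    
        # Sketch: assumes the get_3d_angle call above succeeded; the cut-off is arbitrary.
        MAX_ABS_ANGLE = 30.0  # example cut-off in degrees, tune for your scene
        for i in range(angle_info.num):
            if status_ptr[i] == 0 and abs(yaw_ptr[i]) <= MAX_ABS_ANGLE and abs(pitch_ptr[i]) <= MAX_ABS_ANGLE:
                print("face", i, "pose is acceptable for feature extraction")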
    

    g. RGB liveness
    Before running liveness detection it is advisable to set the liveness threshold according to your actual scene; if it is not set, the default threshold is used. Here the RGB liveness threshold is set to 0.75. Since liveness only supports a single face, each detected face is wrapped into a single-face structure and passed to the interface in turn.

    ######################### Set liveness threshold ###############################
        threshold_param = ASFLivenessThreshold()
        threshold_param.thresholdmodel_BGR = 0.75
        ret = arcsoft_face_func.set_liveness_param(engine, threshold_param)
    
        if ret == 0:
            print("set_liveness_param成功")
        else:
            print("set_liveness_param 失败:", ret)
    
        # Build a reusable single-face ASFMultiFaceInfo that will be filled with each
        # detected face in turn; named locals keep the underlying buffers alive.
        temp_rect = MRECT()
        temp_orient = c_long(0)
        temp_face_info = ASFMultiFaceInfo()
        temp_face_info.faceNum = 1
        temp_face_info.faceRect = POINTER(MRECT)(temp_rect)
        temp_face_info.faceOrient = POINTER(c_long)(temp_orient)
    
        for i in range(detect_faces.faceNum):
            temp_face_info.faceRect[0] = detect_faces.faceRect[i]
            temp_face_info.faceOrient[0] = detect_faces.faceOrient[i]
    
            ## RGB liveness detection for this face
            ret = arcsoft_face_func.process(
                engine,
                image1.width,
                image1.height,
                arcsoft_face_func.ASVL_PAF_RGB24_B8G8R8,
                image_ubytes,
                byref(temp_face_info),
                c_int32(arcsoft_face_func.ASF_LIVENESS)
            )
    
            if ret == 0:
                print("process成功")
            else:
                print("process失败:", ret)
    
            liveness_info = ASFLivenessInfo()
            ret = arcsoft_face_func.get_liveness_info(engine, byref(liveness_info))
    
            if ret == 0:
                print("get_liveness_info 成功")
                liveness_ptr = cast(liveness_info.isLive, POINTER(c_int))
                print("face", i, "liveness:",
                      "非真人" if (liveness_ptr[0] == 0) else (
                          "真人" if (liveness_ptr[0] == 1) else (
                              "不确定" if (liveness_ptr[0] == -1) else (
                                  "传入人脸数>1" if (liveness_ptr[0] == -2) else
                                  (liveness_ptr[0])
                              )
                          )
                      ))
            else:
                print("get_liveness_info 失败:", ret)
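    
    Once all processing is finished, release the engine with the uninit_engine binding sketched in section 3 (the Python name is assumed there, not part of the original listing):
    
        ret = arcsoft_face_func.uninit_engine(engine)
        if ret == 0:
            print("engine released")
        else:
            print("uninit failed:", ret)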
    
    