
    Screen post-processing effects are how screen-space special effects are implemented in games, and they help improve the look of the final image. This course series consists of nine short lessons that use demos to study post-processing effects commonly used in games. (Note: the series is meant to give you a basic understanding of post-processing effects and walk you through some simple ones; producing more refined, better-looking effects takes further study.)

    1) Real-Time Glow & Bloom

    The Bloom (Glow) effect makes bright objects bleed light, giving the image better lighting. Bloom is usually combined with HDR and tone mapping for better results. This lesson covers the relevant basics and gives a Unity-based implementation.

    2) Depth of Field

    Depth of field is a basic concept from photography: the range of distances in front of and behind the subject within which a camera lens (or other imaging device) produces an acceptably sharp image. Objects inside this range appear sharp; objects outside it appear blurred. Besides the basics, this lesson also gives a concrete implementation.

    The depth-of-field effect in Crysis

    3) Lens Flare (Streak)

    This lesson implements the streak effect with a relatively simple method based on the idea of the Dual Blur algorithm, and points to HDRP and URP versions of the streak effect for further study.

    The lens flare streak effect in Mass Effect 2

    4) Silhouette Rendering

    Silhouette rendering is also called outlining and often appears in non-photorealistic rendering. This lesson implements outlining in Unity using the Built-in pipeline. The images show, respectively, the outlined result, the original image, and the effect of adjusting the outline color.

    5) Radial Blur

    Radial blur shows up as streaks of blur radiating outward from a center point. It is often used in racing games and action effects to emphasize high-speed motion or the punch of a sudden camera zoom. This lesson implements radial blur in Unity using the Built-in pipeline.

    The radial blur effect in Need for Speed

    6) The Origin of Gamma Space

    Starting with this lesson, the post-processing effects are about color adjustment. This lesson introduces the two mainstream views on the origin of gamma space, the calculations behind each, and their practical significance.

    7) The Linear Space Workflow

    This lesson introduces gamma space and linear space, and how the Unity and Unreal engines support a linear-space workflow. With rendering requirements rising and more realistic physically based rendering (PBR) becoming standard, rendering in linear space for more accurate results is being adopted by more and more teams.

    8) Color Models

    The two workflows above ensure that the color data fed into the shader is what we expect. The shader then processes that color data, so let's look at some ways of representing and manipulating color. This lesson covers the HSV color model and how to convert between HSV and RGB; a small sketch of the conversion follows below.
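
    For reference, a minimal C++ sketch of the RGB-to-HSV conversion mentioned above (this is just the standard formula written out for illustration, not the course's code; the struct and function names here are my own):

    #include <algorithm>
    #include <cmath>

    struct Hsv { float h, s, v; };   // h in [0,1), s and v in [0,1]

    // Convert an RGB color (each channel in [0,1]) to HSV.
    Hsv RgbToHsv(float r, float g, float b) {
        float maxc  = std::max({r, g, b});
        float minc  = std::min({r, g, b});
        float delta = maxc - minc;

        Hsv out;
        out.v = maxc;                                   // value = brightest channel
        out.s = (maxc > 0.0f) ? delta / maxc : 0.0f;    // saturation = chroma / value

        if (delta <= 0.0f) {                            // grey: hue is undefined, use 0
            out.h = 0.0f;
        } else if (maxc == r) {
            out.h = std::fmod((g - b) / delta, 6.0f) / 6.0f;
        } else if (maxc == g) {
            out.h = ((b - r) / delta + 2.0f) / 6.0f;
        } else {
            out.h = ((r - g) / delta + 4.0f) / 6.0f;
        }
        if (out.h < 0.0f) out.h += 1.0f;                // wrap negative hue into [0,1)
        return out;
    }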

    9) Color Grading

    In this lesson we apply color grading to the final image in post-processing and go through some common color-grading operations, explained through saturation, brightness, contrast, and color inversion; their main purpose is to improve the look of the image.

    The effect of adjusting saturation


    (1) HDR format: in Unity this is RGBA with 16 bits per channel.
    Non-HDR: 8 bits per channel.
    (2) When stepping through the draw calls one by one, you will notice that the scene looks darker than the final result. This happens because these intermediate steps are stored in an HDR texture: the linear color data is displayed as-is, so it is wrongly interpreted as sRGB and therefore looks dark.
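
    As a rough C++ illustration of why this looks dark (these are the standard sRGB transfer functions, not code from this project):

    #include <cmath>

    // Standard sRGB encode/decode, per channel, values in [0, 1].
    float LinearToSrgb(float x) {
        return (x <= 0.0031308f) ? 12.92f * x
                                 : 1.055f * std::pow(x, 1.0f / 2.4f) - 0.055f;
    }

    float SrgbToLinear(float x) {
        return (x <= 0.04045f) ? x / 12.92f
                               : std::pow((x + 0.055f) / 1.055f, 2.4f);
    }

    // A linear value of 0.5 encodes to roughly 0.735 in sRGB, so showing the raw linear 0.5
    // where an sRGB value is expected makes the image look noticeably darker than intended.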
    (3) The Luminance function: an objective measure of how bright a color is.
    Formula: luminance = 0.2125 * Red + 0.7154 * Green + 0.0721 * Blue;

    I won't paste all of the code and logic here; it is all in the project.

    (4) In Bloom, the threshold is set to 1: only color values greater than 1 (i.e. HDR values) are processed.

    Tone mapping is then used to convert the HDR result to LDR for display.

    The final relationship: Bloom operates on the HDR result, which tone mapping then converts to LDR.

    So Bloom, HDR, and tone mapping are closely tied together.
    Finally, detailed explanations:
    Bloom explanation
    ToneMapping explanation
    And the Git project: Git project link
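
    To make the threshold-plus-tone-mapping relationship concrete, here is a minimal C++ sketch (illustrative only; the project's actual prefilter and tone-mapping operator, for example ACES, will differ):

    #include <algorithm>

    // Bloom prefilter with threshold = 1: only the HDR part above 1 contributes to bloom.
    float BloomPrefilter(float c, float threshold = 1.0f) {
        return std::max(0.0f, c - threshold);
    }

    // Simple Reinhard tone mapping: maps HDR values in [0, inf) into LDR [0, 1).
    float ReinhardToneMap(float c) {
        return c / (1.0f + c);
    }

    // Order described above: render in HDR -> add bloom -> tone map the result down to LDR.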

    (5) Color grading (Color Grading):
    Adjusting the colors of an image takes three steps:
    (1) Color correction: make the image match what is seen when observing the scene.
    (2) Color grading: change and correct the color and brightness of the final image; think of it as applying a filter.
    The two steps above can be merged into a single color-grading step.
    (3) Tone mapping: map HDR colors into the displayable range.

    An image with only tone mapping applied tends to look not very colorful, unless the image is very bright.
    ACES slightly increases the contrast of dark colors, but it is not a substitute for color grading.




    Detailed steps of color grading:

    (1) Post exposure: scales the color as a whole.

    // post exposure
    float3 ColorGradePostExposure(float3 color)
    {
        return color * _ColorAdjustments.x;
    }
    
    

    (2) Contrast: computed around the middle-grey value ACEScc_MIDGRAY (about 0.4135).
    High contrast: blacks become blacker and whites become whiter.

    Low contrast: both blacks and whites move toward middle grey.

    // Contrast: the difference between the brightest white and the darkest black
    // (high contrast makes the image brighter and crisper; low contrast looks grey and washed out).
    // ACEScc_MIDGRAY = 0.4135 is the middle-grey value.
    float3 ColorGradeingContrast(float3 color)
    {
        color = LinearToLogC(color);
        color =  (color - ACEScc_MIDGRAY) * _ColorAdjustments.y + ACEScc_MIDGRAY;
        return LogCToLinear(color);
    }
    
    

    (3) Color filter: simply multiply the color by the filter color.

    //  color filter
    float3 ColorGradeColorFilter(float3 color)
    {
        return color * _ColorFilterId.rgb;
    }
    

    (4) Hue shift (ColorGradingHueShift):

    float3 ColorGradingHueShift(float3 color)
    {
        color = RgbToHsv(color);
        float hue = color.x + _ColorAdjustments.z;   // hue shift is stored in .z (.x is already used for post exposure)
        //  keep the hue in (0,1): below 0 add 1; above 1 subtract 1
        color.x = RotateHue(hue,0.0,1.0);
        return HsvToRgb(color);
    }
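
    For reference, a minimal C++ sketch of the wrap-around behaviour described in the comment above (Unity's RotateHue does the equivalent; this is only an illustration, not its actual implementation):

    // Wrap a shifted hue back into [lo, hi] (here [0, 1]): below lo add the range, above hi subtract it.
    float RotateHue(float hue, float lo, float hi) {
        float range = hi - lo;
        if (hue < lo)       hue += range;
        else if (hue > hi)  hue -= range;
        return hue;
    }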
    


    Hue shifting is computed in HSV space.
    (5) Saturation:

    //  saturation:
    float3 ColorGradingSaturation(float3 color)
    {
    	//use Luminance() to get the brightness, then use it in the calculation
        float luminance = Luminance(color);
        return (color - luminance) * _ColorAdjustments.w + luminance;
    }
    

    Saturation is how vivid a color is, also called its purity.

    Saturation depends on the ratio of the chromatic component to the achromatic (grey) component in the color.

    The larger the chromatic component, the higher the saturation; the larger the achromatic component, the lower the saturation.


    Contents

    1. Model conversion

    1.1 Converting the yolov5-face .pt model to ONNX

    1.2 Converting the yolov5-face ONNX model to RKNN

    2. C++ code

    2.1 Padding-resize code

    2.2 yolov5-face post-processing (modified from Rockchip's yolov5 post-processing code)

    3. RV1126 driver upgrade

    4. Complete C++ code

    4.1 postprocess.h

    4.2 postprocess.cc

    4.3 main.cc


    1. Model conversion

    1.1 Converting the yolov5-face .pt model to ONNX

    Download the yolov5-face project from GitHub, then use its export.py to convert the PyTorch model to an ONNX model. Before converting, the code in yolo.py needs to be modified:

    def forward(self, x):
        # x = x.copy()  # for profiling
        z = []  # inference output
        # self.training |= self.export
        if self.export:
            for i in range(self.nl):
                x[i] = self.m[i](x[i])
                bs, _, ny, nx = x[i].shape  # x(bs,48,20,20) to x(bs,3,20,20,16)
                x[i] = x[i].view(bs, self.na, self.no, ny, nx).permute(0, 1, 3, 4, 2).contiguous()

    Remove the .permute(0, 1, 3, 4, 2) from this code, changing it to the following:

    def forward(self, x):
        # x = x.copy()  # for profiling
        z = []  # inference output
        # self.training |= self.export
        if self.export:
            for i in range(self.nl):
                x[i] = self.m[i](x[i])
                bs, _, ny, nx = x[i].shape  # x(bs,48,20,20) to x(bs,3,16,20,20)
                x[i] = x[i].view(bs, self.na, self.no, ny, nx).contiguous()
    

    The reason for this change is that our C++ code later parses the output as a 3*16*20*20 tensor rather than a 3*20*20*16 one.
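
    To make that layout concrete, here is a small illustrative indexing helper (the name head_index is my own; the addressing matches what process() does later):

    // For one output head stored as [anchor][channel][grid_h][grid_w] (3 x 16 x 20 x 20),
    // the flat offset of channel c for anchor a at grid cell (i, j) is
    //     (a * 16 + c) * grid_h * grid_w + i * grid_w + j,
    // which is exactly the (PROP_BOX_SIZE * a + c) * grid_len + i * grid_w + j addressing
    // used in process() below. If .permute(0, 1, 3, 4, 2) were kept, the 16 channels would
    // be the innermost dimension instead and this indexing would be wrong.
    static inline int head_index(int a, int c, int i, int j, int grid_h, int grid_w) {
        const int channels = 16;  // x y w h boxScore x1 y1 x2 y2 x3 y3 x4 y4 x5 y5 idScore
        return (a * channels + c) * grid_h * grid_w + i * grid_w + j;
    }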

    1.2 Converting the yolov5-face ONNX model to RKNN

    Because rknn-toolkit 1.6.0 reports an error when converting the yolov5-face model, first install rknn-toolkit 1.7.1 and then convert the model with onnx2rknn.py from https://github.com/airockchip/yolov5/tree/master/rknn. The conversion script is as follows.

    import os
    import sys
    import numpy as np
    from rknn.api import RKNN

    ONNX_MODEL = 'yolov5s-face.onnx'
    RKNN_MODEL = 'yolov5s-face.rknn'

    if __name__ == '__main__':

        # Create RKNN object
        rknn = RKNN(verbose=True)

        # pre-process config
        print('--> config model')
        rknn.config(mean_values=[[0, 0, 0]], std_values=[[255, 255, 255]], reorder_channel='0 1 2', target_platform='rv1126',
                    quantized_dtype='asymmetric_affine-u8', optimization_level=3, output_optimize=1)
        print('done')

        print('--> Loading model')
        ret = rknn.load_onnx(model=ONNX_MODEL)
        if ret != 0:
            print('Load model failed!')
            exit(ret)
        print('done')

        # Build model
        print('--> Building model')
        ret = rknn.build(do_quantization=True, dataset='./dataset.txt')
        if ret != 0:
            print('Build yolov5s failed!')
            exit(ret)
        print('done')

        # Export rknn model
        print('--> Export RKNN model')
        ret = rknn.export_rknn(RKNN_MODEL)
        if ret != 0:
            print('Export yolov5s.rknn failed!')
            exit(ret)
        print('done')

        rknn.release()
    

    2. C++ code

    The code here is based on Rockchip's example, with padding-resize handling added before and after it. Rockchip's original code is at rknpu/rknn/rknn_api/examples/rknn_yolov5_demo at master · rockchip-linux/rknpu · GitHub

    2.1 Padding-resize code

    The official Rockchip demo reads the image with OpenCL; here I changed it to read the image with OpenCV. For cross-compiling OpenCV, see the post 'ubuntu交叉编译RV1126的opencv库/ubuntu交叉编译opencv'.

    void padding_resize(cv::InputArray src, cv::OutputArray dst, cv::Size size) {
            float padd_w = 0;
            float padd_h = 0;
            float r = std::min(float(size.width) / src.cols(), float(size.height) / src.rows());
            int inside_w = round(src.cols() * r);
            int inside_h = round(src.rows() * r);
            padd_w = size.width - inside_w;
            padd_h = size.height - inside_h;
            cv::resize(src, dst, cv::Size(inside_w, inside_h));
            padd_w = padd_w / 2;
            padd_h = padd_h / 2;
            //pad the outer border with grey (114, 114, 114)
            printf("gain:%f, padd_w:%f,padd_h:%f. in padding_resize============\n", r, padd_w,padd_h);
            int top = int(round(padd_h - 0.1));
            int bottom = int(round(padd_h + 0.1));
            int left = int(round(padd_w - 0.1));
            int right = int(round(padd_w + 0.1));
            cv::copyMakeBorder(dst, dst, top, bottom, left, right, cv::BORDER_CONSTANT, cv::Scalar(114, 114, 114));
        }
     
    cv::Mat read_image(const char *image_path, int w, int h, int & img_width, int & img_height, cv::Mat & img) 
    {
        img = cv::imread(image_path);
        img_width = img.cols;
        img_height = img.rows;
        cv::Mat sample_resize, sample;
        padding_resize(img, sample_resize, cv::Size(w, h));
        //cv::resize(img, sample_resize, cv::Size(w, h));
        cv::cvtColor(sample_resize, sample, cv::COLOR_BGR2RGB);
        return sample;
    }
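
    A quick usage sketch of read_image (illustrative only; the file name and the 640x640 model input size are just example values):

    // (inside main(), for example)
    // Letterbox "test.jpg" to the 640x640 model input; orig keeps the unresized image
    // and img_w / img_h receive its original dimensions.
    int img_w = 0, img_h = 0;
    cv::Mat orig;
    cv::Mat model_input = read_image("test.jpg", 640, 640, img_w, img_h, orig);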

    2.2 yolov5-face post-processing (modified from Rockchip's yolov5 post-processing code)

    This is based on the code in Rockchip's yolov5 demo. I commented out the four lines of post-processing that map the box coordinates back to the original image, and modified the rest of the code for yolov5-face; in particular, process() now also decodes the five facial landmarks.

    static int process(uint8_t *input, int *anchor, int grid_h, int grid_w, int height, int width, int stride,
                       std::vector<float> &boxes, std::vector<float> &landMarks, std::vector<float> &objProbs, std::vector<int> &classId, 
                       float threshold, uint32_t zp, float scale)
    {
    
        int validCount = 0;
        int grid_len = grid_h * grid_w;//80*80, 40*40, 20*20.
        float thres = unsigmoid(threshold);
        uint8_t thres_u8 = qnt_f32_to_affine(thres, zp, scale);// quantize the threshold
        for (int a = 0; a < 3; a++)
        {
            for (int i = 0; i < grid_h; i++)
            {
                for (int j = 0; j < grid_w; j++)
                {
                    /**********************************************************************************************
                    The per-cell channel order is x y w h boxScore x1 y1 x2 y2 x3 y3 x4 y4 x5 y5 idScore, but the
                    data is stored plane by plane: first grid_h * grid_w values of x, then grid_h * grid_w values
                    of y, and so on. It is NOT stored as one complete record per cell
                    (x y w h boxScore x1 y1 ... idScore for the first cell, then the second cell, ...).
                    **********************************************************************************************/
                    uint8_t box_confidence = input[(PROP_BOX_SIZE * a + 4) * grid_len + i * grid_w + j];
                    if (box_confidence >= thres_u8)// thres_u8 already has the inverse sigmoid and quantization applied.
                    {
                        int offset = (PROP_BOX_SIZE * a) * grid_len + i * grid_w + j;//
                        uint8_t *in_ptr = input + offset; 
                        /* The *2 and -0.5 here undo the same transform applied inside the network.
                        deqnt_affine_to_f32 dequantizes: zp is the quantization zero point and scale is the quantization scale. */
                        float box_x = sigmoid(deqnt_affine_to_f32(*in_ptr, zp, scale)) * 2.0 - 0.5;
                        float box_y = sigmoid(deqnt_affine_to_f32(in_ptr[grid_len], zp, scale)) * 2.0 - 0.5;
                        float box_w = sigmoid(deqnt_affine_to_f32(in_ptr[2 * grid_len], zp, scale)) * 2.0;
                        float box_h = sigmoid(deqnt_affine_to_f32(in_ptr[3 * grid_len], zp, scale)) * 2.0;
                        
                        float x1 = (deqnt_affine_to_f32(in_ptr[5 * grid_len], zp, scale));
                        float y1 = (deqnt_affine_to_f32(in_ptr[6 * grid_len], zp, scale));
                        float x2 = (deqnt_affine_to_f32(in_ptr[7 * grid_len], zp, scale));
                        float y2 = (deqnt_affine_to_f32(in_ptr[8 * grid_len], zp, scale));
                        float x3 = (deqnt_affine_to_f32(in_ptr[9 * grid_len], zp, scale));
                        float y3 = (deqnt_affine_to_f32(in_ptr[10 * grid_len], zp, scale));
                        float x4 = (deqnt_affine_to_f32(in_ptr[11 * grid_len], zp, scale));
                        float y4 = (deqnt_affine_to_f32(in_ptr[12 * grid_len], zp, scale));
                        float x5 = (deqnt_affine_to_f32(in_ptr[13 * grid_len], zp, scale));
                        float y5 = (deqnt_affine_to_f32(in_ptr[14 * grid_len], zp, scale));
                        std::cout<<"landmark after deqnt_affine_to_f32:"<<std::endl;
                        std::cout<<"x1:"<<x1<<"  y1:"<<y1<<std::endl;
                        std::cout<<"x2:"<<x2<<"  y2:"<<y2<<std::endl;
                        std::cout<<"x3:"<<x3<<"  y3:"<<y3<<std::endl;
                        std::cout<<"x4:"<<x4<<"  y4:"<<y4<<std::endl;
                        std::cout<<"x5:"<<x5<<"  y5:"<<y5<<std::endl;
    
                        box_x = (box_x + j) * (float)stride;// box_x is a 0-1 offset within the cell, so add j; multiplying by stride maps it back to the 640 input.
                        box_y = (box_y + i) * (float)stride;// box_y is a 0-1 offset within the cell, so add i; multiplying by stride maps it back to the 640 input.
                        box_w = box_w * box_w * (float)anchor[a * 2];
                        box_h = box_h * box_h * (float)anchor[a * 2 + 1];
                        box_x -= (box_w / 2.0);
                        box_y -= (box_h / 2.0);
                        boxes.push_back(box_x);
                        boxes.push_back(box_y);
                        boxes.push_back(box_w);
                        boxes.push_back(box_h);
                   
                       x1 = x1 * (float)anchor[a * 2]     + j*(float)stride;
                       y1 = y1 * (float)anchor[a * 2 + 1] + i*(float)stride;
                       x2 = x2 * (float)anchor[a * 2]     + j*(float)stride;
                       y2 = y2 * (float)anchor[a * 2 + 1] + i*(float)stride;
                       x3 = x3 * (float)anchor[a * 2]     + j*(float)stride;
                       y3 = y3 * (float)anchor[a * 2 + 1] + i*(float)stride;
                       x4 = x4 * (float)anchor[a * 2]     + j*(float)stride;
                       y4 = y4 * (float)anchor[a * 2 + 1] + i*(float)stride;
                       x5 = x5 * (float)anchor[a * 2]     + j*(float)stride;
                       y5 = y5 * (float)anchor[a * 2 + 1] + i*(float)stride;
    
                       landMarks.push_back(x1);
                       landMarks.push_back(y1);
                       landMarks.push_back(x2);
                       landMarks.push_back(y2);
                       landMarks.push_back(x3);
                       landMarks.push_back(y3);
                       landMarks.push_back(x4);
                       landMarks.push_back(y4);
                       landMarks.push_back(x5);
                       landMarks.push_back(y5);
                       
    
                        //  printf("box_x=%.03f, box_y=%.03f, box_w=%.03f, box_h=%.03f\n"
                        //  , box_x, box_y, box_w, box_h);
    
                        uint8_t maxClassProbs = in_ptr[15 * grid_len];
                        int maxClassId = 0;
                        
                        for (int k = 0; k < OBJ_CLASS_NUM; ++k)
                        {
                            uint8_t prob = in_ptr[(15 + k) * grid_len];// find the highest class score and its id by comparison.
                            if (prob > maxClassProbs)
                            {
                                maxClassId = k;
                                maxClassProbs = prob;
                            }
                        }
                        
                        objProbs.push_back(sigmoid(deqnt_affine_to_f32(maxClassProbs, zp, scale)));
                        classId.push_back(maxClassId);
                        
    
                        //boxes, objProbs and classId are returned such that every 4 box values correspond to one objProb and one classId.
                        //std::cout<<"maxClassProbs in post_process:::"<<sigmoid(deqnt_affine_to_f32(maxClassProbs, zp, scale))<<std::endl;
                        //std::cout<<"maxClassId in post_process::"<<maxClassId<<std::endl;
                        validCount++;
    
                    }
                }
            }
        }
        return validCount;
    }

    3. RV1126 driver upgrade

    Since rknn-toolkit 1.7.1 was used to convert the model, the driver on the RV1126 board needs to be upgraded from 1.6.1 to 1.7.0. The procedure is as follows (see: rv1126更新驱动版本库_cumtchw-CSDN博客):

    1.	Push https://github.com/rockchip-linux/rknpu/tree/master/drivers/linux-armhf-puma/usr to the corresponding directory on the board.
    2.	Rename https://github.com/rockchip-linux/rknpu/blob/master/drivers/npu_ko/galcore_puma.ko to galcore.ko and push it to /lib/modules/galcore.ko on the board. You can first run find on the board to locate the original galcore.ko.
    3.	Reboot the board.
    

    After the upgrade, go to /usr/lib and create soft links with the following three commands.

    1.	[root@RV1126_RV1109:/usr/lib]# ln -snf libOpenVX.so.1.2 libOpenVX.so.1  
    2.	[root@RV1126_RV1109:/usr/lib]# ln -snf libOpenVX.so.1 libOpenVX.so  
    3.	[root@RV1126_RV1109:/usr/lib]# ln -snf libOpenCL.so.1.2 libOpenCL.so.1  
    

    4. Complete C++ code

    4.1 postprocess.h 

    #ifndef _RKNN_ZERO_COPY_DEMO_POSTPROCESS_H_
    #define _RKNN_ZERO_COPY_DEMO_POSTPROCESS_H_
    
    #include <stdint.h>
    #include <opencv.hpp>
    
    #define OBJ_NAME_MAX_SIZE 16
    #define OBJ_NUMB_MAX_SIZE 64
    
    //#define OBJ_CLASS_NUM     2 //80
    #define OBJ_CLASS_NUM     1 //yolov5-face has only one class.
    
    #define NMS_THRESH        0.3
    #define BOX_THRESH        0.87   //0.5
    
    // #define NMS_THRESH        0.5
    // #define BOX_THRESH        0.3   //0.5
    
    #define LAND_MARK_SIZE    10// 5 landmarks, each with an x and a y coordinate.
    #define PROP_BOX_SIZE     (5 + OBJ_CLASS_NUM + LAND_MARK_SIZE) //16
    //#define PROP_BOX_SIZE     (5+OBJ_CLASS_NUM) //10
    
    #define REPVGG_CLASS_NUM  27 // number of traffic-light classes for the RepVGG classifier
    extern char *repvgg_labels[REPVGG_CLASS_NUM];
    
    typedef struct _BOX_RECT
    {
        int left;
        int right;
        int top;
        int bottom;
        //std::vector<cv::Point2f> landmarkVec;
    } BOX_RECT;
    
    
    typedef struct __detect_result_t
    {
        char name[OBJ_NAME_MAX_SIZE];
        BOX_RECT box;
        std::vector<float> landmark;// stores the facial landmarks.
        float prop;
    } detect_result_t;
    
    typedef struct _detect_result_group_t
    {
        int id;
        int count;
        detect_result_t results[OBJ_NUMB_MAX_SIZE];
    } detect_result_group_t;
    
    int post_process(uint8_t *input0, uint8_t *input1, uint8_t *input2, int model_in_h, int model_in_w,
                     float conf_threshold, float nms_threshold, float scale_w, float scale_h,
                     std::vector<uint32_t> &qnt_zps, std::vector<float> &qnt_scales,
                     detect_result_group_t *group);
    
    
    int rknn_GetTop(
        float *pfProb,
        float *pfMaxProb,
        uint32_t *pMaxClass,
        uint32_t outputCount,
        uint32_t topNum);
    
    int loadLabelName(const char *locationFilename, char *label[], int classNum);
    
    #endif //_RKNN_ZERO_COPY_DEMO_POSTPROCESS_H_
    

    4.2 postprocess.cc

    // Copyright (c) 2021 by Rockchip Electronics Co., Ltd. All Rights Reserved.
    //
    // Licensed under the Apache License, Version 2.0 (the "License");
    // you may not use this file except in compliance with the License.
    // You may obtain a copy of the License at
    //
    //     http://www.apache.org/licenses/LICENSE-2.0
    //
    // Unless required by applicable law or agreed to in writing, software
    // distributed under the License is distributed on an "AS IS" BASIS,
    // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    // See the License for the specific language governing permissions and
    // limitations under the License.
    
    #include <stdio.h>
    #include <stdlib.h>
    #include <math.h>
    #include <string.h>
    #include <sys/time.h>
    #include <vector>
    #include <set>
    #include "postprocess.h"
    #include <stdint.h>
    #define LABEL_NALE_TXT_PATH "./model/coco_80_labels_list.txt"
    
    static char *labels[OBJ_CLASS_NUM];
    char *repvgg_labels[REPVGG_CLASS_NUM];
    
    //the original yolov5 anchors
    // const int anchor0[6] = {10, 13, 16, 30, 33, 23};
    // const int anchor1[6] = {30, 61, 62, 45, 59, 119};
    // const int anchor2[6] = {116, 90, 156, 198, 373, 326};
    
    //the yolov5-face anchors
    const int anchor0[6] = {4, 5, 8, 10, 13, 16};
    const int anchor1[6] = {23, 29, 43, 55, 73, 105};
    const int anchor2[6] = {146, 217, 231, 300, 335, 433};
    
    inline static int clamp(float val, int min, int max)
    {
        return val > min ? (val < max ? val : max) : min;
    }
    
    char *readLine(FILE *fp, char *buffer, int *len)
    {
        int ch;
        int i = 0;
        size_t buff_len = 0;
    
        buffer = (char *)malloc(buff_len + 1);
        if (!buffer)
            return NULL; // Out of memory
    
        while ((ch = fgetc(fp)) != '\n' && ch != EOF)
        {
            buff_len++;
            void *tmp = realloc(buffer, buff_len + 1);
            if (tmp == NULL)
            {
                free(buffer);
                return NULL; // Out of memory
            }
            buffer = (char *)tmp;
    
            buffer[i] = (char)ch;
            i++;
        }
        buffer[i] = '\0';
    
        *len = buff_len;
    
        // Detect end
        if (ch == EOF && (i == 0 || ferror(fp)))
        {
            free(buffer);
            return NULL;
        }
        return buffer;
    }
    
    int readLines(const char *fileName, char *lines[], int max_line)
    {
        FILE *file = fopen(fileName, "r");
        char *s;
        int i = 0;
        int n = 0;
        while ((s = readLine(file, s, &n)) != NULL)
        {
            lines[i++] = s;
            if (i >= max_line)
                break;
        }
        return i;
    }
    
    int loadLabelName(const char *locationFilename, char *label[], int classNum)
    {
        printf("loadLabelName %s\n", locationFilename);
        readLines(locationFilename, label, classNum);
        return 0;
    }
    
    static float CalculateOverlap(float xmin0, float ymin0, float xmax0, float ymax0, float xmin1, float ymin1, float xmax1, float ymax1)
    {
        float w = fmax(0.f, fmin(xmax0, xmax1) - fmax(xmin0, xmin1) + 1.0);
        float h = fmax(0.f, fmin(ymax0, ymax1) - fmax(ymin0, ymin1) + 1.0);
        float i = w * h;
        float u = (xmax0 - xmin0 + 1.0) * (ymax0 - ymin0 + 1.0) + (xmax1 - xmin1 + 1.0) * (ymax1 - ymin1 + 1.0) - i;
        return u <= 0.f ? 0.f : (i / u);
    }
    
    static int nms(int validCount, std::vector<float> &outputLocations, std::vector<int> classIds, std::vector<int> &order, int filterId, float threshold)
    {
        for (int i = 0; i < validCount; ++i)
        {
            if (order[i] == -1 || classIds[i] != filterId)
            {
                continue;
            }
            int n = order[i];
            for (int j = i + 1; j < validCount; ++j)
            {
                int m = order[j];
                if (m == -1 || classIds[i] != filterId)
                {
                    continue;
                }
                float xmin0 = outputLocations[n * 4 + 0];
                float ymin0 = outputLocations[n * 4 + 1];
                float xmax0 = outputLocations[n * 4 + 0] + outputLocations[n * 4 + 2];
                float ymax0 = outputLocations[n * 4 + 1] + outputLocations[n * 4 + 3];
    
                float xmin1 = outputLocations[m * 4 + 0];
                float ymin1 = outputLocations[m * 4 + 1];
                float xmax1 = outputLocations[m * 4 + 0] + outputLocations[m * 4 + 2];
                float ymax1 = outputLocations[m * 4 + 1] + outputLocations[m * 4 + 3];
    
                float iou = CalculateOverlap(xmin0, ymin0, xmax0, ymax0, xmin1, ymin1, xmax1, ymax1);
    
                if (iou > threshold)
                {
                    order[j] = -1;
                }
            }
        }
        return 0;
    }
    
    static int quick_sort_indice_inverse(
        std::vector<float> &input,
        int left,
        int right,
        std::vector<int> &indices)
    {
        float key;
        int key_index;
        int low = left;
        int high = right;
        if (left < right)
        {
            key_index = indices[left];
            key = input[left];
            while (low < high)
            {
                while (low < high && input[high] <= key)
                {
                    high--;
                }
                input[low] = input[high];
                indices[low] = indices[high];
                while (low < high && input[low] >= key)
                {
                    low++;
                }
                input[high] = input[low];
                indices[high] = indices[low];
            }
            input[low] = key;
            indices[low] = key_index;
            quick_sort_indice_inverse(input, left, low - 1, indices);
            quick_sort_indice_inverse(input, low + 1, right, indices);
        }
        return low;
    }
    
    static float sigmoid(float x)
    {
        return 1.0 / (1.0 + expf(-x));
    }
    
    static float unsigmoid(float y)
    {
        return -1.0 * logf((1.0 / y) - 1.0);
    }
    
    inline static int32_t __clip(float val, float min, float max)
    {
        float f = val <= min ? min : (val >= max ? max : val);
        return f;
    }
    
    static uint8_t qnt_f32_to_affine(float f32, uint32_t zp, float scale)
    {
        float dst_val = (f32 / scale) + zp;
        uint8_t res = (uint8_t)__clip(dst_val, 0, 255);
        return res;
    }
    
    static float deqnt_affine_to_f32(uint8_t qnt, uint32_t zp, float scale)
    {
        return ((float)qnt - (float)zp) * scale;
    }
    
    
    static int process(uint8_t *input, int *anchor, int grid_h, int grid_w, int height, int width, int stride,
                       std::vector<float> &boxes, std::vector<float> &landMarks, std::vector<float> &objProbs, std::vector<int> &classId, 
                       float threshold, uint32_t zp, float scale)
    {
    
        int validCount = 0;
        int grid_len = grid_h * grid_w;//80*80, 40*40, 20*20.
        float thres = unsigmoid(threshold);
        uint8_t thres_u8 = qnt_f32_to_affine(thres, zp, scale);// quantize the threshold
        for (int a = 0; a < 3; a++)
        {
            for (int i = 0; i < grid_h; i++)
            {
                for (int j = 0; j < grid_w; j++)
                {
                    /**********************************************************************************************
                    The per-cell channel order is x y w h boxScore x1 y1 x2 y2 x3 y3 x4 y4 x5 y5 idScore, but the
                    data is stored plane by plane: first grid_h * grid_w values of x, then grid_h * grid_w values
                    of y, and so on. It is NOT stored as one complete record per cell
                    (x y w h boxScore x1 y1 ... idScore for the first cell, then the second cell, ...).
                    **********************************************************************************************/
                    uint8_t box_confidence = input[(PROP_BOX_SIZE * a + 4) * grid_len + i * grid_w + j];
                    if (box_confidence >= thres_u8)// thres_u8 already has the inverse sigmoid and quantization applied.
                    {
                        int offset = (PROP_BOX_SIZE * a) * grid_len + i * grid_w + j;//
                        uint8_t *in_ptr = input + offset; 
                        /* The *2 and -0.5 here undo the same transform applied inside the network.
                        deqnt_affine_to_f32 dequantizes: zp is the quantization zero point and scale is the quantization scale. */
                        float box_x = sigmoid(deqnt_affine_to_f32(*in_ptr, zp, scale)) * 2.0 - 0.5;
                        float box_y = sigmoid(deqnt_affine_to_f32(in_ptr[grid_len], zp, scale)) * 2.0 - 0.5;
                        float box_w = sigmoid(deqnt_affine_to_f32(in_ptr[2 * grid_len], zp, scale)) * 2.0;
                        float box_h = sigmoid(deqnt_affine_to_f32(in_ptr[3 * grid_len], zp, scale)) * 2.0;
                        
                        float x1 = (deqnt_affine_to_f32(in_ptr[5 * grid_len], zp, scale));
                        float y1 = (deqnt_affine_to_f32(in_ptr[6 * grid_len], zp, scale));
                        float x2 = (deqnt_affine_to_f32(in_ptr[7 * grid_len], zp, scale));
                        float y2 = (deqnt_affine_to_f32(in_ptr[8 * grid_len], zp, scale));
                        float x3 = (deqnt_affine_to_f32(in_ptr[9 * grid_len], zp, scale));
                        float y3 = (deqnt_affine_to_f32(in_ptr[10 * grid_len], zp, scale));
                        float x4 = (deqnt_affine_to_f32(in_ptr[11 * grid_len], zp, scale));
                        float y4 = (deqnt_affine_to_f32(in_ptr[12 * grid_len], zp, scale));
                        float x5 = (deqnt_affine_to_f32(in_ptr[13 * grid_len], zp, scale));
                        float y5 = (deqnt_affine_to_f32(in_ptr[14 * grid_len], zp, scale));
                        std::cout<<"landmark after deqnt_affine_to_f32:"<<std::endl;
                        std::cout<<"x1:"<<x1<<"  y1:"<<y1<<std::endl;
                        std::cout<<"x2:"<<x2<<"  y2:"<<y2<<std::endl;
                        std::cout<<"x3:"<<x3<<"  y3:"<<y3<<std::endl;
                        std::cout<<"x4:"<<x4<<"  y4:"<<y4<<std::endl;
                        std::cout<<"x5:"<<x5<<"  y5:"<<y5<<std::endl;
    
                        box_x = (box_x + j) * (float)stride;// box_x is a 0-1 offset within the cell, so add j; multiplying by stride maps it back to the 640 input.
                        box_y = (box_y + i) * (float)stride;// box_y is a 0-1 offset within the cell, so add i; multiplying by stride maps it back to the 640 input.
                        box_w = box_w * box_w * (float)anchor[a * 2];
                        box_h = box_h * box_h * (float)anchor[a * 2 + 1];
                        box_x -= (box_w / 2.0);
                        box_y -= (box_h / 2.0);
                        boxes.push_back(box_x);
                        boxes.push_back(box_y);
                        boxes.push_back(box_w);
                        boxes.push_back(box_h);
                   
                       x1 = x1 * (float)anchor[a * 2]     + j*(float)stride;
                       y1 = y1 * (float)anchor[a * 2 + 1] + i*(float)stride;
                       x2 = x2 * (float)anchor[a * 2]     + j*(float)stride;
                       y2 = y2 * (float)anchor[a * 2 + 1] + i*(float)stride;
                       x3 = x3 * (float)anchor[a * 2]     + j*(float)stride;
                       y3 = y3 * (float)anchor[a * 2 + 1] + i*(float)stride;
                       x4 = x4 * (float)anchor[a * 2]     + j*(float)stride;
                       y4 = y4 * (float)anchor[a * 2 + 1] + i*(float)stride;
                       x5 = x5 * (float)anchor[a * 2]     + j*(float)stride;
                       y5 = y5 * (float)anchor[a * 2 + 1] + i*(float)stride;
    
                       landMarks.push_back(x1);
                       landMarks.push_back(y1);
                       landMarks.push_back(x2);
                       landMarks.push_back(y2);
                       landMarks.push_back(x3);
                       landMarks.push_back(y3);
                       landMarks.push_back(x4);
                       landMarks.push_back(y4);
                       landMarks.push_back(x5);
                       landMarks.push_back(y5);
                       
    
                        //  printf("box_x=%.03f, box_y=%.03f, box_w=%.03f, box_h=%.03f\n"
                        //  , box_x, box_y, box_w, box_h);
    
                        uint8_t maxClassProbs = in_ptr[15 * grid_len];
                        int maxClassId = 0;
                        
                        for (int k = 0; k < OBJ_CLASS_NUM; ++k)
                        {
                            uint8_t prob = in_ptr[(15 + k) * grid_len];// find the highest class score and its id by comparison.
                            if (prob > maxClassProbs)
                            {
                                maxClassId = k;
                                maxClassProbs = prob;
                            }
                        }
                        
                        objProbs.push_back(sigmoid(deqnt_affine_to_f32(maxClassProbs, zp, scale)));
                        classId.push_back(maxClassId);
                        
    
                        //boxes, objProbs and classId are returned such that every 4 box values correspond to one objProb and one classId.
                        //std::cout<<"maxClassProbs in post_process:::"<<sigmoid(deqnt_affine_to_f32(maxClassProbs, zp, scale))<<std::endl;
                        //std::cout<<"maxClassId in post_process::"<<maxClassId<<std::endl;
                        validCount++;
    
                    }
                }
            }
        }
        return validCount;
    }
    
    int post_process(uint8_t *input0, uint8_t *input1, uint8_t *input2, int model_in_h, int model_in_w,
                     float conf_threshold, float nms_threshold, float scale_w, float scale_h,
                     std::vector<uint32_t> &qnt_zps, std::vector<float> &qnt_scales,
                     detect_result_group_t *group)
    {
        static int init = -1;
        if (init == -1)
        {
            int ret = 0;
            ret = loadLabelName(LABEL_NALE_TXT_PATH, labels, OBJ_CLASS_NUM);
            if (ret < 0)
            {
                return -1;
            }
    
            init = 0;
        }
        memset(group, 0, sizeof(detect_result_group_t));
    
        std::vector<float> filterBoxes;
        std::vector<float> landMarks;
        std::vector<float> objProbs;
        std::vector<int> classId;
    
        // stride 8
        int stride0 = 8;
        int grid_h0 = model_in_h / stride0;//80*80
        int grid_w0 = model_in_w / stride0;
        int validCount0 = 0;
        validCount0 = process(input0, (int *)anchor0, grid_h0, grid_w0, model_in_h, model_in_w,
                              stride0, filterBoxes, landMarks, objProbs, classId, conf_threshold, qnt_zps[0], qnt_scales[0]);
        
        // stride 16
        int stride1 = 16;
        int grid_h1 = model_in_h / stride1;//40*40
        int grid_w1 = model_in_w / stride1;
        int validCount1 = 0;
        validCount1 = process(input1, (int *)anchor1, grid_h1, grid_w1, model_in_h, model_in_w,
                              stride1, filterBoxes, landMarks, objProbs, classId, conf_threshold, qnt_zps[1], qnt_scales[1]);
    
        // stride 32
        int stride2 = 32;
        int grid_h2 = model_in_h / stride2;//20*20
        int grid_w2 = model_in_w / stride2;
        int validCount2 = 0;
        validCount2 = process(input2, (int *)anchor2, grid_h2, grid_w2, model_in_h, model_in_w,
                              stride2, filterBoxes, landMarks, objProbs, classId,     conf_threshold, qnt_zps[2], qnt_scales[2]);
    
        int validCount = validCount0 + validCount1 + validCount2;
        printf("validCount=%d\n", validCount);
        // no object detect
        if (validCount <= 0)
        {
            return 0;
        }
    
        std::vector<int> indexArray;
        for (int i = 0; i < validCount; ++i)
        {
            indexArray.push_back(i);// start with the identity order; it is re-sorted by score below.
        }
    
        quick_sort_indice_inverse(objProbs, 0, validCount - 1, indexArray);// sorts the indices in descending order of score.
    
        std::set<int> class_set(std::begin(classId), std::end(classId));
    
        for (auto c : class_set)// c is the int class id.
        { 
            std::cout<<"c:::"<<c<<std::endl;
            //static int nms(int validCount, std::vector<float> &outputLocations, std::vector<int> classIds, std::vector<int> &order, int filterId, float threshold)
            nms(validCount, filterBoxes, classId, indexArray, c, nms_threshold);
        }
    
        int last_count = 0;
        group->count = 0;
        /* box valid detect target */
        for (int i = 0; i < validCount; ++i)
        {
    
            if (indexArray[i] == -1 || i >= OBJ_NUMB_MAX_SIZE)
            {
                continue;
            }
            int n = indexArray[i];
    
            float x1 = filterBoxes[n * 4 + 0];
            float y1 = filterBoxes[n * 4 + 1];
            float x2 = x1 + filterBoxes[n * 4 + 2];
            float y2 = y1 + filterBoxes[n * 4 + 3];
            int id = classId[n];
            float obj_conf = objProbs[i];
    
            float landmark_x1 = landMarks[n * 10 + 0];
            float landmark_y1 = landMarks[n * 10 + 1];
            float landmark_x2 = landMarks[n * 10 + 2];
            float landmark_y2 = landMarks[n * 10 + 3];
            float landmark_x3 = landMarks[n * 10 + 4];
            float landmark_y3 = landMarks[n * 10 + 5];
            float landmark_x4 = landMarks[n * 10 + 6];
            float landmark_y4 = landMarks[n * 10 + 7];
            float landmark_x5 = landMarks[n * 10 + 8];
            float landmark_y5 = landMarks[n * 10 + 9];
    
    
            // group->results[last_count].box.left = (int)(clamp(x1, 0, model_in_w) / scale_w);
            // group->results[last_count].box.top = (int)(clamp(y1, 0, model_in_h) / scale_h);
            // group->results[last_count].box.right = (int)(clamp(x2, 0, model_in_w) / scale_w);
            // group->results[last_count].box.bottom = (int)(clamp(y2, 0, model_in_h) / scale_h);
    
            group->results[last_count].box.left = (int)(clamp(x1, 0, model_in_w) );// clamp keeps the value inside the model input range.
            group->results[last_count].box.top = (int)(clamp(y1, 0, model_in_h) );
            group->results[last_count].box.right = (int)(clamp(x2, 0, model_in_w) );
            group->results[last_count].box.bottom = (int)(clamp(y2, 0, model_in_h) );
            
            group->results[last_count].landmark.push_back((int)(clamp(landmark_x1, 0, model_in_w)));
            group->results[last_count].landmark.push_back((int)(clamp(landmark_y1, 0, model_in_h)));
            group->results[last_count].landmark.push_back((int)(clamp(landmark_x2, 0, model_in_w)));
            group->results[last_count].landmark.push_back((int)(clamp(landmark_y2, 0, model_in_h)));       
            group->results[last_count].landmark.push_back((int)(clamp(landmark_x3, 0, model_in_w)));
            group->results[last_count].landmark.push_back((int)(clamp(landmark_y3, 0, model_in_h)));
            group->results[last_count].landmark.push_back((int)(clamp(landmark_x4, 0, model_in_w)));
            group->results[last_count].landmark.push_back((int)(clamp(landmark_y4, 0, model_in_h)));
            group->results[last_count].landmark.push_back((int)(clamp(landmark_x5, 0, model_in_w)));
            group->results[last_count].landmark.push_back((int)(clamp(landmark_y5, 0, model_in_h)));
    
            group->results[last_count].prop = obj_conf;
            char *label = labels[id];
            strncpy(group->results[last_count].name, label, OBJ_NAME_MAX_SIZE);
    
            // printf("result %2d: (%4d, %4d, %4d, %4d), %s\n", i, group->results[last_count].box.left, group->results[last_count].box.top,
            //        group->results[last_count].box.right, group->results[last_count].box.bottom, label);
            last_count++;
        }
        group->count = last_count;
    
        return 0;
    }
    
    
    int rknn_GetTop(
        float *pfProb,
        float *pfMaxProb,
        uint32_t *pMaxClass,
        uint32_t outputCount,
        uint32_t topNum)
    {
        uint32_t i, j;
    
    #define MAX_TOP_NUM 20
        if (topNum > MAX_TOP_NUM)
            return 0;
    
        memset(pfMaxProb, 0, sizeof(float) * topNum);
        memset(pMaxClass, 0xff, sizeof(float) * topNum);
    
        for (j = 0; j < topNum; j++)
        {
            for (i = 0; i < outputCount; i++)
            {
                if ((i == *(pMaxClass + 0)) || (i == *(pMaxClass + 1)) || (i == *(pMaxClass + 2)) ||
                    (i == *(pMaxClass + 3)) || (i == *(pMaxClass + 4)))
                {
                    continue;
                }
    
                if (pfProb[i] > *(pfMaxProb + j))
                {
                    *(pfMaxProb + j) = pfProb[i];
                    *(pMaxClass + j) = i;
                }
            }
        }
    
        return 1;
    }

     4.3 main.cc

    // Copyright (c) 2021 by Rockchip Electronics Co., Ltd. All Rights Reserved.
    //
    // Licensed under the Apache License, Version 2.0 (the "License");
    // you may not use this file except in compliance with the License.
    // You may obtain a copy of the License at
    //
    //     http://www.apache.org/licenses/LICENSE-2.0
    //
    // Unless required by applicable law or agreed to in writing, software
    // distributed under the License is distributed on an "AS IS" BASIS,
    // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    // See the License for the specific language governing permissions and
    // limitations under the License.
    
    /*-------------------------------------------
                    Includes
    -------------------------------------------*/
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/time.h>
    #include <dlfcn.h>
    #include <vector>
    #include <opencv.hpp>
    
    #define _BASETSD_H
    
    #define STB_IMAGE_IMPLEMENTATION
    #include "stb/stb_image.h"
    #define STB_IMAGE_RESIZE_IMPLEMENTATION
    #include <stb/stb_image_resize.h>
    
    #undef cimg_display
    #define cimg_display 0
    #include "CImg/CImg.h"
    
    #include "drm_func.h"
    #include "rga_func.h"
    #include "rknn_api.h"
    #include "postprocess.h"
    
    #include "async_client.h"
    #include "mqtt.h"
    
    using namespace cv;
    
    #define PERF_WITH_POST 1
    #define REPVGG_LABEL_NALE_TXT_PATH "./model/repvgg_label.txt"
    
    #define  logchw printf("function:%s,line:%d=========\n", __FUNCTION__, __LINE__);
    
    
    using namespace std;
    const string DFLT_SERVER_ADDRESS	{ "tcp://192.168.1.10:1883" };  
    const string CLIENT_ID				{ "paho_cpp_async_publish" };
    const string PERSIST_DIR			{ "./persist" };
    const string TOPIC { "hello" };
    const char* PAYLOAD1 = "Hello World!";
    const char* PAYLOAD2 = "Hi there!";
    const char* PAYLOAD3 = "Is anyone listening?";
    const char* PAYLOAD4 = "Someone is always listening.";
    const char* LWT_PAYLOAD = "Last will and testament.";
    const int  QOS = 1;
    const auto TIMEOUT = std::chrono::seconds(10);
    
    
    
    
    using namespace cimg_library;
    /*-------------------------------------------
                      Functions
    -------------------------------------------*/
    static void printRKNNTensor(rknn_tensor_attr *attr)
    {
        printf("index=%d name=%s n_dims=%d dims=[%d %d %d %d] n_elems=%d size=%d fmt=%d type=%d qnt_type=%d fl=%d zp=%d scale=%f\n",
               attr->index, attr->name, attr->n_dims, attr->dims[3], attr->dims[2], attr->dims[1], attr->dims[0],
               attr->n_elems, attr->size, 0, attr->type, attr->qnt_type, attr->fl, attr->zp, attr->scale);
    }
    
    inline const char* get_type_string(rknn_tensor_type type)
    {
        switch(type) {
        case RKNN_TENSOR_FLOAT32: return "FP32";
        case RKNN_TENSOR_FLOAT16: return "FP16";
        case RKNN_TENSOR_INT8: return "INT8";
        case RKNN_TENSOR_UINT8: return "UINT8";
        case RKNN_TENSOR_INT16: return "INT16";
        default: return "UNKNOW";
        }
    }
    
    inline const char* get_qnt_type_string(rknn_tensor_qnt_type type)
    {
        switch(type) {
        case RKNN_TENSOR_QNT_NONE: return "NONE";
        case RKNN_TENSOR_QNT_DFP: return "DFP";
        case RKNN_TENSOR_QNT_AFFINE_ASYMMETRIC: return "AFFINE";
        default: return "UNKNOW";
        }
    }
    
    inline const char* get_format_string(rknn_tensor_format fmt)
    {
        switch(fmt) {
        case RKNN_TENSOR_NCHW: return "NCHW";
        case RKNN_TENSOR_NHWC: return "NHWC";
        default: return "UNKNOW";
        }
    }
    
    static void dump_tensor_attr(rknn_tensor_attr *attr)
    {
        printf("  index=%d, name=%s, n_dims=%d, dims=[%d, %d, %d, %d], n_elems=%d, size=%d, fmt=%s, type=%s, qnt_type=%s, "
               "zp=%d, scale=%f\n",
               attr->index, attr->name, attr->n_dims, attr->dims[3], attr->dims[2], attr->dims[1], attr->dims[0],
               attr->n_elems, attr->size, get_format_string(attr->fmt), get_type_string(attr->type),
               get_qnt_type_string(attr->qnt_type), attr->zp, attr->scale);
    }
    
    double __get_us(struct timeval t) { return (t.tv_sec * 1000000 + t.tv_usec); }
    
    static unsigned char *load_data(FILE *fp, size_t ofst, size_t sz)
    {
        unsigned char *data;
        int ret;
    
        data = NULL;
    
        if (NULL == fp)
        {
            return NULL;
        }
    
        ret = fseek(fp, ofst, SEEK_SET);
        if (ret != 0)
        {
            printf("blob seek failure.\n");
            return NULL;
        }
    
        data = (unsigned char *)malloc(sz);
        if (data == NULL)
        {
            printf("buffer malloc failure.\n");
            return NULL;
        }
        ret = fread(data, 1, sz, fp);
        return data;
    }
    
    static unsigned char *load_model(const char *filename, int *model_size)
    {
    
        FILE *fp;
        unsigned char *data;
    
        fp = fopen(filename, "rb");
        if (NULL == fp)
        {
            printf("Open file %s failed.\n", filename);
            return NULL;
        }
    
        fseek(fp, 0, SEEK_END);
        int size = ftell(fp);
    
        data = load_data(fp, 0, size);
    
        fclose(fp);
    
        *model_size = size;
        return data;
    }
    
    static unsigned char *load_repvgg_model(const char *filename, int *model_size)
    {
        FILE *fp = fopen(filename, "rb");
        if (fp == nullptr)
        {
            printf("fopen %s fail!\n", filename);
            return NULL;
        }
        fseek(fp, 0, SEEK_END);
        int model_len = ftell(fp);
        unsigned char *model = (unsigned char *)malloc(model_len);
        fseek(fp, 0, SEEK_SET);
        if (model_len != fread(model, 1, model_len, fp))
        {
            printf("fread %s fail!\n", filename);
            free(model);
            return NULL;
        }
        *model_size = model_len;
        if (fp)
        {
            fclose(fp);
        }
        return model;
    }
    
    
    void padding_resize(cv::InputArray src, cv::OutputArray dst, cv::Size size) {
            float padd_w = 0;
            float padd_h = 0;
            float r = std::min(float(size.width) / src.cols(), float(size.height) / src.rows());
            int inside_w = round(src.cols() * r);
            int inside_h = round(src.rows() * r);
            padd_w = size.width - inside_w;
            padd_h = size.height - inside_h;
            cv::resize(src, dst, cv::Size(inside_w, inside_h));
            padd_w = padd_w / 2;
            padd_h = padd_h / 2;
            //pad the outer border with grey (114, 114, 114)
            printf("gain:%f, padd_w:%f,padd_h:%f. in padding_resize============\n", r, padd_w,padd_h);
            int top = int(round(padd_h - 0.1));
            int bottom = int(round(padd_h + 0.1));
            int left = int(round(padd_w - 0.1));
            int right = int(round(padd_w + 0.1));
            cv::copyMakeBorder(dst, dst, top, bottom, left, right, cv::BORDER_CONSTANT, cv::Scalar(114, 114, 114));
        }
    
    cv::Mat read_image(const char *image_path, int w, int h, int & img_width, int & img_height, cv::Mat & img) 
    {
        img = cv::imread(image_path);
        img_width = img.cols;
        img_height = img.rows;
        cv::Mat sample_resize, sample;
        padding_resize(img, sample_resize, cv::Size(w, h));
        //cv::resize(img, sample_resize, cv::Size(w, h));
        cv::cvtColor(sample_resize, sample, cv::COLOR_BGR2RGB);
        return sample;
    }
    
    /*-------------------------------------------
                      Main Functions
    -------------------------------------------*/
    int main(int argc, char **argv)
    {
        int status = 0;
        char *yolov5_model_name = NULL;
        rknn_context yolov5_ctx;
        unsigned int handle;
        size_t actual_size = 0;
        int yolov5_img_width = 0;
        int yolov5_img_height = 0;
        int img_channel = 0;
        const float nms_threshold = NMS_THRESH;
        const float box_conf_threshold = BOX_THRESH;
        struct timeval start_time, stop_time;
        int ret;
    
        rknn_context repvgg_ctx;
        int repvgg_model_len = 0;
        unsigned char *repvgg_model;
        int repvgg_img_width = 0;
        int repvgg_img_height = 0;
    
    
        //if (argc != 4)
        //{
            //printf("Usage: %s <rknn model> <bmp> \n", argv[0]);
            //return -1;
        //}
    
        printf("post process config: box_conf_threshold = %.2f, nms_threshold = %.2f\n",
               box_conf_threshold, nms_threshold);
    
        yolov5_model_name = (char *)argv[1];
        char *image_name = argv[2];
        string	address  = (argc > 3) ? string(argv[3]) : DFLT_SERVER_ADDRESS,
    	clientID = (argc > 4) ? string(argv[4]) : CLIENT_ID;
    
    #ifdef MQTT
        //mqtt init
        cout << "Initializing for server '" << address << "'..." << endl;
    	mqtt::async_client client(address, clientID, PERSIST_DIR);
    	callback cb;
    	client.set_callback(cb);
    	auto connOpts = mqtt::connect_options_builder()
    		.clean_session()
    		.will(mqtt::message(TOPIC, LWT_PAYLOAD, QOS))
    		.finalize();
    
        connOpts.set_keep_alive_interval(120 * 5);
    	connOpts.set_clean_session(true);
    	connOpts.set_automatic_reconnect(true);
        
        cout << "\nConnecting..." << endl;
        mqtt::token_ptr conntok = client.connect(connOpts);
        cout << "Waiting for the connection..." << endl;
        conntok->wait();
        cout << "  .connect..OK" << endl;
    #endif
    
        /* Create the yolov5 neural network */
        printf("===============================================Loading yolov5 mode=============================================\n");
        int model_data_size = 0;
        unsigned char *model_data = load_model(yolov5_model_name, &model_data_size);
        logchw
        ret = rknn_init(&yolov5_ctx, model_data, model_data_size, 0);
        if (ret < 0)
        {
            printf("rknn_init error ret=%d\n", ret);
            return -1;
        }
        logchw 
        rknn_sdk_version version;
        ret = rknn_query(yolov5_ctx, RKNN_QUERY_SDK_VERSION, &version,
                         sizeof(rknn_sdk_version));
        if (ret < 0)
        {
            printf("rknn_init error ret=%d\n", ret);
            return -1;
        }
        printf("sdk version: %s driver version: %s\n", version.api_version,
               version.drv_version);
    
    
        rknn_input_output_num  io_num;
        ret = rknn_query(yolov5_ctx, RKNN_QUERY_IN_OUT_NUM, &io_num, sizeof(io_num));
        if (ret < 0)
        {
            printf("rknn_init error ret=%d\n", ret);
            return -1;
        }
        printf("model input num: %d, output num: %d\n", io_num.n_input,
               io_num.n_output);
    
    
        rknn_tensor_attr input_attrs[io_num.n_input];
        memset(input_attrs, 0, sizeof(input_attrs));
        for (int i = 0; i < io_num.n_input; i++)
        {
            input_attrs[i].index = i;
            ret = rknn_query(yolov5_ctx, RKNN_QUERY_INPUT_ATTR, &(input_attrs[i]),
                             sizeof(rknn_tensor_attr));
            if (ret < 0)
            {
                printf("rknn_init error ret=%d\n", ret);
                return -1;
            }
            dump_tensor_attr(&(input_attrs[i]));
        }
    
        rknn_tensor_attr output_attrs[io_num.n_output];
        memset(output_attrs, 0, sizeof(output_attrs));
        for (int i = 0; i < io_num.n_output; i++)
        {
            output_attrs[i].index = i;
            ret = rknn_query(yolov5_ctx, RKNN_QUERY_OUTPUT_ATTR, &(output_attrs[i]),
                             sizeof(rknn_tensor_attr));
            dump_tensor_attr(&(output_attrs[i]));
            if(output_attrs[i].qnt_type != RKNN_TENSOR_QNT_AFFINE_ASYMMETRIC || output_attrs[i].type != RKNN_TENSOR_UINT8)
            {
                fprintf(stderr,"The Demo required for a Affine asymmetric u8 quantized rknn model, but output quant type is %s, output data type is %s\n", 
                        get_qnt_type_string(output_attrs[i].qnt_type),get_type_string(output_attrs[i].type));
                return -1;
            }
        }
    
        int channel = 3;
        int yolov5_width = 0;
        int yolov5_height = 0;
        if (input_attrs[0].fmt == RKNN_TENSOR_NCHW)// in this format the dims array is ordered W,H,C,N; our model uses this format.
        {
            printf("model is NCHW input fmt\n");
            yolov5_width = input_attrs[0].dims[0];
            yolov5_height = input_attrs[0].dims[1];
        }
        else// RKNN_TENSOR_NHWC: in this format the dims array is ordered C,W,H,N.
        {
            printf("model is NHWC input fmt\n");
            yolov5_width = input_attrs[0].dims[1];
            yolov5_height = input_attrs[0].dims[2];
        }
    
        printf("model input yolov5_height=%d, yolov5_width=%d, channel=%d\n", yolov5_height, yolov5_width, channel);
    
        rknn_input yolov5_inputs[1];
        memset(yolov5_inputs, 0, sizeof(yolov5_inputs));
        yolov5_inputs[0].index = 0;
        yolov5_inputs[0].type = RKNN_TENSOR_UINT8;
        yolov5_inputs[0].size = yolov5_width * yolov5_height * channel;
        yolov5_inputs[0].fmt = RKNN_TENSOR_NHWC;    //the demo code originally used NHWC here (it had been changed to NCHW at one point).
        yolov5_inputs[0].pass_through = 0;
    
    
        cv::Mat img;
        cv::Mat sample = read_image(image_name, yolov5_width, yolov5_height,yolov5_img_width,yolov5_img_height, img);
        yolov5_inputs[0].buf = sample.data;
        for(int i =0; i< sample.rows ;i++)
        {
            for(int j = 0; j < sample.cols; j++)
            {
                //std::cout<<"sample0:::"<<(double)sample.at<Vec3f>(i,j)[0];
                //std::cout<<"sample1:::"<<(double)sample.at<Vec3f>(i,j)[1];
                //std::cout<<"sample2::"<<(double)sample.at<Vec3f>(i,j)[2];
            }
        }
        
        printf("yolov5_img_width:%d\n, yolov5_img_height:%d\n", yolov5_img_width,yolov5_img_height);
    
    
        gettimeofday(&start_time, NULL);
        rknn_inputs_set(yolov5_ctx, io_num.n_input, yolov5_inputs);// this call handles channel swapping, normalization, quantization and the NHWC-to-NCHW conversion
    
        rknn_output yolov5_outputs[io_num.n_output];
        memset(yolov5_outputs, 0, sizeof(yolov5_outputs));
        for (int i = 0; i < io_num.n_output; i++)
        {
            yolov5_outputs[i].want_float = 0;
        }
        ret = rknn_run(yolov5_ctx, NULL);
        ret = rknn_outputs_get(yolov5_ctx, io_num.n_output, yolov5_outputs, NULL);
    
    
        //post process
        float scale_w = (float)yolov5_width / yolov5_img_width;
        float scale_h = (float)yolov5_height / yolov5_img_height;
    
        detect_result_group_t detect_result_group;
        std::vector<float> out_scales;
        std::vector<uint32_t> out_zps;
        for (int i = 0; i < io_num.n_output; ++i)
        {
            out_scales.push_back(output_attrs[i].scale);
            out_zps.push_back(output_attrs[i].zp);
            cout<<"out_scales["<<i<<"]:"<<output_attrs[i].scale<<endl;
            cout<<"out_zps["<<i<<"]:"<<output_attrs[i].zp<<endl;
        }
        
    
        printf("scale_w::::::::::%f,scale_h::::::%f\n", scale_w, scale_h);
        post_process((uint8_t *)yolov5_outputs[0].buf, (uint8_t *)yolov5_outputs[1].buf, (uint8_t *)yolov5_outputs[2].buf, yolov5_height, yolov5_width,
                    box_conf_threshold, nms_threshold, scale_w, scale_h, out_zps, out_scales, &detect_result_group);
        
        gettimeofday(&stop_time, NULL);
        printf("once run use %f ms\n",
            (__get_us(stop_time) - __get_us(start_time)) / 1000);
    
    
        // Draw Objects
        char text[256];
    
        printf("detect_result_group.count::::%d\n", detect_result_group.count);
        printf("yolov5_width:%d, yolov5_img_width:%d, yolov5_height:%d, yolov5_img_height:%d, after process\n",
         yolov5_width, yolov5_img_width, yolov5_height, yolov5_img_height);
        // letterbox gain: the smaller of the two scale factors
        float gain = scale_w;
        if (scale_h < scale_w)
        {
            gain = scale_h;
        }
        printf("gain:%f\n", gain);
        float pad0 = (yolov5_width - yolov5_img_width * gain) / 2;
        float pad1 = (yolov5_height - yolov5_img_height * gain) / 2;
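        // read_image() is assumed to letterbox the source: it scales by `gain`
        // (keeping the aspect ratio) and centers the result, so pad0 / pad1 are
        // the borders added on each side. A model-space coordinate x therefore
        // maps back to the original image as (x - pad0) / gain, which is what
        // the loop below applies to every box corner and landmark.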
        
    
        std::vector<cv::Mat> detRetImg = {};
        std::vector<cv::Mat> classfiInputImg = {};
        for (int i = 0; i < detect_result_group.count; i++)
        {
            detect_result_t *det_result = &(detect_result_group.results[i]);
            sprintf(text, "%s %.2f", det_result->name, det_result->prop);
            printf("%s @ (%d %d %d %d) %f\n",
                det_result->name,
                det_result->box.left, det_result->box.top, det_result->box.right, det_result->box.bottom,
                det_result->prop);
            int x1 = det_result->box.left;
            int y1 = det_result->box.top;
            int x2 = det_result->box.right;
            int y2 = det_result->box.bottom;
            
            x1 -= pad0;
            y1 -= pad1;
            x2 -= pad0;
            y2 -= pad1;
    
            x1 /= gain;
            y1 /= gain;
            x2 /= gain;
            y2 /= gain;
    
            int landmark_x1 = (det_result->landmark[0] - pad0)/gain;
            int landmark_y1 = (det_result->landmark[1] - pad1)/gain;
            int landmark_x2 = (det_result->landmark[2] - pad0)/gain;
            int landmark_y2 = (det_result->landmark[3] - pad1)/gain;
            int landmark_x3 = (det_result->landmark[4] - pad0)/gain;
            int landmark_y3 = (det_result->landmark[5] - pad1)/gain;
            int landmark_x4 = (det_result->landmark[6] - pad0)/gain;
            int landmark_y4 = (det_result->landmark[7] - pad1)/gain;
            int landmark_x5 = (det_result->landmark[8] - pad0)/gain;
            int landmark_y5 = (det_result->landmark[9] - pad1)/gain;
    
    
            printf("det_result->box.left:%d\n, det_result->box.top:%d,det_result->box.right:%d,det_result->box.bottom:%d\n",
            det_result->box.left,det_result->box.top,det_result->box.right,det_result->box.bottom);
            printf("pad0:%f, pad1:%f, gain:%f\n",pad0, pad1, gain);
            printf("x1:%d, y1:%d,x2:%d,y2:%d\n",x1, y1, x2, y2);
    
            cv::rectangle(img, cv::Point(x1, y1), cv::Point(x2, y2), CV_RGB(0, 255, 0));
            circle(img, cv::Point(landmark_x1, landmark_y1), 3,Scalar(255,0,0),-1);
            circle(img, cv::Point(landmark_x2, landmark_y2), 3,Scalar(255,0,0),-1);
            circle(img, cv::Point(landmark_x3, landmark_y3), 3,Scalar(255,0,0),-1);
            circle(img, cv::Point(landmark_x4, landmark_y4), 3,Scalar(255,0,0),-1);
            circle(img, cv::Point(landmark_x5, landmark_y5), 3,Scalar(0,255,0),-1);
            std::cout<<"landmark::====="<<landmark_x1<<":"<<landmark_y1<<std::endl;
            std::cout<<"landmark::====="<<landmark_x2<<":"<<landmark_y2<<std::endl;
            std::cout<<"landmark::====="<<landmark_x3<<":"<<landmark_y3<<std::endl;
            std::cout<<"landmark::====="<<landmark_x4<<":"<<landmark_y4<<std::endl;
            std::cout<<"landmark::====="<<landmark_x5<<":"<<landmark_y5<<std::endl;
    
            putText(img,text,Point(x1,y1-12),FONT_HERSHEY_SIMPLEX,1,Scalar(255,23,0),2,8);
        }
        cv::imwrite("./out.jpg", img);
    
        ret = rknn_outputs_release(yolov5_ctx, io_num.n_output, yolov5_outputs);
    
    
    
    #ifdef MQTT  
        // Now try with a listener
        cout << "\nSending algorithm result ..." << endl;
        action_listener listener;
        mqtt::message_ptr pubmsg = mqtt::make_message(TOPIC, repvggResult.c_str());
        mqtt::delivery_token_ptr pubtok = client.publish(pubmsg, nullptr, listener);
        pubtok->wait();
        cout << "  ...OK" << endl;
    #endif
        
    
    #if 0
        //yolov5 loop test
        for(int i = 0; i < 5; i++)
        {
            cv::Mat img;
            cv::Mat sample = read_image(image_name, yolov5_width, yolov5_height,yolov5_img_width,yolov5_img_height, img);
            yolov5_inputs[0].buf = sample.data;
            printf("yolov5_img_width:%d\n, yolov5_img_height:%d\n", yolov5_img_width,yolov5_img_height);
    
            gettimeofday(&start_time, NULL);
            rknn_inputs_set(yolov5_ctx, io_num.n_input, yolov5_inputs);  // handles channel swap, normalization, quantization and NHWC -> NCHW conversion
    
            rknn_output yolov5_outputs[io_num.n_output];
            memset(yolov5_outputs, 0, sizeof(yolov5_outputs));
            for (int i = 0; i < io_num.n_output; i++)
            {
                yolov5_outputs[i].want_float = 0;
            }
            ret = rknn_run(yolov5_ctx, NULL);
            ret = rknn_outputs_get(yolov5_ctx, io_num.n_output, yolov5_outputs, NULL);
    
            //post process
            float scale_w = (float)yolov5_width / yolov5_img_width;
            float scale_h = (float)yolov5_height / yolov5_img_height;
    
            detect_result_group_t detect_result_group;
            std::vector<float> out_scales;
            std::vector<uint32_t> out_zps;
            for (int i = 0; i < io_num.n_output; ++i)
            {
                out_scales.push_back(output_attrs[i].scale);
                out_zps.push_back(output_attrs[i].zp);
            }
            
    
            printf("scale_w::::::::::%f,scale_h::::::%f\n", scale_w, scale_h);
            post_process((uint8_t *)yolov5_outputs[0].buf, (uint8_t *)yolov5_outputs[1].buf, (uint8_t *)yolov5_outputs[2].buf, yolov5_height, yolov5_width,
                        box_conf_threshold, nms_threshold, scale_w, scale_h, out_zps, out_scales, &detect_result_group);
            
            ret = rknn_outputs_release(yolov5_ctx, io_num.n_output, yolov5_outputs);
            gettimeofday(&stop_time, NULL);
            printf("once run use %f ms\n",
                (__get_us(stop_time) - __get_us(start_time)) / 1000);
        }
    #endif
    
        // release
        ret = rknn_destroy(yolov5_ctx);
        if (model_data)
        {
            free(model_data);
        }
    
        return 0;
    }
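
    The read_image() helper called above is not shown in this excerpt. To make the pad0/pad1/gain math easier to follow, here is a minimal letterbox (padding resize) sketch. The signature is taken from the call site above; the body, the pad color and the BGR-to-RGB swap are assumptions, and the actual implementation from the padding-resize reference below may differ.

    #include <algorithm>
    #include <opencv2/opencv.hpp>

    // Hypothetical letterbox helper matching the call
    //   read_image(image_name, yolov5_width, yolov5_height, yolov5_img_width, yolov5_img_height, img)
    // Returns a model_w x model_h image; img receives the original picture and
    // img_w / img_h its original size.
    cv::Mat read_image(const char *path, int model_w, int model_h,
                       int &img_w, int &img_h, cv::Mat &img)
    {
        img = cv::imread(path);  // 8UC3, BGR
        img_w = img.cols;
        img_h = img.rows;

        // Scale so the image fits inside the model input while keeping the aspect ratio.
        float gain = std::min((float)model_w / img_w, (float)model_h / img_h);
        int new_w = (int)(img_w * gain);
        int new_h = (int)(img_h * gain);

        cv::Mat resized;
        cv::resize(img, resized, cv::Size(new_w, new_h));

        // Center the resized image and fill the borders (the fill value here is arbitrary).
        int pad_w = model_w - new_w;
        int pad_h = model_h - new_h;
        cv::Mat padded;
        cv::copyMakeBorder(resized, padded,
                           pad_h / 2, pad_h - pad_h / 2,
                           pad_w / 2, pad_w - pad_w / 2,
                           cv::BORDER_CONSTANT, cv::Scalar(114, 114, 114));

        // Assumption: the converted rknn model expects RGB, so swap channels here.
        cv::cvtColor(padded, padded, cv::COLOR_BGR2RGB);
        return padded;
    }

    With this mapping, a detection coordinate x in model space corresponds to (x - pad0) / gain in the original image, which matches the unpadding loop in main() above.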
    

    References:

    YOLOv5 output post-processing in C++ (yolov5输出后处理_C++实现)

    Cross-compiling OpenCV for RV1126 on Ubuntu (ubuntu交叉编译RV1126的opencv库)

    Padding resize with OpenCV (opencv实现padding resize)

    Rockchip RV1126 official YOLOv5 inference demo (瑞芯微RV1126的YOLOv5官方推理demo)

    https://github.com/rockchip-linux/rknn-toolkit/issues/93
