  • Specular highlight detection

    2021-01-13 20:55:50

    Python implementation of the highlight-detection part of the paper "Automatic segmentation and inpainting of specular highlights for endoscopic imaging"

    The code is still messy, but it runs as-is; I will tidy it up and add comments when I have time.

    import cv2
    import numpy as np
    import matplotlib.pyplot as plt
    import scipy.ndimage
    
    from skimage import measure, filters
    
    img = cv2.imread('../data/fazhi/95.jpg')
    w = img.shape[0]
    h = img.shape[1]
    
    # module 1
    # OpenCV loads images in BGR order, so index 0 is blue and index 2 is red
    cB = img[:, :, 0]
    cG = img[:, :, 1]
    cR = img[:, :, 2]
    
    cE = 0.2989 * cR + 0.5870 * cG + 0.1140 * cB
    
    
    def calc_module1_specular_mask(cE, cG, cB, T1):
        # 95th percentile of each channel (the MATLAB original used prctile(c(:), 95))
        p95_cG = np.percentile(cG, 95)
        p95_cE = np.percentile(cE, 95)
        p95_cB = np.percentile(cB, 95)
        rGE = p95_cG / p95_cE
        rBE = p95_cB / p95_cE
        # a pixel is marked specular when all three channel tests pass
        mask = (cG > rGE * T1) & (cB > rBE * T1) & (cE > T1)
        img_new = (mask.astype(np.float64) * 255).reshape(w, h, 1)
        return img_new
    
    
    module1_specular_mask = calc_module1_specular_mask(cE, cG, cB, T1=240)
    cv2.imwrite('../data/New/95-model1.jpg',module1_specular_mask)
    
    
    # cv2.imshow('s',img)
    
    
    def calc_centroid_color_info(specular_mask_T2_abs, img):
        # dilate the mask twice and take the ring between the two dilations;
        # its pixels sample the surrounding (non-specular) color of each region
        kernel = np.ones((2, 2), np.uint8)
        dilated_mask_1 = cv2.dilate(specular_mask_T2_abs, kernel, iterations=1)
        kernel = np.ones((4, 4), np.uint8)
        dilated_mask_2 = cv2.dilate(specular_mask_T2_abs, kernel, iterations=1)
        centroid_color_area = dilated_mask_2 - dilated_mask_1
        labeled_area = measure.label(centroid_color_area)  # connected-component labeling
        num_region = labeled_area.max()
        centroid_color_info = []
        for i in range(1, num_region + 1):
            info = {}
            [row_index, col_index] = np.where(labeled_area == i)
            info['centroid_row'] = np.mean(row_index)
            info['centroid_col'] = np.mean(col_index)
            # average per channel, not over the whole pixel block
            info['centroid_color'] = np.mean(img[row_index, col_index, :], axis=0)
            centroid_color_info.append(info)
    
        return centroid_color_info
    
    
    def calc_distance(x, y, x1, y1):
        distance_to_centroid = np.sqrt((x - x1) ** 2 + (y - y1) ** 2)
        return distance_to_centroid
    
    
    def find_the_nearest_region(centroid_color_info, pixel_row, pixel_col):
        num_region = len(centroid_color_info)
        nearest_region_index = 0
        nearest_distance = 1e6
        for j in range(num_region):  # start at 0 so the first region is considered too
            distance_to_centroid = calc_distance(pixel_row, pixel_col, centroid_color_info[j]['centroid_row'],
                                                 centroid_color_info[j]['centroid_col'])
            if distance_to_centroid < nearest_distance:
                nearest_distance = distance_to_centroid
                nearest_region_index = j
    
        nearest_region = centroid_color_info[nearest_region_index]
        return nearest_region
    
    
    def filling_image_using_centroid_color(specular_mask_T2_abs, img):
        # fill possible specular highlights with the centroid color of the nearest region
        centroid_color_info = calc_centroid_color_info(specular_mask_T2_abs, img)
        mask_2d = specular_mask_T2_abs.reshape(specular_mask_T2_abs.shape[0], specular_mask_T2_abs.shape[1])
        [row_index, col_index] = np.where(mask_2d == 255)
        filled_img = img.copy()  # copy so the input image is not overwritten in place
        for i in range(len(row_index)):
            # look up the nearest centroid color for every specular point and fill it
            nearest_region = find_the_nearest_region(centroid_color_info, row_index[i], col_index[i])
            filled_img[row_index[i], col_index[i], :] = nearest_region['centroid_color']
        return filled_img
    
    
    def contrast_coeffcient(c):
        # weighting coefficient: mean / (mean + std) of the channel
        mean_c = np.mean(c)
        std_c = np.std(c)
        return mean_c / (mean_c + std_c)
    
    
    def calc_modul2_specular_mask(filled_img, T2_rel, cR, cG, cB):
        # median-filter each channel to estimate the specular-free background
        fR = cv2.medianBlur(filled_img[:, :, 0], 31).astype(np.float64)
        fG = cv2.medianBlur(filled_img[:, :, 1], 31).astype(np.float64)
        fB = cv2.medianBlur(filled_img[:, :, 2], 31).astype(np.float64)
    
        # guard against division by zero (MATLAB original: f(f < eps) = 1e7);
        # the float conversion above is needed, since a uint8 array cannot hold 1e7
        eps = 2.2204e-16
        fR[fR < eps] = 1e7
        fG[fG < eps] = 1e7
        fB[fB < eps] = 1e7
    
        tR = contrast_coeffcient(cR)
        tG = contrast_coeffcient(cG)
        tB = contrast_coeffcient(cB)
    
        # relative response of each channel against its filtered background
        max_img = np.stack(((tR * cR / fR), (tG * cG / fG), (tB * cB / fB)), axis=2)
        e_max = np.amax(max_img, 2)
        module2_specular_mask = e_max > T2_rel
        return module2_specular_mask
    
    
    # module 2
    specular_mask_T2_abs = calc_module1_specular_mask(cE, cG, cB, T1=190)
    
    filled_img = filling_image_using_centroid_color(specular_mask_T2_abs, img)
    plt.imshow(filled_img)
    plt.show()
    module2_specular_mask = calc_modul2_specular_mask(filled_img, T2_rel=1.2, cR=cR, cG=cG, cB=cB)
    
    # final mask: union of the module 1 and module 2 masks
    final_mask = np.zeros((w, h, 1))
    final_mask[module2_specular_mask | (module1_specular_mask[:, :, 0] == 255)] = 255
    
    N_min = 5000
    T3 = 5
    
    
    def postprocessing(final_mask, cE, N_min, T3):
        # cE and T3 come from the paper's interface but are unused in this simplified version
        kernel = np.ones((3, 3), np.uint8)
        final_mask = cv2.erode(final_mask, kernel, iterations=1)
        labeled_area = measure.label(final_mask)
        num_region = np.max(labeled_area)
        post_specular_mask = final_mask
        for i in range(1, num_region + 1):
            index = np.where(labeled_area == i)
            # np.where returns a tuple of index arrays, so the pixel count is len(index[0]);
            # regions with too many pixels are unlikely to be highlights and are dropped
            if len(index[0]) >= N_min:
                post_specular_mask[index] = 0
    
        return post_specular_mask
    
    
    mask = postprocessing(final_mask, cE, N_min=3000, T3=5)
    
    mg_gray = cv2.imread('../data/fazhi/95.jpg', cv2.IMREAD_GRAYSCALE)
    # Otsu: treat the image as two gray-level classes (object and background)
    # and pick the threshold that best separates them
    thresh = filters.threshold_otsu(mg_gray)
    
    # segment by the threshold
    TTTT = np.zeros((w, h))
    dst = (mg_gray >= thresh) * 255.0
    
    # intersection of the specular mask and the Otsu foreground
    TTTT[(mask.reshape(w, h) > 0) & (dst > 0)] = 255
    
    image2 = np.concatenate([TTTT, mask.reshape(w, h), dst], axis=1)
    plt.set_cmap("binary")
    plt.imshow(image2)
    plt.show()
    cv2.imwrite('../data/New/95_mask.jpg', TTTT)
    cv2.imwrite('../data/New/95_mask-image2.jpg', image2)
    
    
    
    ### inpainting
    
    decay_win_size = 10
    decay_cof = 20
    
    
    def InpaintingArnold2010(mask, img, decay_win_size, decay_cof):
        filled_img = filling_image_using_centroid_color(mask, img)
        cv2.imwrite('../data/New/95filled.jpg', filled_img)
        sig = 8
        gaussian_filtered_img = cv2.GaussianBlur(filled_img, (3, 3), sig, sig)
    
        # decaying weights around the mask
        # (MATLAB: imfilter(double(specular_mask), ones(decay_win_size)/decay_cof));
        # convert the 0/255 mask to 0/1 to match MATLAB's double(logical) semantics
        mask01 = (np.asarray(mask).reshape(w, h) > 0).astype(np.float64)
        mx = scipy.ndimage.convolve(mask01, np.ones((decay_win_size, decay_win_size)) / decay_cof, mode='nearest')
        mx = mx + mask01
        mx = (mx > 1).astype(np.float64)
        mx = mx.reshape(w, h, 1)
        # blend: smoothed fill inside the (expanded) mask, original image outside
        inpainted_img = mx * gaussian_filtered_img + (1 - mx) * img
        return inpainted_img
    
    
    inpainted_img = InpaintingArnold2010(TTTT, img, decay_win_size, decay_cof)
    cv2.imwrite('../data/New/95inpaint.jpg', inpainted_img)
    
    

    I then added a fixed threshold of my own and took the union of the masks:

    import cv2
    import numpy as np
    import matplotlib.pyplot as plt
    
    from skimage import measure, filters
    
    img = cv2.imread('../data/fazhi/8.jpg')
    w = img.shape[0]
    h = img.shape[1]
    
    # module 1
    # OpenCV loads images in BGR order, so index 0 is blue and index 2 is red
    cB = img[:, :, 0]
    cG = img[:, :, 1]
    cR = img[:, :, 2]
    
    cE = 0.2989 * cR + 0.5870 * cG + 0.1140 * cB
    
    
    def calc_module1_specular_mask(cE, cG, cB, T1):
        # 95th percentile of each channel (the MATLAB original used prctile(c(:), 95))
        p95_cG = np.percentile(cG, 95)
        p95_cE = np.percentile(cE, 95)
        p95_cB = np.percentile(cB, 95)
        rGE = p95_cG / p95_cE
        rBE = p95_cB / p95_cE
        # a pixel is marked specular when all three channel tests pass
        mask = (cG > rGE * T1) & (cB > rBE * T1) & (cE > T1)
        img_new = (mask.astype(np.float64) * 255).reshape(w, h, 1)
        return img_new
    
    
    module1_specular_mask = calc_module1_specular_mask(cE, cG, cB, T1=240)
    cv2.imwrite('../data/New/8-model1.jpg',module1_specular_mask)
    
    
    # cv2.imshow('s',img)
    
    
    def calc_centroid_color_info(specular_mask_T2_abs, img):
        # dilate the mask twice and take the ring between the two dilations;
        # its pixels sample the surrounding (non-specular) color of each region
        kernel = np.ones((4, 4), np.uint8)
        dilated_mask_1 = cv2.dilate(specular_mask_T2_abs, kernel, iterations=1)
        kernel = np.ones((6, 6), np.uint8)
        dilated_mask_2 = cv2.dilate(specular_mask_T2_abs, kernel, iterations=1)
        centroid_color_area = dilated_mask_2 - dilated_mask_1
        labeled_area = measure.label(centroid_color_area)  # connected-component labeling
        num_region = labeled_area.max()
        centroid_color_info = []
        for i in range(1, num_region + 1):
            info = {}
            [row_index, col_index] = np.where(labeled_area == i)
            info['centroid_row'] = np.mean(row_index)
            info['centroid_col'] = np.mean(col_index)
            # average per channel, not over the whole pixel block
            info['centroid_color'] = np.mean(img[row_index, col_index, :], axis=0)
            centroid_color_info.append(info)
    
        return centroid_color_info
    
    
    def calc_distance(x, y, x1, y1):
        distance_to_centroid = np.sqrt((x - x1) ** 2 + (y - y1) ** 2)
        return distance_to_centroid
    
    
    def find_the_nearest_region(centroid_color_info, pixel_row, pixel_col):
        num_region = len(centroid_color_info)
        nearest_region_index = 0
        nearest_distance = 1e6
        for j in range(num_region):  # start at 0 so the first region is considered too
            distance_to_centroid = calc_distance(pixel_row, pixel_col, centroid_color_info[j]['centroid_row'],
                                                 centroid_color_info[j]['centroid_col'])
            if distance_to_centroid < nearest_distance:
                nearest_distance = distance_to_centroid
                nearest_region_index = j
    
        nearest_region = centroid_color_info[nearest_region_index]
        return nearest_region
    
    
    def filling_image_using_centroid_color(specular_mask_T2_abs, img):
        # fill possible specular highlights with the centroid color of the nearest region
        centroid_color_info = calc_centroid_color_info(specular_mask_T2_abs, img)
        mask_2d = specular_mask_T2_abs.reshape(specular_mask_T2_abs.shape[0], specular_mask_T2_abs.shape[1])
        [row_index, col_index] = np.where(mask_2d > 0)
        filled_img = img.copy()  # copy so the input image is not overwritten in place
        for i in range(len(row_index)):
            # look up the nearest centroid color for every specular point and fill it
            nearest_region = find_the_nearest_region(centroid_color_info, row_index[i], col_index[i])
            filled_img[row_index[i], col_index[i], :] = nearest_region['centroid_color']
        return filled_img
    
    
    def contrast_coeffcient(c):
        # weighting coefficient: mean / (mean + std) of the channel
        mean_c = np.mean(c)
        std_c = np.std(c)
        return mean_c / (mean_c + std_c)
    
    
    def calc_modul2_specular_mask(filled_img, T2_rel, cR, cG, cB):
        # median-filter each channel to estimate the specular-free background
        fR = cv2.medianBlur(filled_img[:, :, 0], 31).astype(np.float64)
        fG = cv2.medianBlur(filled_img[:, :, 1], 31).astype(np.float64)
        fB = cv2.medianBlur(filled_img[:, :, 2], 31).astype(np.float64)
    
        # guard against division by zero (MATLAB original: f(f < eps) = 1e7);
        # the float conversion above is needed, since a uint8 array cannot hold 1e7
        eps = 2.2204e-16
        fR[fR < eps] = 1e7
        fG[fG < eps] = 1e7
        fB[fB < eps] = 1e7
    
        tR = contrast_coeffcient(cR)
        tG = contrast_coeffcient(cG)
        tB = contrast_coeffcient(cB)
    
        # relative response of each channel against its filtered background
        max_img = np.stack(((tR * cR / fR), (tG * cG / fG), (tB * cB / fB)), axis=2)
        e_max = np.amax(max_img, 2)
        module2_specular_mask = e_max > T2_rel
        return module2_specular_mask
    
    
    # module 2
    specular_mask_T2_abs = calc_module1_specular_mask(cE, cG, cB, T1=190)
    
    filled_img = filling_image_using_centroid_color(specular_mask_T2_abs, img)
    module2_specular_mask = calc_modul2_specular_mask(filled_img, T2_rel=1.2, cR=cR, cG=cG, cB=cB)
    
    # final mask: union of the module 1 and module 2 masks
    final_mask = np.zeros((w, h, 1))
    final_mask[module2_specular_mask | (module1_specular_mask[:, :, 0] == 255)] = 255
    
    N_min = 5000
    T3 = 5
    
    
    def postprocessing(final_mask, cE, N_min, T3):
        # cE and T3 come from the paper's interface but are unused in this simplified version
        kernel = np.ones((3, 3), np.uint8)
        final_mask = cv2.erode(final_mask, kernel, iterations=1)
        labeled_area = measure.label(final_mask)
        num_region = np.max(labeled_area)
        post_specular_mask = final_mask
        for i in range(1, num_region + 1):
            index = np.where(labeled_area == i)
            # np.where returns a tuple of index arrays, so the pixel count is len(index[0]);
            # regions with too many pixels are unlikely to be highlights and are dropped
            if len(index[0]) >= N_min:
                post_specular_mask[index] = 0
    
        return post_specular_mask
    
    
    mask = postprocessing(final_mask, cE, N_min=3000, T3=5)
    
    mg_gray = cv2.imread('../data/fazhi/8.jpg', cv2.IMREAD_GRAYSCALE)
    # Otsu: treat the image as two gray-level classes (object and background)
    # and pick the threshold that best separates them
    thresh = filters.threshold_otsu(mg_gray)
    
    # segment by the threshold
    TTTT = np.zeros((w, h))
    dst = (mg_gray >= thresh) * 255.0
    
    # intersection of the specular mask and the Otsu foreground
    TTTT[(mask.reshape(w, h) > 0) & (dst > 0)] = 255
    
    
    ## fixed threshold
    
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    
    blurred = cv2.GaussianBlur(gray, (11, 11), 0)
    
    # threshold the blurred image to reveal bright regions
    th = cv2.threshold(blurred, 200, 255, cv2.THRESH_BINARY)[1]
    
    # union with the mask computed above
    TTTT[th > 0] = 255
    
    
    
    
    image2 = np.concatenate([TTTT, mask.reshape(w, h), dst], axis=1)
    plt.set_cmap("binary")
    plt.imshow(image2)
    plt.show()
    cv2.imwrite('../data/New/8_mask+uzhi.jpg', TTTT)
    
  • Glasses glare detection

    2019-08-07 15:40:29

    Reference article: https://blog.csdn.net/weiwei9363/article/details/85046877#_45

    First, we can collect a batch of data and use it to train a network model.
    
    Test code (the .h5 model can be downloaded from CSDN):

    
    import os
    
    os.environ['CUDA_VISIBLE_DEVICES'] = '1'
    
    import numpy as np
    import matplotlib.pyplot as plt
    from skimage import io
    from keras.models import load_model
    
    model = load_model('FCN_baseline.h5')
    
    img = io.imread('./CVC-612/bbdd_png/7.jpg')
    img = img.astype('float') / 255.0
    img = np.expand_dims(img, axis=0)
    
    specular_mask = model.predict(img)
    # binarize the predicted probability map at a fixed threshold
    th = 0.6
    specular_mask[specular_mask > th] = 1.0
    specular_mask[specular_mask <= th] = 0
    
    plt.subplot(1, 2, 1)
    plt.imshow(img[0, :, :, :])
    plt.subplot(1, 2, 2)
    plt.imshow(specular_mask[0, :, :, 0], cmap='gray')
    plt.show()

    Result:

    The result is decent, but inference is fairly slow.

    Part two: once the reflective regions have been detected, we can remove them. The code, starting from the main function, is as follows:

    #include <iostream>
    #include <map>
    #include <opencv2/opencv.hpp>
    #include "FastDigitalImageInpainting.hpp"
    
    std::map<std::string, std::string> path =
    {
    	{"Image", "G:/反光/Fast-Digital-Image-Inpainting-master/data/image.png"},
    	{"Mask", "G:/反光/Fast-Digital-Image-Inpainting-master/data/mask.png"},
    	{"Output", "G:/反光/Fast-Digital-Image-Inpainting-master/data/inpaint.png"}
    };
    
    int main()
    {
    	cv::Mat src = cv::imread(path["Image"]);
    	cv::Mat mask = cv::imread(path["Mask"]);
    	cv::Mat res;
    
    	cv::imshow("src", src);
    	cv::imshow("mask", mask);
    	cv::waitKey(1);
    
    	std::cout << "inpainting...";
    	inpaint(src, mask, K, res, 500);
    	std::cout << " done!" << std::endl;
    
    	cv::imwrite(path["Output"], res);
    	cv::imshow("Convolutional Inpainting (Result)", res);
    	cv::waitKey();
    }

    Header file FastDigitalImageInpainting.hpp:

    #pragma once
    
    #include <opencv2/opencv.hpp>
    
    static const float a(0.073235f);
    static const float b(0.176765f);
    static const cv::Mat K = (cv::Mat_<float>(3, 3) << a, b, a, b, 0.0f, b, a, b, a);
    
    void inpaint(const cv::Mat &src, const cv::Mat &mask, const cv::Mat kernel, cv::Mat &dst, int maxNumOfIter = 100)
    {
    	assert(src.type() == mask.type() && mask.type() == CV_8UC3);
    	assert(src.size() == mask.size());
    	assert(kernel.type() == CV_32F);
    
    	// fill in the missing region with the input's average color
    	auto avgColor = cv::sum(src) / (src.cols * src.rows);
    	cv::Mat avgColorMat(1, 1, CV_8UC3);
    	avgColorMat.at<cv::Vec3b>(0, 0) = cv::Vec3b(avgColor[0], avgColor[1], avgColor[2]);
    	cv::resize(avgColorMat, avgColorMat, src.size(), 0.0, 0.0, cv::INTER_NEAREST);
    	cv::Mat result = (mask / 255).mul(src) + (1 - mask / 255).mul(avgColorMat);
    
    	// convolution
    	int bSize = kernel.cols / 2;
    	cv::Mat kernel3ch, inWithBorder;
    	result.convertTo(result, CV_32FC3);
    	cv::cvtColor(kernel, kernel3ch, cv::COLOR_GRAY2BGR);
    
    	cv::copyMakeBorder(result, inWithBorder, bSize, bSize, bSize, bSize, cv::BORDER_REPLICATE);
    
    	const int ch = result.channels();
    	for (int itr = 0; itr < maxNumOfIter; ++itr)
    	{
    		cv::copyMakeBorder(result, inWithBorder, bSize, bSize, bSize, bSize, cv::BORDER_REPLICATE);
    
    		for (int r = 0; r < result.rows; ++r)
    		{
    			const uchar *pMask = mask.ptr(r);
    			float *pRes = result.ptr<float>(r);
    			for (int c = 0; c < result.cols; ++c)
    			{
    				if (pMask[ch * c] == 0)
    				{
    				cv::Rect rectRoi(c, r, kernel.cols, kernel.rows);
    					cv::Mat roi(inWithBorder, rectRoi);
    
    					auto sum = cv::sum(kernel3ch.mul(roi));
    					pRes[ch * c + 0] = sum[0];
    					pRes[ch * c + 1] = sum[1];
    					pRes[ch * c + 2] = sum[2];
    				}
    			}
    		}
    
    		// for debugging
    		cv::imshow("Inpainting...", result / 255.0f);
    		cv::waitKey(1);
    	}
    
    	result.convertTo(dst, CV_8UC3);
    }
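
    The convolution loop in `inpaint` above can be sketched in NumPy for a single-channel image. This is an illustrative re-implementation, not code from the project; `fast_inpaint` is a name made up here, and like the C++ version it seeds the hole with the image average and repeatedly replaces missing pixels with the kernel-weighted average of their 8 neighbours.

    ```python
    import numpy as np

    a, b = 0.073235, 0.176765
    K = np.array([[a, b, a],
                  [b, 0.0, b],
                  [a, b, a]])  # the same kernel as above; its weights sum to 1

    def fast_inpaint(img, missing, max_iter=100):
        """Iteratively replace `missing` pixels with the K-weighted
        average of their 8 neighbours; known pixels stay fixed."""
        res = img.astype(float).copy()
        res[missing] = res[~missing].mean()  # seed holes with the average value
        h, w = res.shape
        for _ in range(max_iter):
            padded = np.pad(res, 1, mode='edge')
            conv = sum(K[di, dj] * padded[di:di + h, dj:dj + w]
                       for di in range(3) for dj in range(3))
            res[missing] = conv[missing]  # only the hole is updated
        return res

    # a horizontal ramp with one missing pixel in the middle
    img = np.tile(np.arange(5.0), (5, 1))
    missing = np.zeros((5, 5), dtype=bool)
    missing[2, 2] = True
    out = fast_inpaint(img, missing)
    ```

    Because the hole's neighbours are symmetric around the value 2.0, the iteration settles on that value at the hole, while all known pixels are left untouched.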

    The inpainting results are as follows (screenshots omitted).

  • As an indispensable part of manufacturing, metal workpieces suffer surface flaws that affect not only appearance but also performance and product safety. Because these workpieces are smooth and highly reflective, feature extraction during inspection is affected, and both manual and machine inspection ...

     

    Metal workpieces

    Metal workpieces are an indispensable part of manufacturing. Surface defects not only spoil their appearance but also impair a workpiece's performance and reduce product safety. Because these parts have smooth, highly reflective surfaces, reflections interfere with feature extraction during inspection, which makes both manual and machine inspection very difficult.

    To solve the problem of surface-defect inspection for highly reflective metal workpieces, 维视智造 drew on the large inspection resource library of its AI VisionLab open vision laboratory to quickly produce a vision solution, which was then evaluated and verified.

    Unlike traditional vision algorithms, the optimized algorithm effectively handles the strong reflections that appear during image acquisition, while also recognizing defect types such as scratches, cracks, and pits. It offers higher accuracy, provides data for subsequent defect inspection, and improves production efficiency.

    Case study: surface-defect inspection of highly reflective metal workpieces

    Project Requirements

    Metal bar stock

    The customer's product is titanium-alloy bar stock, 215 mm long and about 10 mm in diameter. Surface cracks, pits, gouges, oxide scale, and similar defects must be detected while the bars travel through the system in a straight line at more than 6 m/min.

    The surface defects of cylindrical metal parts are randomly and diversely distributed, and the surface texture of the metal is irregular, which causes interference during defect detection: the workpiece images contain a lot of highlight noise, many false target defects are extracted, and false detections result.

    To address this series of problems, 维视智造 designed a dedicated lighting system and illumination scheme that eliminates the noise caused by strong reflections. The system's lighting design fully resolves reflections from the metal surface, so even tiny defects that are hard to see with the naked eye can be detected easily.

    Product Testing and Evaluation

    Surface oxidation: detecting surface-oxidation defects

    Pit defects: detecting surface pit defects

    Because industrial sites are complex, any small change can require reworking the whole project, leading to long project cycles and high implementation costs. Based on the site conditions, 维视智造 built a well-designed, stable, safe, and reliable inspection system for highly reflective metal workpieces.

    The system uses a more robust intelligent vision algorithm that keeps every defect type reliably detectable even under small changes in illumination.

    Intelligent vision algorithm

    The system runs the VisionBank SVS intelligent vision software, whose interface is simple to operate and easy to master. Its built-in linear-defect detection module and adaptive defect detection module accurately cover the surface-inspection requirements of cylindrical metal parts.

    The 维视智造 inspection system for highly reflective metal workpieces can be applied throughout the manufacturing of cable, steel strip, film, glass, paper, aluminum strip, aluminum foil, copper foil, nonwovens, and more, performing all-around inspection during production to guarantee the quality of shipped products and to improve quality and efficiency.

    With nearly 20 years of experience in machine vision, 维视智造 provides cutting-edge vision technology and solutions for manufacturing, helping enterprises upgrade intelligently.

  • Reflective-clothing detection and dataset for yolov5; construction-personnel wearing detection with yolov5. Author: leilei. yolov5 detect QQ group (full): 980489677; second yolov5 detection QQ group: 710514100. For the dataset download links, see the notes! Demo, data labels ...
  • Workwear (reflective vest) detection dataset and yolov4/5 detection models. Contents: 0. Abstract; 1. Open-source GitHub link; 2. Dataset details; 3. Dataset expansion scheme. 0. Abstract: this post open-sources a workwear (reflective vest) detection dataset (with annotations) and pretrained models; this ...

    Hard hat and workwear (reflective vest) detection dataset and yolov4/5 detection models

    Contents

    0. Abstract

    1. Open-source GitHub link

    2. Dataset details

    3. Workwear (reflective vest) dataset expansion scheme

    4. Test results

    5. Notes


    0. Abstract

    This post open-sources a workwear (reflective vest) detection dataset (with annotations) and pretrained models. The project has been uploaded to GitHub; stars are welcome.

    Workwear (reflective vest) and hard-hat detection (practical object detection). QQ group: 980489677; second QQ group: 710514100

    CVAT annotation tutorial: https://blog.csdn.net/LEILEI18A/article/details/113385510

    This project can be used to detect personnel in construction or hazardous areas!

    1. Open-source GitHub link

    https://github.com/gengyanlei/reflective-clothes-detect

    The released model is a yolov5s model.

    2. Dataset details

    The released workwear (reflective vest) dataset contains 1083 images (with xml annotations), covering different vest colors and scenes such as police, sanitation workers, construction sites, and life vests at the seaside.

    3. Workwear (reflective vest) dataset expansion scheme

    (1) Train yolov4 on the 1083 reflective-vest images to obtain a vest detection model;

    (2) Use the model from (1) to add extra classes to the SHWD dataset, giving 4 classes of annotations (2 hard-hat classes and 2 reflective-vest classes, with whole-person boxes provided by a COCO model);

    (3) Train on the expanded SHWD dataset.

    Note:

    Both yolov4 and yolov5 can train on images with empty label files.
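
    Steps (1)–(3) above are essentially a pseudo-labelling loop. A minimal sketch follows; the detector below is a hypothetical stand-in, not the project's actual yolov4 interface, and the file names are made up:

    ```python
    def detect_vests(image):
        # hypothetical stand-in for the yolov4 vest model trained in step (1);
        # returns (class_name, x, y, w, h, confidence) tuples
        return [("reflective_vest", 0.5, 0.5, 0.2, 0.4, 0.9),
                ("reflective_vest", 0.1, 0.1, 0.1, 0.2, 0.3)]

    def pseudo_label(images, conf_thresh=0.5):
        """Run the vest detector over unlabelled SHWD images and keep only
        confident boxes as new annotations. Images with no confident boxes
        keep an empty label list -- as noted above, both yolov4 and yolov5
        can train on images with empty labels."""
        labels = {}
        for name, image in images.items():
            labels[name] = [box for box in detect_vests(image) if box[5] >= conf_thresh]
        return labels

    labels = pseudo_label({"shwd_0001.jpg": None})
    ```

    The low-confidence box is dropped, so only the confident detection is kept as a pseudo-label; in the real pipeline these boxes would be merged with the existing SHWD hard-hat annotations before retraining.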

    4. Test results

    Latest 5-class detection results (screenshots omitted)

    5. Notes

    (1) This dataset is for academic research only!

  • Proposes a workpiece defect detection algorithm based on improved sub-pixel edge extraction using Zernike moments. The image is decomposed with wavelets and each frequency band is pre-processed with a different algorithm; after reconstruction this effectively removes image noise and enhances the target information. The improved Zernike-moment sub-pixel ...
  • Construction-site hard-hat and reflective-vest wearing detection. 1. Introduction: as construction-site management becomes more information-driven and worker safety management stricter, automated aids are needed for worker safety management. Real-name gates can identify workers and at the same time check whether they are wearing hard hats ...
  • 7 openVINO reflective vest and hard-hat detection

    2019-06-28 11:26:18
    Source: https://github.com/intel-iot-devkit/safety-gear-detector-python If downloading from GitHub is slow, the demo program, models, and video file can be fetched from this link ... The following is machine-translated ... Figure 1: able to detect people and whether they are wearing ...
  • A survey of papers on specular highlight removal in endoscopy

    2019-05-24 16:58:50
    Contents include "Detection and correction of specular reflections for automatic surgical tool segmentation in thoracoscopic images" (highlight detection, highlight repair, experimental results) and "Automatic detection and ..."
  • Click "3D视觉工坊" above and choose "Star" to get the best content first. Inspecting shiny, reflective parts for defects requires automated imaging systems with novel lighting and imaging techniques. Parts such as automotive accessories and bathroom fittings are usually shiny and chrome-plated. To inspect these parts, ...
  • A change-detection algorithm robust to illumination, shadows, and reflections, with an implementation: http://www.cqvip.com/Main/Detail.aspx?id=8293930
  • Based on a referenced blog post on mask recognition. After labeling your own dataset with an annotation tool, convert the dataset formats: VOC XML to CSV, then CSV to the record format (labels and images combined) required by TensorFlow. XML-CSV: netdisk code vln2; CSV-record: provided in the source ...
  • Python source code for this demo, the person-detection-retail-0013 model, a sample video file, and a related article: https://mp.csdn.net/postedit/93968520
  • A smart-site reflective-vest recognition system monitors whether personnel in construction areas are wearing reflective vests; when someone without one is detected, it immediately raises an alarm to notify safety supervisors. Reflective vests are warning garments made of fluorescent fabric plus reflective tape; the fluorescent fabric ...
  • Many people ask what reflection recognition and reading recognition mean. Reflection recognition rapidly switches different colors on the screen and, according to ... captures images or video streams containing faces with a camera and automatically detects faces in the images ...
  • While working on reflection detection recently, VGG gave good results but I could not get it running from C++, so I trained SSD-MobileNet instead. My system: Windows 10, Python 3.6. 1. Download the models repo from GitHub: https://github.com/tensorflow/models ...
