2020-01-02 22:28:56 jackzhang11

In image processing and computer vision we often need to preprocess the raw image. Filtering is one of the most common techniques: by sliding a filter over the image and convolving it with each local neighborhood, every output pixel is computed from the values of its neighbors, which lets us emphasize certain features or suppress unwanted components. The most common operators fall into two groups, smoothing (denoising) filters and edge-detection filters, and both are covered below.

Smoothing Filters

1. Gaussian Filtering

The Gaussian filter is a smoothing filter used to remove noise.

It smooths the image by replacing each pixel with a weighted average of the surrounding pixels, with weights that follow a Gaussian distribution. Such a 2D weight array is usually called a convolution kernel or filter.

However, the image width and height are generally not integer multiples of the filter size, and we want the output image to have the same dimensions as the input. We therefore pad the image border with zeros (how many depends on the relative sizes of the filter and the image), a technique called zero padding. The kernel weights $g$ must also be normalized so that $\sum g = 1$.

The weights are computed from the Gaussian distribution: $g(x,y,\sigma)=\frac{1}{2\pi\sigma^2}\, e^{-\frac{x^2+y^2}{2\sigma^2}}$

Here the coordinates $x$ and $y$ are measured from the center of the filter. For example, the cell one step up and one step right of the center has coordinates (1, -1).

With standard deviation $\sigma=1.3$, the 8-neighbor Gaussian filter is approximated by: $K=\frac{1}{16}\begin{bmatrix} 1 & 2 & 1 \\ 2 & 4 & 2 \\ 1 & 2 & 1 \end{bmatrix}$
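
As an illustration (an added NumPy sketch, not part of the original post), the kernel can be built directly from the formula above, normalized, and applied with zero padding; the input is assumed to be a single-channel uint8 array:

import numpy as np

def gaussian_kernel(ksize=3, sigma=1.3):
    # Sample g(x, y, sigma) on a ksize x ksize grid centered on the filter.
    ax = np.arange(ksize) - ksize // 2
    xx, yy = np.meshgrid(ax, ax)
    g = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2)) / (2.0 * np.pi * sigma ** 2)
    return g / g.sum()  # normalize so that sum(g) == 1

def convolve_zero_pad(img, kernel):
    # Zero padding keeps the output the same size as the input.
    k = kernel.shape[0] // 2
    padded = np.pad(img.astype(np.float64), k, mode='constant')
    out = np.zeros(img.shape, dtype=np.float64)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            out[y, x] = (padded[y:y + kernel.shape[0], x:x + kernel.shape[1]] * kernel).sum()
    return np.clip(out, 0, 255).astype(np.uint8)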

2. Median Filtering

The median filter is another smoothing filter. It removes a certain amount of noise, at the cost of blurring fine image detail. It outputs the median of the pixels inside the filter window (assumed here to be $3\times3$). Zero padding is again used so that the output has the same size as the input.

3. Mean Filtering

Like the median filter, the mean filter is used for image denoising. The only difference is that it outputs the mean of the pixels inside the filter window rather than the median, as the comparison below shows.
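
To see the difference in practice, here is a small self-contained comparison (an added sketch using OpenCV's built-in functions) on synthetic salt-and-pepper noise:

import cv2
import numpy as np

# Synthetic gray image corrupted with salt-and-pepper noise.
img = np.full((64, 64), 128, np.uint8)
rng = np.random.default_rng(0)
mask = rng.random(img.shape)
img[mask < 0.05] = 0    # pepper
img[mask > 0.95] = 255  # salt

mean_out = cv2.blur(img, (3, 3))     # mean of each 3x3 window
median_out = cv2.medianBlur(img, 3)  # median of each 3x3 window

# The median restores most pixels to 128 exactly; the mean smears every
# outlier across its whole neighborhood.
print((median_out == 128).mean(), (mean_out == 128).mean())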

Edge-Detection Filters

1. Sobel Filter

The Sobel filter extracts edges in a specific direction (vertical or horizontal). It is defined as follows:

Horizontal Sobel operator: $K=\begin{bmatrix} 1 & 2 & 1 \\ 0 & 0 & 0 \\ -1 & -2 & -1 \end{bmatrix}$  Vertical Sobel operator: $K=\begin{bmatrix} 1 & 0 & -1 \\ 2 & 0 & -2 \\ 1 & 0 & -1 \end{bmatrix}$

The Sobel operator approximates the gradient between neighboring pixels. When the filter slides over a flat background region, the convolution result is very small; when it crosses the boundary between background and foreground, the result becomes large. This is why it extracts edge information well.
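
For illustration (an added Python sketch; 'input.png' is a hypothetical file), the two kernels above can be applied with cv2.filter2D and combined into a gradient magnitude:

import cv2
import numpy as np

gray = cv2.imread('input.png', cv2.IMREAD_GRAYSCALE)  # hypothetical input file

k_h = np.array([[ 1,  2,  1],
                [ 0,  0,  0],
                [-1, -2, -1]], np.float32)  # horizontal operator from above
k_v = k_h.T                                 # vertical operator from above

gy = cv2.filter2D(gray, cv2.CV_32F, k_h)    # strong response at horizontal edges
gx = cv2.filter2D(gray, cv2.CV_32F, k_v)    # strong response at vertical edges
edges = cv2.convertScaleAbs(cv2.magnitude(gx, gy))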

2. Prewitt Filter

The Prewitt filter is another edge-detection filter, defined as:

Horizontal Prewitt operator: $K=\begin{bmatrix} -1 & -1 & -1 \\ 0 & 0 & 0 \\ 1 & 1 & 1 \end{bmatrix}$  Vertical Prewitt operator: $K=\begin{bmatrix} -1 & 0 & 1 \\ -1 & 0 & 1 \\ -1 & 0 & 1 \end{bmatrix}$

The difference from Sobel is that the Sobel operator weights the neighbors: the pixels directly above and below (or directly left and right of) the center get weight 2 because they are closer to the center, while the diagonal neighbors get weight 1. Prewitt has no such weighting. Overall, Sobel can be seen as a refinement of Prewitt and usually performs somewhat better.

3. Laplacian Filter

Unlike Sobel and Prewitt, which are first-derivative filters, the Laplacian filter detects edges by taking the second derivative of image intensity. Since a digital image is discrete, the first derivatives in the $x$ and $y$ directions are $$I_x(x,y)=\frac{I(x+1,y)-I(x,y)}{(x+1)-x}=I(x+1,y)-I(x,y)$$ $$I_y(x,y)=\frac{I(x,y+1)-I(x,y)}{(y+1)-y}=I(x,y+1)-I(x,y)$$ so the second derivative in $x$ is $$I_{xx}(x,y)=I_x(x,y)-I_x(x-1,y)=[I(x+1,y)-I(x,y)]-[I(x,y)-I(x-1,y)]=I(x+1,y)-2I(x,y)+I(x-1,y)$$ and likewise $$I_{yy}(x,y)=I(x,y+1)-2I(x,y)+I(x,y-1).$$ The Laplacian is therefore $$\nabla^2 I(x,y)=I_{xx}(x,y)+I_{yy}(x,y)=I(x-1,y)+I(x,y-1)-4I(x,y)+I(x+1,y)+I(x,y+1),$$ which written as a convolution kernel is $$K=\begin{bmatrix} 0 & 1 & 0 \\ 1 & -4 & 1 \\ 0 & 1 & 0 \end{bmatrix}$$
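
To verify the kernel in code (an added Python sketch; the file name is hypothetical), applying it with filter2D matches OpenCV's own Laplacian with ksize=1:

import cv2
import numpy as np

gray = cv2.imread('input.png', cv2.IMREAD_GRAYSCALE)  # hypothetical input file

lap = np.array([[0,  1, 0],
                [1, -4, 1],
                [0,  1, 0]], np.float32)

by_kernel = cv2.filter2D(gray, cv2.CV_32F, lap)
by_builtin = cv2.Laplacian(gray, cv2.CV_32F, ksize=1)  # uses the same 3x3 aperture
print(np.allclose(by_kernel, by_builtin))  # expected: True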

Reference: https://github.com/gzr2017/ImageProcessing100Wen

2019-04-26 10:50:55 u013307195

Filters fall into two main classes: linear and nonlinear.

Linear filters:

A linear filter computes a weighted sum of the pixels inside a sliding window. If the same set of weights is applied at every window position, the filter is spatially invariant, and the filtering can be implemented as convolution with a template; if different parts of the image use different weights, the filter is spatially variant. Linear filters are very effective at removing Gaussian noise. Common linear filters are the mean filter and the Gaussian smoothing filter.

(1) Mean filter:
The simplest mean filter is a local averaging operation: each pixel is replaced by the average of all the values in its local neighborhood.

(2) Gaussian smoothing filter:
A linear filter whose weights are chosen according to the shape of the Gaussian function. It is very effective at removing noise that follows a normal distribution.

Nonlinear filters:

(1) Median filter:
The main problem with mean and Gaussian filtering is that they can blur sharp discontinuities in the image. The basic idea of the median filter is to replace each pixel's gray value with the median of the gray values in its neighborhood. It removes impulse and salt-and-pepper noise while preserving edge detail, because it does not depend on neighborhood values that differ greatly from the typical value and involves no weighted averaging. Under certain conditions median filtering avoids the detail blurring caused by linear filters, and it is very effective at suppressing impulse interference.
(2) Edge-preserving filter:
Mean filtering can blur image edges while smoothing, and median filtering can remove fine line detail along with the impulse noise. The edge-preserving filter was developed after weighing the pros and cons of the mean and median filters.
Property: it removes noise impulses without making the image edges overly blurred.
Procedure: compute the gray-level uniformity $V$ of the upper-left, lower-left, upper-right, and lower-right sub-neighborhoods of pixel [i, j]; then output the mean of the region with the smallest $V$ (the more uniform the distribution, the smaller $V$), where $V=\sum\big(f(x,y)-\bar{f}\big)^2$ and $\bar{f}$ is the mean gray level of the sub-neighborhood, as sketched below.
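
This procedure is essentially a Kuwahara-style filter; the following minimal NumPy sketch (an addition for illustration) assumes a single-channel image and leaves border pixels unfiltered:

import numpy as np

def edge_preserving_filter(img, r=2):
    # For each pixel, measure the uniformity V of the four corner
    # sub-neighborhoods and output the mean of the most uniform one.
    img = img.astype(np.float64)
    out = img.copy()
    h, w = img.shape
    for y in range(r, h - r):
        for x in range(r, w - r):
            regions = [img[y - r:y + 1, x - r:x + 1],  # upper-left
                       img[y - r:y + 1, x:x + r + 1],  # upper-right
                       img[y:y + r + 1, x - r:x + 1],  # lower-left
                       img[y:y + r + 1, x:x + r + 1]]  # lower-right
            V = [((reg - reg.mean()) ** 2).sum() for reg in regions]
            out[y, x] = regions[int(np.argmin(V))].mean()
    return out.astype(np.uint8)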

2019-03-05 18:19:13 hhaowang

Table of Contents

1. Commonly used filters:

bilateralFilter() bilateral filter

blur() smoothing filter

boxFilter() box smoothing filter

filter2D() image convolution

GaussianBlur() Gaussian smoothing

medianBlur() median filtering (denoising/smoothing)

Laplacian() Laplacian (second-order) derivative operator

Sobel() first-, second-, and higher-order derivative operators

2. Filter parameters and usage

bilateralFilter()

blur()

boxFilter()

filter2D()

GaussianBlur()

getDerivKernels()

getGaborKernel()

getGaussianKernel()

Laplacian()

medianBlur()

Scharr()

Sobel()

Function list

For color-space conversion, see: https://blog.csdn.net/keith_bb/article/details/53470170


1. Commonly used filters:

  • bilateralFilter() bilateral filter

  • blur() smoothing filter

  • boxFilter() box smoothing filter

  • filter2D() image convolution

  • GaussianBlur() Gaussian smoothing

  • medianBlur() median filtering (denoising/smoothing)

  • Laplacian() Laplacian (second-order) derivative operator

  • Sobel() first-, second-, and higher-order derivative operators

2. Filter parameters and usage

bilateralFilter()

void cv::bilateralFilter ( InputArray  src,
    OutputArray  dst,
    int  d,
    double  sigmaColor,
    double  sigmaSpace,
    int  borderType = BORDER_DEFAULT 
  )    
Python:
  dst = cv.bilateralFilter( src, d, sigmaColor, sigmaSpace[, dst[, borderType]] )

This function applies bilateral filtering to the input image, as described in http://www.dai.ed.ac.uk/CVonline/LOCAL_COPIES/MANDUCHI1/Bilateral_Filtering.html. bilateralFilter can reduce unwanted noise very well while keeping edges fairly sharp. However, it is very slow compared to most other filters.

Sigma values: for simplicity, you can set the two sigma values to be the same. If they are small (< 10), the filter will not have much effect; if they are large (> 150), the effect will be very strong, making the image look "cartoonish".

Filter size: large filters (d > 5) are very slow, so d = 5 is recommended for real-time applications, and perhaps d = 9 for offline applications that need heavy noise filtering.

This filter does not work in-place.

Parameters

src  Source 8-bit or floating-point, 1-channel or 3-channel image.
dst  Destination image of the same size and type as src.
d  Diameter of each pixel neighborhood used during filtering. If it is non-positive, it is computed from sigmaSpace.
sigmaColor  Filter sigma in the color space. A larger value means that farther colors within the pixel neighborhood (see sigmaSpace) will be mixed together, resulting in larger areas of semi-equal color.
sigmaSpace  Filter sigma in the coordinate space. A larger value means that farther pixels influence each other as long as their colors are close enough (see sigmaColor). When d > 0, it specifies the neighborhood size regardless of sigmaSpace; otherwise, d is proportional to sigmaSpace.
borderType  Border mode used to extrapolate pixels outside of the image.

Example

#include <iostream>
#include "opencv2/imgproc.hpp"
#include "opencv2/imgcodecs.hpp"
#include "opencv2/highgui.hpp"
using namespace std;
using namespace cv;
int DELAY_CAPTION = 1500;
int DELAY_BLUR = 100;
int MAX_KERNEL_LENGTH = 31;
Mat src; Mat dst;
char window_name[] = "Smoothing Demo";
int display_caption( const char* caption );
int display_dst( int delay );
int main( int argc, char ** argv )
{
    namedWindow( window_name, WINDOW_AUTOSIZE );
    const char* filename = argc >=2 ? argv[1] : "../data/lena.jpg";
    src = imread( filename, IMREAD_COLOR );
    if(src.empty())
    {
        printf(" Error opening image\n");
        printf(" Usage: ./Smoothing [image_name -- default ../data/lena.jpg] \n");
        return -1;
    }
    if( display_caption( "Original Image" ) != 0 )
    {
        return 0;
    }
    dst = src.clone();
    if( display_dst( DELAY_CAPTION ) != 0 )
    {
        return 0;
    }
    if( display_caption( "Homogeneous Blur" ) != 0 )
    {
        return 0;
    }
    for ( int i = 1; i < MAX_KERNEL_LENGTH; i = i + 2 )
    {
        blur( src, dst, Size( i, i ), Point(-1,-1) );
        if( display_dst( DELAY_BLUR ) != 0 )
        {
            return 0;
        }
    }
    if( display_caption( "Gaussian Blur" ) != 0 )
    {
        return 0;
    }
    for ( int i = 1; i < MAX_KERNEL_LENGTH; i = i + 2 )
    {
        GaussianBlur( src, dst, Size( i, i ), 0, 0 );
        if( display_dst( DELAY_BLUR ) != 0 )
        {
            return 0;
        }
    }
    if( display_caption( "Median Blur" ) != 0 )
    {
        return 0;
    }
    for ( int i = 1; i < MAX_KERNEL_LENGTH; i = i + 2 )
    {
        medianBlur ( src, dst, i );
        if( display_dst( DELAY_BLUR ) != 0 )
        {
            return 0;
        }
    }
    if( display_caption( "Bilateral Blur" ) != 0 )
    {
        return 0;
    }
    for ( int i = 1; i < MAX_KERNEL_LENGTH; i = i + 2 )
    {
        bilateralFilter ( src, dst, i, i*2, i/2 );
        if( display_dst( DELAY_BLUR ) != 0 )
        {
            return 0;
        }
    }
    display_caption( "Done!" );
    return 0;
}
int display_caption( const char* caption )
{
    dst = Mat::zeros( src.size(), src.type() );
    putText( dst, caption,
             Point( src.cols/4, src.rows/2),
             FONT_HERSHEY_COMPLEX, 1, Scalar(255, 255, 255) );
    return display_dst(DELAY_CAPTION);
}
int display_dst( int delay )
{
    imshow( window_name, dst );
    int c = waitKey ( delay );
    if( c >= 0 ) { return -1; }
    return 0;
}

blur()

Blurs an image using the normalized box filter.

The function smooths an image using the kernel: $K = \frac{1}{\texttt{ksize.width}\cdot\texttt{ksize.height}}\begin{bmatrix} 1 & \cdots & 1 \\ \vdots & & \vdots \\ 1 & \cdots & 1 \end{bmatrix}$

The call blur(src, dst, ksize, anchor, borderType) is equivalent to

boxFilter(src, dst, src.type(), ksize, anchor, true, borderType).
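
A quick Python check of this equivalence (an added sketch; any test image works):

import cv2
import numpy as np

img = np.random.randint(0, 256, (32, 32), np.uint8)
a = cv2.blur(img, (5, 5))
b = cv2.boxFilter(img, -1, (5, 5), normalize=True)  # ddepth=-1 keeps the source type
print(np.array_equal(a, b))  # expected: True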

C++ example

/* This program demonstrates usage of the Canny edge detector */

/* include related packages */
#include "opencv2/core/utility.hpp"  
#include "opencv2/imgproc.hpp"
#include "opencv2/imgcodecs.hpp"
#include "opencv2/highgui.hpp"
#include <stdio.h>


using namespace cv;  // namespace cv
using namespace std;  //standard namespace std

int edgeThresh = 1;
int edgeThreshScharr=1;
cv::Mat image, gray, blurImage, edge1, edge2, cedge;
const char* window_name1 = "Edge map : Canny default (Sobel gradient)";
const char* window_name2 = "Edge map : Canny with custom gradient (Scharr)";

static void onTrackbar(int, void*)
{/* define a trackbar callback, by using onTrackbar function*/
    blur(gray, blurImage, Size(3,3));
    // Run the edge detector on grayscale
    Canny(blurImage, edge1, edgeThresh, edgeThresh*3, 3);
    cedge = Scalar::all(0);
    image.copyTo(cedge, edge1);
    imshow(window_name1, cedge);
    Mat dx,dy;
    Scharr(blurImage,dx,CV_16S,1,0);
    Scharr(blurImage,dy,CV_16S,0,1);
    Canny( dx,dy, edge2, edgeThreshScharr, edgeThreshScharr*3 );
    cedge = Scalar::all(0);
    image.copyTo(cedge, edge2);
    imshow(window_name2, cedge);
}//onTrackbar

static void help()
{ /* help and info display */
    cout<<"\nThis sample demonstrates Canny edge detection\n"
        <<"Call:\n"
        <<"    /.edge [image_name -- Default is ../data/fruits.jpg]\n"<<endl;
}//help

const char* keys =
{
    "{help h||}{@image |../data/fruits.jpg|input image name}"
};

int main( int argc, const char** argv )
{
/*  the main function */

    help();
    CommandLineParser parser(argc, argv, keys);
    string filename = parser.get<string>(0);
    image = imread(filename, IMREAD_COLOR);

    if(image.empty()) // open file check
    {
        cout<<"Cannot read image file: "
            << filename.c_str()<<endl;
        help();
        return -1;
    }
    cedge.create(image.size(), image.type());
    cvtColor(image, gray, COLOR_BGR2GRAY);
    // Create a window
    namedWindow(window_name1, 1);
    namedWindow(window_name2, 1);
    // create a toolbar
    createTrackbar("Canny threshold default", window_name1, &edgeThresh, 100, onTrackbar);
    createTrackbar("Canny threshold Scharr", window_name2, &edgeThreshScharr, 400, onTrackbar);
    // Show the image
    onTrackbar(0, 0);
    // Wait for a key stroke; the same function arranges events processing
    waitKey(0);
    return 0;
}

An example using drawContours to clean up a background segmentation result

#include "opencv2/imgproc.hpp"
#include "opencv2/videoio.hpp"
#include "opencv2/highgui.hpp"
#include "opencv2/video/background_segm.hpp"
#include <stdio.h>
#include <string>
using namespace std;
using namespace cv;
static void help()
{
    printf("\n"
            "This program demonstrated a simple method of connected components clean up of background subtraction\n"
            "When the program starts, it begins learning the background.\n"
            "You can toggle background learning on and off by hitting the space bar.\n"
            "Call\n"
            "./segment_objects [video file, else it reads camera 0]\n\n");
}
static void refineSegments(const Mat& img, Mat& mask, Mat& dst)
{
    int niters = 3;
    vector<vector<Point> > contours;
    vector<Vec4i> hierarchy;
    Mat temp;
    dilate(mask, temp, Mat(), Point(-1,-1), niters);
    erode(temp, temp, Mat(), Point(-1,-1), niters*2);
    dilate(temp, temp, Mat(), Point(-1,-1), niters);
    findContours( temp, contours, hierarchy, RETR_CCOMP, CHAIN_APPROX_SIMPLE );
    dst = Mat::zeros(img.size(), CV_8UC3);
    if( contours.size() == 0 )
        return;
    // iterate through all the top-level contours,
    // draw each connected component with its own random color
    int idx = 0, largestComp = 0;
    double maxArea = 0;
    for( ; idx >= 0; idx = hierarchy[idx][0] )
    {
        const vector<Point>& c = contours[idx];
        double area = fabs(contourArea(Mat(c)));
        if( area > maxArea )
        {
            maxArea = area;
            largestComp = idx;
        }
    }
    Scalar color( 0, 0, 255 );
    drawContours( dst, contours, largestComp, color, FILLED, LINE_8, hierarchy );
}
int main(int argc, char** argv)
{
    VideoCapture cap;
    bool update_bg_model = true;
    CommandLineParser parser(argc, argv, "{help h||}{@input||}");
    if (parser.has("help"))
    {
        help();
        return 0;
    }
    string input = parser.get<std::string>("@input");
    if (input.empty())
        cap.open(0);
    else
        cap.open(input);
    if( !cap.isOpened() )
    {
        printf("\nCan not open camera or video file\n");
        return -1;
    }
    Mat tmp_frame, bgmask, out_frame;
    cap >> tmp_frame;
    if(tmp_frame.empty())
    {
        printf("can not read data from the video source\n");
        return -1;
    }
    namedWindow("video", 1);
    namedWindow("segmented", 1);
    Ptr<BackgroundSubtractorMOG2> bgsubtractor=createBackgroundSubtractorMOG2();
    bgsubtractor->setVarThreshold(10);
    for(;;)
    {
        cap >> tmp_frame;
        if( tmp_frame.empty() )
            break;
        bgsubtractor->apply(tmp_frame, bgmask, update_bg_model ? -1 : 0);
        refineSegments(tmp_frame, bgmask, out_frame);
        imshow("video", tmp_frame);
        imshow("segmented", out_frame);
        char keycode = (char)waitKey(30);
        if( keycode == 27 )
            break;
        if( keycode == ' ' )
        {
            update_bg_model = !update_bg_model;
            printf("Learn background is in state = %d\n",update_bg_model);
        }
    }
    return 0;
}

boxFilter()

Blurs an image using the box filter.

The function smooths an image using the kernel: $K = \alpha\begin{bmatrix} 1 & \cdots & 1 \\ \vdots & & \vdots \\ 1 & \cdots & 1 \end{bmatrix}$, where $\alpha = \frac{1}{\texttt{ksize.width}\cdot\texttt{ksize.height}}$ when normalize=true and $\alpha = 1$ otherwise.

Unnormalized box filter is useful for computing various integral characteristics over each pixel neighborhood, such as covariance matrices of image derivatives (used in dense optical flow algorithms, and so on). If you need to compute pixel sums over variable-size windows, use integral.

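
For instance (an added Python sketch), the raw window sums produced by an unnormalized box filter can also be read off an integral image:

import cv2
import numpy as np

img = np.random.randint(0, 256, (32, 32), np.uint8)

# Unnormalized box filter: each output pixel is the raw sum over a 5x5 window.
sums = cv2.boxFilter(img, cv2.CV_32F, (5, 5), normalize=False)

# The same sum in O(1) per window from the integral image.
ii = cv2.integral(img)  # (h+1) x (w+1) summed-area table
y, x = 10, 10           # top-left corner of one 5x5 window
s = ii[y + 5, x + 5] - ii[y, x + 5] - ii[y + 5, x] + ii[y, x]
print(s == int(sums[y + 2, x + 2]))  # boxFilter anchors at the window center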


filter2D()

Convolves an image with the kernel.

The function applies an arbitrary linear filter to an image. In-place operation is supported. When the aperture is partially outside the image, the function interpolates outlier pixel values according to the specified border mode.

The function does actually compute correlation, not the convolution: $\texttt{dst}(x,y)=\sum_{0\le x'<\texttt{kernel.cols},\;0\le y'<\texttt{kernel.rows}}\texttt{kernel}(x',y')\cdot\texttt{src}(x+x'-\texttt{anchor.x},\;y+y'-\texttt{anchor.y})$

That is, the kernel is not mirrored around the anchor point. If you need a real convolution, flip the kernel using flip and set the new anchor to (kernel.cols - anchor.x - 1, kernel.rows - anchor.y - 1).

The function uses the DFT-based algorithm in case of sufficiently large kernels (~11 x 11 or larger) and the direct algorithm for small kernels.
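
A small Python sketch (added here) of the correlation-versus-convolution point:

import cv2
import numpy as np

img = np.random.randint(0, 256, (32, 32), np.uint8).astype(np.float32)
kernel = np.array([[0, 1, 0],
                   [0, 0, 2],
                   [0, 0, 0]], np.float32)  # deliberately asymmetric

corr = cv2.filter2D(img, -1, kernel)  # filter2D computes correlation

# True convolution: flip the kernel as the text describes. For a 3x3 kernel
# with the default center anchor, the adjusted anchor is again the center.
conv = cv2.filter2D(img, -1, cv2.flip(kernel, -1))
print(np.allclose(corr, conv))  # False: the two differ for asymmetric kernels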

 

sepFilter2D()

Applies a separable linear filter to an image.

The function applies a separable linear filter to the image. That is, first, every row of src is filtered with the 1D kernel kernelX. Then, every column of the result is filtered with the 1D kernel kernelY. The final result shifted by delta is stored in dst .

 


GaussianBlur()

Blurs an image using a Gaussian filter.

The function convolves the source image with the specified Gaussian kernel. In-place filtering is supported.

C++ example: see the smoothing demo under bilateralFilter() above; the same program exercises GaussianBlur alongside the other filters.

getDerivKernels()

Returns filter coefficients for computing spatial image derivatives.

The function computes and returns the filter coefficients for spatial image derivatives. When ksize=CV_SCHARR, the Scharr 3×3 kernels are generated (see Scharr). Otherwise, Sobel kernels are generated (see Sobel). The filters are normally passed to sepFilter2D.


getGaborKernel()

Returns Gabor filter coefficients.

For more details about gabor filter equations and parameters, see: Gabor Filter.


getGaussianKernel()

 

Returns Gaussian filter coefficients.

The function computes and returns the ksize×1 matrix of Gaussian filter coefficients: $G_i = \alpha\, e^{-(i-(\texttt{ksize}-1)/2)^2/(2\sigma^2)}$, where $i=0,\dots,\texttt{ksize}-1$ and $\alpha$ is a scale factor chosen so that $\sum_i G_i = 1$.

Two of such generated kernels can be passed to sepFilter2D. Those functions automatically recognize smoothing kernels (a symmetrical kernel with sum of weights equal to 1) and handle them accordingly. You may also use the higher-level GaussianBlur.
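
A quick Python check (an added sketch) that two generated kernels passed to sepFilter2D reproduce GaussianBlur:

import cv2
import numpy as np

img = np.random.randint(0, 256, (64, 64), np.uint8)

g = cv2.getGaussianKernel(5, 1.3)         # 5x1 column vector, coefficients sum to 1
sep = cv2.sepFilter2D(img, -1, g, g)      # rows filtered with g, then columns with g
ref = cv2.GaussianBlur(img, (5, 5), 1.3)  # the higher-level equivalent
print(cv2.absdiff(sep, ref).max())        # expected: 0 (identical up to rounding)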


Laplacian()

Calculates the Laplacian of an image.

C++ example:

An example using Laplace transformations for edge detection

#include "opencv2/videoio.hpp"
#include "opencv2/highgui.hpp"
#include "opencv2/imgproc.hpp"
#include <ctype.h>
#include <stdio.h>
#include <iostream>
using namespace cv;
using namespace std;
static void help()
{
    cout <<
            "\nThis program demonstrates Laplace point/edge detection using OpenCV function Laplacian()\n"
            "It captures from the camera of your choice: 0, 1, ... default 0\n"
            "Call:\n"
            "./laplace -c=<camera #, default 0> -p=<index of the frame to be decoded/captured next>\n" << endl;
}
enum {GAUSSIAN, BLUR, MEDIAN};
int sigma = 3;
int smoothType = GAUSSIAN;
int main( int argc, char** argv )
{
    VideoCapture cap;
    cv::CommandLineParser parser(argc, argv, "{ c | 0 | }{ p | | }");
    help();
    if( parser.get<string>("c").size() == 1 && isdigit(parser.get<string>("c")[0]) )
        cap.open(parser.get<int>("c"));
    else
        cap.open(parser.get<string>("c"));
    if( cap.isOpened() )
        cout << "Video " << parser.get<string>("c") <<
            ": width=" << cap.get(CAP_PROP_FRAME_WIDTH) <<
            ", height=" << cap.get(CAP_PROP_FRAME_HEIGHT) <<
            ", nframes=" << cap.get(CAP_PROP_FRAME_COUNT) << endl;
    if( parser.has("p") )
    {
        int pos = parser.get<int>("p");
        if (!parser.check())
        {
            parser.printErrors();
            return -1;
        }
        cout << "seeking to frame #" << pos << endl;
        cap.set(CAP_PROP_POS_FRAMES, pos);
    }
    if( !cap.isOpened() )
    {
        cout << "Could not initialize capturing...\n";
        return -1;
    }
    namedWindow( "Laplacian", 0 );
    createTrackbar( "Sigma", "Laplacian", &sigma, 15, 0 );
    Mat smoothed, laplace, result;
    for(;;)
    {
        Mat frame;
        cap >> frame;
        if( frame.empty() )
            break;
        int ksize = (sigma*5)|1;
        if(smoothType == GAUSSIAN)
            GaussianBlur(frame, smoothed, Size(ksize, ksize), sigma, sigma);
        else if(smoothType == BLUR)
            blur(frame, smoothed, Size(ksize, ksize));
        else
            medianBlur(frame, smoothed, ksize);
        Laplacian(smoothed, laplace, CV_16S, 5);
        convertScaleAbs(laplace, result, (sigma+1)*0.25);
        imshow("Laplacian", result);
        char c = (char)waitKey(30);
        if( c == ' ' )
            smoothType = smoothType == GAUSSIAN ? BLUR : smoothType == BLUR ? MEDIAN : GAUSSIAN;
        if( c == 'q' || c == 'Q' || c == 27 )
            break;
    }
    return 0;
}

medianBlur()

Blurs an image using the median filter.

The function smoothes an image using the median filter with the ksize × ksize aperture. Each channel of a multi-channel image is processed independently. In-place operation is supported.

C++ example:

An example using the Hough circle detector


#include "opencv2/imgcodecs.hpp"
#include "opencv2/highgui.hpp"
#include "opencv2/imgproc.hpp"
using namespace cv;
using namespace std;
int main(int argc, char** argv)
{
    const char* filename = argc >=2 ? argv[1] : "../data/smarties.png";
    // Loads an image
    Mat src = imread( filename, IMREAD_COLOR );
    // Check if image is loaded fine
    if(src.empty()){
        printf(" Error opening image\n");
        printf(" Program Arguments: [image_name -- default %s] \n", filename);
        return -1;
    }
    Mat gray;
    cvtColor(src, gray, COLOR_BGR2GRAY);
    medianBlur(gray, gray, 5);
    vector<Vec3f> circles;
    HoughCircles(gray, circles, HOUGH_GRADIENT, 1,
                 gray.rows/16,  // change this value to detect circles with different distances to each other
                 100, 30, 1, 30 // change the last two parameters
            // (min_radius & max_radius) to detect larger circles
    );
    for( size_t i = 0; i < circles.size(); i++ )
    {
        Vec3i c = circles[i];
        Point center = Point(c[0], c[1]);
        // circle center
        circle( src, center, 1, Scalar(0,100,100), 3, LINE_AA);
        // circle outline
        int radius = c[2];
        circle( src, center, radius, Scalar(255,0,255), 3, LINE_AA);
    }
    imshow("detected circles", src);
    waitKey();
    return 0;
}

Scharr()

Calculates the first x- or y- image derivative using the Scharr operator.

The function computes the first x- or y- spatial image derivative using the Scharr operator. The call

Scharr(src, dst, ddepth, dx, dy, scale, delta, borderType)

is equivalent to

Sobel(src, dst, ddepth, dx, dy, CV_SCHARR, scale, delta, borderType).
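
A quick Python check of this equivalence (an added sketch; in the Python bindings, ksize=-1 selects the Scharr kernel):

import cv2
import numpy as np

img = np.random.randint(0, 256, (64, 64), np.uint8)

a = cv2.Scharr(img, cv2.CV_32F, 1, 0)
b = cv2.Sobel(img, cv2.CV_32F, 1, 0, ksize=-1)  # ksize=-1 means CV_SCHARR
print(np.array_equal(a, b))  # expected: True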

C++ example:

This program demonstrates usage of the Canny edge detector

#include "opencv2/core/utility.hpp"
#include "opencv2/imgproc.hpp"
#include "opencv2/imgcodecs.hpp"
#include "opencv2/highgui.hpp"
#include <stdio.h>
using namespace cv;
using namespace std;
int edgeThresh = 1;
int edgeThreshScharr=1;
Mat image, gray, blurImage, edge1, edge2, cedge;
const char* window_name1 = "Edge map : Canny default (Sobel gradient)";
const char* window_name2 = "Edge map : Canny with custom gradient (Scharr)";
// define a trackbar callback
static void onTrackbar(int, void*)
{
    blur(gray, blurImage, Size(3,3));
    // Run the edge detector on grayscale
    Canny(blurImage, edge1, edgeThresh, edgeThresh*3, 3);
    cedge = Scalar::all(0);
    image.copyTo(cedge, edge1);
    imshow(window_name1, cedge);
    Mat dx,dy;
    Scharr(blurImage,dx,CV_16S,1,0);
    Scharr(blurImage,dy,CV_16S,0,1);
    Canny( dx,dy, edge2, edgeThreshScharr, edgeThreshScharr*3 );
    cedge = Scalar::all(0);
    image.copyTo(cedge, edge2);
    imshow(window_name2, cedge);
}
static void help()
{
    printf("\nThis sample demonstrates Canny edge detection\n"
           "Call:\n"
           "    /.edge [image_name -- Default is ../data/fruits.jpg]\n\n");
}
const char* keys =
{
    "{help h||}{@image |../data/fruits.jpg|input image name}"
};
int main( int argc, const char** argv )
{
    help();
    CommandLineParser parser(argc, argv, keys);
    string filename = parser.get<string>(0);
    image = imread(filename, IMREAD_COLOR);
    if(image.empty())
    {
        printf("Cannot read image file: %s\n", filename.c_str());
        help();
        return -1;
    }
    cedge.create(image.size(), image.type());
    cvtColor(image, gray, COLOR_BGR2GRAY);
    // Create a window
    namedWindow(window_name1, 1);
    namedWindow(window_name2, 1);
    // create a toolbar
    createTrackbar("Canny threshold default", window_name1, &edgeThresh, 100, onTrackbar);
    createTrackbar("Canny threshold Scharr", window_name2, &edgeThreshScharr, 400, onTrackbar);
    // Show the image
    onTrackbar(0, 0);
    // Wait for a key stroke; the same function arranges events processing
    waitKey(0);
    return 0;
}

 


Sobel()

Calculates the first, second, third, or mixed image derivatives using an extended Sobel operator.

In all cases except one, the ksize×ksize separable kernel is used to calculate the derivative. When ksize = 1, the 3×1 or 1×3 kernel is used (that is, no Gaussian smoothing is done). ksize = 1 can only be used for the first or the second x- or y- derivatives.

C++ example:

Sample code using Sobel and/or Scharr OpenCV functions to make a simple Edge Detector

#include "opencv2/imgproc.hpp"
#include "opencv2/imgcodecs.hpp"
#include "opencv2/highgui.hpp"
#include <iostream>
using namespace cv;
using namespace std;
int main( int argc, char** argv )
{
  cv::CommandLineParser parser(argc, argv,
                               "{@input   |../data/lena.jpg|input image}"
                               "{ksize   k|1|ksize (hit 'K' to increase its value)}"
                               "{scale   s|1|scale (hit 'S' to increase its value)}"
                               "{delta   d|0|delta (hit 'D' to increase its value)}"
                               "{help    h|false|show help message}");
  cout << "The sample uses Sobel or Scharr OpenCV functions for edge detection\n\n";
  parser.printMessage();
  cout << "\nPress 'ESC' to exit program.\nPress 'R' to reset values ( ksize will be -1 equal to Scharr function )";
  // First we declare the variables we are going to use
  Mat image,src, src_gray;
  Mat grad;
  const String window_name = "Sobel Demo - Simple Edge Detector";
  int ksize = parser.get<int>("ksize");
  int scale = parser.get<int>("scale");
  int delta = parser.get<int>("delta");
  int ddepth = CV_16S;
  String imageName = parser.get<String>("@input");
  // As usual we load our source image (src)
  image = imread( imageName, IMREAD_COLOR ); // Load an image
  // Check if image is loaded fine
  if( image.empty() )
  {
    printf("Error opening image: %s\n", imageName.c_str());
    return 1;
  }
  for (;;)
  {
    // Remove noise by blurring with a Gaussian filter ( kernel size = 3 )
    GaussianBlur(image, src, Size(3, 3), 0, 0, BORDER_DEFAULT);
    // Convert the image to grayscale
    cvtColor(src, src_gray, COLOR_BGR2GRAY);
    Mat grad_x, grad_y;
    Mat abs_grad_x, abs_grad_y;
    Sobel(src_gray, grad_x, ddepth, 1, 0, ksize, scale, delta, BORDER_DEFAULT);
    Sobel(src_gray, grad_y, ddepth, 0, 1, ksize, scale, delta, BORDER_DEFAULT);
    // converting back to CV_8U
    convertScaleAbs(grad_x, abs_grad_x);
    convertScaleAbs(grad_y, abs_grad_y);
    addWeighted(abs_grad_x, 0.5, abs_grad_y, 0.5, 0, grad);
    imshow(window_name, grad);
    char key = (char)waitKey(0);
    if(key == 27)
    {
      return 0;
    }
    if (key == 'k' || key == 'K')
    {
      ksize = ksize < 30 ? ksize+2 : -1;
    }
    if (key == 's' || key == 'S')
    {
      scale++;
    }
    if (key == 'd' || key == 'D')
    {
      delta++;
    }
    if (key == 'r' || key == 'R')
    {
      scale =  1;
      ksize = -1;
      delta =  0;
    }
  }
  return 0;
}


Function list

void  cv::bilateralFilter (InputArray src, OutputArray dst, int d, double sigmaColor, double sigmaSpace, int borderType=BORDER_DEFAULT)
  Applies the bilateral filter to an image. 
 
void  cv::blur (InputArray src, OutputArray dst, Size ksize, Point anchor=Point(-1,-1), int borderType=BORDER_DEFAULT)
  Blurs an image using the normalized box filter. 
 
void  cv::boxFilter (InputArray src, OutputArray dst, int ddepth, Size ksize, Point anchor=Point(-1,-1), bool normalize=true, int borderType=BORDER_DEFAULT)
  Blurs an image using the box filter. 
 
void  cv::buildPyramid (InputArray src, OutputArrayOfArrays dst, int maxlevel, int borderType=BORDER_DEFAULT)
  Constructs the Gaussian pyramid for an image. 
 
void  cv::dilate (InputArray src, OutputArray dst, InputArray kernel, Point anchor=Point(-1,-1), int iterations=1, int borderType=BORDER_CONSTANT, const Scalar &borderValue=morphologyDefaultBorderValue())
  Dilates an image by using a specific structuring element. 
 
void  cv::erode (InputArray src, OutputArray dst, InputArray kernel, Point anchor=Point(-1,-1), int iterations=1, int borderType=BORDER_CONSTANT, const Scalar &borderValue=morphologyDefaultBorderValue())
  Erodes an image by using a specific structuring element. 
 
void  cv::filter2D (InputArray src, OutputArray dst, int ddepth, InputArray kernel, Point anchor=Point(-1,-1), double delta=0, int borderType=BORDER_DEFAULT)
  Convolves an image with the kernel. 
 
void  cv::GaussianBlur (InputArray src, OutputArray dst, Size ksize, double sigmaX, double sigmaY=0, int borderType=BORDER_DEFAULT)
  Blurs an image using a Gaussian filter. 
 
void  cv::getDerivKernels (OutputArray kx, OutputArray ky, int dx, int dy, int ksize, bool normalize=false, int ktype=CV_32F)
  Returns filter coefficients for computing spatial image derivatives. 
 
Mat  cv::getGaborKernel (Size ksize, double sigma, double theta, double lambd, double gamma, double psi=CV_PI *0.5, int ktype=CV_64F)
  Returns Gabor filter coefficients.
 
Mat  cv::getGaussianKernel (int ksize, double sigma, int ktype=CV_64F)
  Returns Gaussian filter coefficients. 
 
Mat  cv::getStructuringElement (int shape, Size ksize, Point anchor=Point(-1,-1))
  Returns a structuring element of the specified size and shape for morphological operations. 
 
void  cv::Laplacian (InputArray src, OutputArray dst, int ddepth, int ksize=1, double scale=1, double delta=0, int borderType=BORDER_DEFAULT)
  Calculates the Laplacian of an image. 
 
void  cv::medianBlur (InputArray src, OutputArray dst, int ksize)
  Blurs an image using the median filter. 
 
static Scalar  cv::morphologyDefaultBorderValue ()
  returns "magic" border value for erosion and dilation. It is automatically transformed to Scalar::all(-DBL_MAX) for dilation. 
 
void  cv::morphologyEx (InputArray src, OutputArray dst, int op, InputArray kernel, Point anchor=Point(-1,-1), int iterations=1, int borderType=BORDER_CONSTANT, const Scalar &borderValue=morphologyDefaultBorderValue())
  Performs advanced morphological transformations. 
 
void  cv::pyrDown (InputArray src, OutputArray dst, const Size &dstsize=Size(), int borderType=BORDER_DEFAULT)
  Blurs an image and downsamples it. 
 
void  cv::pyrMeanShiftFiltering (InputArray src, OutputArray dst, double sp, double sr, int maxLevel=1, TermCriteria termcrit=TermCriteria(TermCriteria::MAX_ITER+TermCriteria::EPS, 5, 1))
  Performs initial step of meanshift segmentation of an image. 
 
void  cv::pyrUp (InputArray src, OutputArray dst, const Size &dstsize=Size(), int borderType=BORDER_DEFAULT)
  Upsamples an image and then blurs it. 
 
void  cv::Scharr (InputArray src, OutputArray dst, int ddepth, int dx, int dy, double scale=1, double delta=0, int borderType=BORDER_DEFAULT)
  Calculates the first x- or y- image derivative using Scharr operator. 
 
void  cv::sepFilter2D (InputArray src, OutputArray dst, int ddepth, InputArray kernelX, InputArray kernelY, Point anchor=Point(-1,-1), double delta=0, int borderType=BORDER_DEFAULT)
  Applies a separable linear filter to an image. 
 
void  cv::Sobel (InputArray src, OutputArray dst, int ddepth, int dx, int dy, int ksize=3, double scale=1, double delta=0, int borderType=BORDER_DEFAULT)
  Calculates the first, second, third, or mixed image derivatives using an extended Sobel operator. 
 
void  cv::spatialGradient (InputArray src, OutputArray dx, OutputArray dy, int ksize=3, int borderType=BORDER_DEFAULT)
  Calculates the first order image derivative in both x and y using a Sobel operator. 
 
void  cv::sqrBoxFilter (InputArray _src, OutputArray _dst, int ddepth, Size ksize, Point anchor=Point(-1, -1), bool normalize=true, int borderType=BORDER_DEFAULT)
  Calculates the normalized sum of squares of the pixel values overlapping the filter. 

Original link:

https://docs.opencv.org/3.4.3/d4/d86/group__imgproc__filter.html#gae84c92d248183bd92fa713ce51cc3599


2018-07-27 23:56:07 fanzonghao

Generally speaking, the energy of an image is concentrated in its low-frequency part, while noise lives mainly in the high-frequency band; unfortunately the fine detail of the image is also concentrated in the high frequencies, so the key question is how to remove high-frequency interference while preserving detail. To remove noise the image must be smoothed, for example by low-pass filtering away the high-frequency interference. Image smoothing methods divide into spatial-domain and frequency-domain approaches. In the spatial domain, the usual methods are mean filtering and median filtering. Mean filtering slides a window with an odd number of points over the image and replaces the gray value of the pixel at the window center with the average of the gray values inside the window; if the window assigns each pixel its own weight (coefficient) in the averaging, this becomes weighted mean filtering. Median filtering instead replaces the pixel's gray value with the median of the values inside the window.

1. Mean (smoothing) filtering: odd kernel size, coefficients summing to 1. Its drawback: rather than removing the noise, it blurs the image. Code:

"""
平滑滤波
"""
def average_filter():
    img=cv2.imread('./data/opencv_logo.png')
    kernel=np.ones(shape=(5,5),dtype=np.float32)/25
    dst=cv2.filter2D(src=img,ddepth=-1,kernel=kernel)
    plt.subplot(121)
    plt.imshow(img)
    plt.title('original')
    plt.axis('off')
    plt.subplot(122)
    plt.imshow(dst)
    plt.title('Average')
    plt.axis('off')
    plt.show()

Result: (figure omitted)

2. Gaussian smoothing: mimics the way the human eye concentrates on the central region; effective at removing Gaussian noise.

"""
Gaussian filtering
"""
def image_gauss():
    img = cv2.imread('./data/img.png')

    gauss_img = cv2.GaussianBlur(img, (7, 7),0)
    plt.subplot(121)
    plt.imshow(img)
    plt.title('original')
    plt.axis('off')
    plt.subplot(122)
    plt.imshow(gauss_img)
    plt.title('gauss_img')
    plt.axis('off')
    plt.show()

Result: (figure omitted)

3. Median filtering: sort the pixel values inside the kernel window in ascending order and take the middle value as the output; effective at removing salt-and-pepper noise.

"""
Median filtering
"""
def image_median():
    img = cv2.imread('./data/img1.png')

    median_img = cv2.medianBlur(img,5)
    plt.subplot(121)
    plt.imshow(img)
    plt.title('original')
    plt.axis('off')
    plt.subplot(122)
    plt.imshow(median_img)
    plt.title('medians_img')
    plt.axis('off')
    plt.show()

Result: (figure omitted)

4. The Sobel operator

def Sobel(src, ddepth, dx, dy, dst=None, ksize=None, scale=None, delta=None, borderType=None)

The Sobel operator is still a filter, but one with a direction.

The first four parameters are required:

  • The first parameter is the image to be processed;
  • The second parameter is the image depth; -1 means the same depth as the source image. The destination depth must be greater than or equal to the source depth;
  • dx and dy are the derivative orders; 0 means no derivative is taken in that direction, and typical values are 0, 1, and 2.

import cv2

img = cv2.imread('img.jpg')
print(img.shape)
gray = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)

#[[-1,0,1],
# [-2,0,2],
# [-1,0,1]]
solber_x=cv2.Sobel(gray,cv2.CV_64F,1,0,ksize=3)
solber_x=cv2.convertScaleAbs(solber_x)
cv2.imshow('solber_x',solber_x)
cv2.waitKey(0)

#[[-1,-2,-1],
# [0,0,0],
# [1,2,1]]
solber_y=cv2.Sobel(gray,cv2.CV_64F,0,1,ksize=3)
solber_y=cv2.convertScaleAbs(solber_y)
cv2.imshow('solber_y',solber_y)
cv2.waitKey(0)
solber_xy=cv2.addWeighted(solber_x,1,solber_y,1,0)
cv2.imshow('solber_xy',solber_xy)
cv2.waitKey(0)

5. The Fourier transform is used to analyze the frequency response of filters. Edges and noise in an image can be viewed as high-frequency components, because they change sharply; regions without much variation are low-frequency components.

https://docs.opencv.org/master/de/dbc/tutorial_py_fourier_transform.html

"""
Fourier transform
"""
def FFT():
    img = cv2.imread('./data/img3.png', 0)
    f = np.fft.fft2(img)
    fshift = np.fft.fftshift(f)
    magnitude_spectrum = 20 * np.log(np.abs(fshift))
    plt.subplot(121), plt.imshow(img, cmap='gray')
    plt.title('Input Image'), plt.xticks([]), plt.yticks([])
    plt.subplot(122), plt.imshow(magnitude_spectrum, cmap='gray')
    plt.title('Magnitude Spectrum'), plt.xticks([]), plt.yticks([])
    plt.show()

The center is brighter, which indicates that low-frequency components dominate.

Now remove the low-frequency components with a 60×60 window:

def FFT():
    img = cv2.imread('./data/img3.png', 0)
    f = np.fft.fft2(img)
    fshift = np.fft.fftshift(f)
    # magnitude_spectrum = 20 * np.log(np.abs(fshift))
    # plt.subplot(121), plt.imshow(img, cmap='gray')
    # plt.title('Input Image'), plt.xticks([]), plt.yticks([])
    # plt.subplot(122), plt.imshow(magnitude_spectrum, cmap='gray')
    # plt.title('Magnitude Spectrum'), plt.xticks([]), plt.yticks([])
    # plt.show()

    rows, cols = img.shape
    crow, ccol = int(rows / 2), int(cols / 2)
    fshift[crow - 30:crow + 30, ccol - 30:ccol + 30] = 0
    f_ishift = np.fft.ifftshift(fshift)
    img_back = np.fft.ifft2(f_ishift)
    img_back = np.abs(img_back)
    plt.subplot(131), plt.imshow(img, cmap='gray')
    plt.title('Input Image'), plt.xticks([]), plt.yticks([])
    plt.subplot(132), plt.imshow(img_back, cmap='gray')
    plt.title('Image after HPF'), plt.xticks([]), plt.yticks([])
    plt.subplot(133), plt.imshow(img_back)
    plt.title('Result in JET'), plt.xticks([]), plt.yticks([])
    plt.show()

Only the person's edge information remains, confirming that the bright central part was the low-frequency content.

6. Why the Laplacian is a high-pass filter

def laplace_high_pass():
    # simple averaging filter without scaling parameter
    mean_filter = np.ones((3,3))
    # creating a gaussian filter
    x = cv2.getGaussianKernel(5,10)
    gaussian = x*x.T
    # different edge detecting filters
    # scharr in x-direction
    scharr = np.array([[-3, 0, 3],
                       [-10,0,10],
                       [-3, 0, 3]])
    # sobel in x direction
    sobel_x= np.array([[-1, 0, 1],
                       [-2, 0, 2],
                       [-1, 0, 1]])
    # sobel in y direction
    sobel_y= np.array([[-1,-2,-1],
                       [0, 0, 0],
                       [1, 2, 1]])
    # laplacian
    laplacian=np.array([[0, 1, 0],
                        [1,-4, 1],
                        [0, 1, 0]])
    filters = [mean_filter, gaussian, laplacian, sobel_x, sobel_y, scharr]
    filter_name = ['mean_filter', 'gaussian','laplacian', 'sobel_x', \
                    'sobel_y', 'scharr_x']
    fft_filters = [np.fft.fft2(x) for x in filters]
    fft_shift = [np.fft.fftshift(y) for y in fft_filters]
    mag_spectrum = [np.log(np.abs(z)+1) for z in fft_shift]
    for i in range(6):
        plt.subplot(2,3,i+1),plt.imshow(mag_spectrum[i],cmap = 'gray')
        plt.title(filter_name[i]), plt.xticks([]), plt.yticks([])
    plt.show()

Result: (figure omitted)

Filters with a white center are low-pass; filters with a black center are high-pass.

7. Image sharpening

Edge information is very important both for image analysis and for human vision; object edges appear as local discontinuities in the image. The filters introduced above are useful for removing noise, but they tend to blur boundaries and contours. To counteract this we use image sharpening, whose goal is to make edges, contours, and fine detail clearer. Smoothing blurs the image fundamentally because the pixels are averaged or integrated, so applying the inverse operation (such as differentiation) makes the image clearer again. Viewed in the frequency domain, blurring happens because the high-frequency components have been attenuated, so a high-pass filter restores clarity.
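
As a concrete example of adding back high frequencies (an added sketch; the file path is hypothetical), unsharp masking boosts the residual that smoothing removed:

import cv2

img = cv2.imread('./data/img.png')  # hypothetical input file
blurred = cv2.GaussianBlur(img, (5, 5), 0)

# Unsharp masking: out = img + 0.5 * (img - blurred), i.e. amplify the
# high-frequency residual that the Gaussian blur removed.
sharpened = cv2.addWeighted(img, 1.5, blurred, -0.5, 0)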

8. Example: extracting a barcode

1. Use gradient operations to detect the barcode in the image;

2. Apply mean filtering to the gradient image to smooth out high-frequency noise;

3. Binarize;

4. Use cv2.getStructuringElement to build a rectangular kernel for a closing operation; the kernel is wider than it is tall, which lets us close the gaps between the barcode's vertical bars;

5. Erode and dilate to remove most of the isolated blobs;

6. Find the largest contour and extract it.

import cv2
import matplotlib.pyplot as plt
import numpy as np
import imutils
path='./barcode.png'
image = cv2.imread(path)
image_h, image_w,_=image.shape
print('======opencv read data type========')
print(image.dtype)
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# # Compute the Scharr gradient magnitude in the x and y directions
ddepth = cv2.cv.CV_32F if imutils.is_cv2() else cv2.CV_32F
gradX = cv2.Sobel(gray, ddepth=ddepth , dx=1, dy=0, ksize=-1)
print('gradX.dtype:',gradX.dtype)

# # #debug
# gradX = cv2.convertScaleAbs(gradX)
# print(gradX.dtype)
# cv2.imshow('gradX',gradX)
# cv2.waitKey(0)

gradY = cv2.Sobel(gray, ddepth=ddepth , dx=0, dy=1, ksize=-1)
# # #debug
# gradY = cv2.convertScaleAbs(gradY)
# print(gradY.dtype)
# cv2.imshow('gradY',gradY)
# cv2.waitKey(0)

# Subtract the y gradient from the x gradient
gradient = cv2.subtract(gradX,gradY)
# cv2.imshow('gradient1',gradient)
# cv2.waitKey(0)

# Convert back to uint8
gradient = cv2.convertScaleAbs(gradient)
# print(gradient.shape)
# print(gradient.dtype)
# cv2.imshow('gradient2',gradient)
# cv2.waitKey(0)

# blur and threshold the image
blurred = cv2.blur(gradient, (9, 9))
thresh= cv2.threshold(blurred, 225, 255, cv2.THRESH_BINARY)[1]

# cv2.imshow('thresh:',thresh)
# cv2.waitKey(0)
# construct a closing kernel and apply it to the thresholded image
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (21, 7))
closed = cv2.morphologyEx(thresh, cv2.MORPH_CLOSE, kernel)
# cv2.imshow('closed:',closed)
# cv2.waitKey(0)
# perform a series of erosions and dilations
closed = cv2.erode(closed, None, iterations = 4)
closed = cv2.dilate(closed, None, iterations = 4)
# cv2.imshow('close:',closed)
# cv2.waitKey(0)

# find the contours in the thresholded image, then sort the contours
# by their area, keeping only the largest one
cnts = cv2.findContours(closed.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = imutils.grab_contours(cnts)
# cnts = cnts[0]
c = sorted(cnts, key=cv2.contourArea, reverse=True)
c = np.squeeze(c[0])
# plt.plot(c[:, 0], c[:, 1])
# plt.show()
mask = np.zeros((image_h, image_w, 3))
dummy_mask = cv2.drawContours(mask, [c], 0, (255, 255, 255), thickness=cv2.FILLED)
cv2.imshow('dummy_mask',dummy_mask)
cv2.waitKey(0)

image_bar=(image*(np.array(dummy_mask/255).astype(np.uint8)))
cv2.imshow('image_bar',image_bar)
cv2.waitKey(0)

      

The following extracts the contour's minimum-area bounding rectangle and draws it:

rect = cv2.minAreaRect(c)  # get the center (x, y) and (w, h)
box = cv2.boxPoints(rect)  # cv2.boxPoints(rect) for OpenCV 3.x: get the 4 corner points of the minimum-area rectangle
box = np.int0(box)
print(box)
cv2.drawContours(image, [box], 0, (0, 255, 0), 3)

cv2.imshow('image',image)
cv2.waitKey(0)

9. Skew correction

#from imutils.perspective import four_point_transform
#import imutils
import cv2
import numpy as np
from matplotlib import pyplot as plt
import math


def Get_Outline(input_dir):
    image = cv2.imread(input_dir)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    edged = cv2.Canny(blurred, 75, 200)
    return image, gray, edged


def Get_cnt(edged):
    cnts = cv2.findContours(edged.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    cnts = cnts[0]  # if imutils.is_cv2() else cnts[1]

    docCnt = None

    if len(cnts) > 0:
        cnts = sorted(cnts, key=cv2.contourArea, reverse=True)
        for c in cnts:
            peri = cv2.arcLength(c, True)  # contour perimeter (contours are sorted by area, descending)
            approx = cv2.approxPolyDP(c, 0.02 * peri, True)  # approximate the contour
            if len(approx) == 4:  # the approximated contour has four vertices
                docCnt = approx
                break

    return docCnt


def calculate_distance(point1, point2):
    d_x = point1[0] - point2[0]
    d_y = point1[1] - point2[1]
    distance = math.sqrt(d_x ** 2 + d_y ** 2)
    return distance


if __name__ == "__main__":
    input_dir = "gongjiaoka.png"
    image, gray, edged = Get_Outline(input_dir)
    docCnt = Get_cnt(edged)
    # print(docCnt)

    print(docCnt.reshape(4, 2))
    # result_img = four_point_transform(image, docCnt.reshape(4,2)) # four-point perspective transform of the original image
    # Change the transform target: the bus card's aspect ratio is 16:9
    pts1 = np.float32(docCnt.reshape(4, 2))
    # Add a check: use different target sizes depending on the width/height ordering
    p = docCnt.reshape(4, 2)
    # plt.plot(p[:,0],p[:,1])
    # plt.show()

    # Determine the long and short sides
    if calculate_distance(p[0], p[1]) < calculate_distance(p[0], p[3]):
        pts2 = np.float32([[0, 0], [0, 180], [320, 180], [320, 0]])
        M = cv2.getPerspectiveTransform(pts1, pts2)
        # compute the perspective transform matrix
        edged_rotate = cv2.warpPerspective(edged, M, (320, 180))
        image_rotate = cv2.warpPerspective(image, M, (320, 180))


    else:
        pts2 = np.float32([[0, 0], [0, 320], [180, 320], [180, 0]])
        # compute the perspective transform matrix
        M = cv2.getPerspectiveTransform(pts1, pts2)
        edged_rotate = cv2.warpPerspective(edged, M, (180, 320))
        image_rotate = cv2.warpPerspective(image, M, (180, 320))

    cv2.imwrite('image_rotate.png',image_rotate)
    # print(result_img.shape)
    # ------- draw the corner points ----------
    for point in docCnt.reshape(4, 2):
        cv2.circle(image, tuple(point), 3, (0, 0, 255), 2)
    # # --------------
    cv2.imshow("original", image)
    # cv2.imshow("gray", gray)
    cv2.imshow("edged", edged)
    cv2.imshow("edged_rotate", edged_rotate)
    cv2.imshow("result_img", image_rotate)
    cv2.waitKey(0)
    cv2.destroyAllWindows()

10. Measuring object size

from scipy.spatial import distance as dist
from imutils import perspective
from imutils import contours
import numpy as np
import argparse
import imutils
import cv2

def midpoint(ptA, ptB):
	return ((ptA[0] + ptB[0]) * 0.5, (ptA[1] + ptB[1]) * 0.5)

path='./img/example_02.png'
# the reference coin is 0.955 inches wide
WIDTH=0.955
# load the image, convert it to grayscale, and blur it slightly
image = cv2.imread(path)
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
gray = cv2.GaussianBlur(gray, (7, 7), 0)

# cv2.imwrite('gray.jpg',gray)

edged = cv2.Canny(gray, 50, 100)
edged = cv2.dilate(edged, None, iterations=1)
edged = cv2.erode(edged, None, iterations=1)

# find contours in the edge map
cnts = cv2.findContours(edged.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if imutils.is_cv2() else cnts[1]
cnts = sorted(cnts, key=cv2.contourArea, reverse=True)
# print(len(cnts))
# print(cnts[0].shape)
pixelsPerMetric = None
orig = image.copy()

for c in cnts:
	if cv2.contourArea(c) < 100:
		continue

	box = cv2.minAreaRect(c)
	box = cv2.cv.BoxPoints(box) if imutils.is_cv2() else cv2.boxPoints(box)
	box = np.array(box, dtype="int")
	print('box:',box)
	box = perspective.order_points(box)
	cv2.drawContours(orig, [box.astype("int")], -1, (0, 255, 0), 2)

	for (x, y) in box:
		cv2.circle(orig, (int(x), int(y)), 5, (0, 0, 255), -1)

	(tl, tr, br, bl) = box
	(tltrX, tltrY) = midpoint(tl, tr)
	(blbrX, blbrY) = midpoint(bl, br)

	(tlblX, tlblY) = midpoint(tl, bl)
	(trbrX, trbrY) = midpoint(tr, br)

	# draw the midpoints on the image
	cv2.circle(orig, (int(tltrX), int(tltrY)), 5, (255, 0, 0), -1)
	cv2.circle(orig, (int(blbrX), int(blbrY)), 5, (255, 0, 0), -1)
	cv2.circle(orig, (int(tlblX), int(tlblY)), 5, (255, 0, 0), -1)
	cv2.circle(orig, (int(trbrX), int(trbrY)), 5, (255, 0, 0), -1)

	# draw lines between the midpoints
	cv2.line(orig, (int(tltrX), int(tltrY)), (int(blbrX), int(blbrY)),
		(255, 0, 255), 2)
	cv2.line(orig, (int(tlblX), int(tlblY)), (int(trbrX), int(trbrY)),
		(255, 0, 255), 2)

	# compute the Euclidean distance between the midpoints
	dA = dist.euclidean((tltrX, tltrY), (blbrX, blbrY))
	dB = dist.euclidean((tlblX, tlblY), (trbrX, trbrY))

	# if the pixels per metric has not been initialized, then
	# compute it as the ratio of pixels to supplied metric
	# (in this case, inches)
	if pixelsPerMetric is None:
		pixelsPerMetric = dB / WIDTH

	# compute the size of the object
	dimA = dA / pixelsPerMetric
	dimB = dB / pixelsPerMetric

	# draw the object sizes on the image
	cv2.putText(orig, "{:.1f}in".format(dimA),
		(int(tltrX - 15), int(tltrY - 10)), cv2.FONT_HERSHEY_SIMPLEX,
		0.65, (255, 255, 255), 2)
	cv2.putText(orig, "{:.1f}in".format(dimB),
		(int(trbrX + 10), int(trbrY)), cv2.FONT_HERSHEY_SIMPLEX,
		0.65, (255, 255, 255), 2)
cv2.imwrite('orig.jpg', orig)

2017-10-28 18:45:23 qq_30356613

Filters are an important part of any image-processing course and divide roughly into spatial-domain and frequency-domain filters. This post covers four commonly used filters, the median filter, the mean filter, the Gaussian filter, and the bilateral filter, and implements them with OpenCV. Spatial filters are generally applied by convolving the original image with a template; please look up the relevant material on convolution if needed.

Theory:

A linear filter is expressed as $g(i,j)=\sum_{k,l} f(i+k,\,j+l)\,h(k,l)$, where $h$ is the kernel. The mean filter and the Gaussian filter are both linear filters, so we look at these two first.

Mean filter:

Kernel: for example, the 3×3 mean kernel $K=\frac{1}{9}\begin{bmatrix} 1 & 1 & 1 \\ 1 & 1 & 1 \\ 1 & 1 & 1 \end{bmatrix}$

Starting from the first element of the image, the template is convolved with the original image; intuitively, mean filtering replaces each pixel's gray value with the average of the gray values of its neighbors.

Gaussian filter:

Kernel: generated by the Gaussian kernel function

Gaussian kernel function: $h(x,y)=\frac{1}{2\pi\sigma^2}\,e^{-\frac{x^2+y^2}{2\sigma^2}}$

For example, a 3×3 Gaussian kernel: $K=\frac{1}{16}\begin{bmatrix} 1 & 2 & 1 \\ 2 & 4 & 2 \\ 1 & 2 & 1 \end{bmatrix}$


Median filter: also a spatial-domain filter. The main idea is to take the neighboring pixels, sort them, and use the middle gray value as the new gray value of the pixel.

Bilateral filter:

Each output pixel is a weighted average of its neighbors, where the weight combines a spatial Gaussian and a gray-level similarity Gaussian, as implemented in the code below: $w(i,j,k,l)=\exp\!\Big(-\frac{(i-k)^2+(j-l)^2}{2\sigma_d^2}-\frac{\big(I(i,j)-I(k,l)\big)^2}{2\sigma_r^2}\Big)$

C++ implementation:

static void exchange(int& a, int& b)
{	
	int t = 0;
	t = a;
	a = b;
	b = t;
}

static void bubble_sort(int* K, int lenth)
{
	for (int i = 0; i < lenth; i++)
		for (int j = i + 1; j < lenth; j++)
		{
			if (K[i]>K[j])
				exchange(K[i], K[j]);
		}
}
///Generate a 2D Gaussian kernel
static cv::Mat generate_gassian_kernel(double u, double sigma, cv::Size size)
{
	int width = size.width;
	int height = size.height;
	cv::Mat gassian_kernel(cv::Size(width, height), CV_64FC1);
	double sum = 0;
	double sum_sum = 0;
	for (int i = 0; i < width; i++)
		for (int j = 0; j < height; j++)
		{
			sum = 1.0 / 2.0 / CV_PI / sigma / sigma * exp(-1.0 * ((i - width / 2)*(i - width / 2) + (j - width / 2)*(j - width / 2)) / 2.0 / sigma / sigma);
			sum_sum += sum;
			gassian_kernel.ptr<double>(i)[j] = sum;
		}
	for (int i = 0; i < width; i++)
		for (int j = 0; j < height; j++)
		{
			gassian_kernel.ptr<double>(i)[j] /= sum_sum;
		}
	return gassian_kernel;
}
///Mean filter
void lmt_main_blur(cv::Mat& img_in, cv::Mat& img_out, int kernel_size)
{
	img_out = img_in.clone();
	cv::Mat mat1;
	cv::copyMakeBorder(img_in, mat1, kernel_size, kernel_size, kernel_size, kernel_size, cv::BORDER_REPLICATE);

	int cols = mat1.cols;
	int rows = mat1.rows;
	int channels = img_out.channels();
	const uchar* const pt = mat1.ptr<uchar>(0);
	uchar* pt_out = img_out.ptr<uchar>(0);

	for (int i = kernel_size; i < rows - kernel_size; i++)
	{
		for (int j = kernel_size; j < cols - kernel_size; j++)
		{
			if (channels == 1)
			{
				long long int sum_pixel = 0;
				for (int m = -1 * kernel_size; m < kernel_size; m++)
					for (int n = -1 * kernel_size; n < kernel_size; n++)
					{
						sum_pixel += pt[(i + m)*cols + (j + n)];
					}
				img_out.ptr<uchar>(i - kernel_size)[j - kernel_size] = (double)sum_pixel / (kernel_size*kernel_size * 4);
			}
			else if (channels == 3)
			{
				long long int sum_pixel = 0;
				long long int sum_pixel1 = 0;
				long long int sum_pixel2 = 0;
				for (int m = -1 * kernel_size; m < kernel_size; m++)
					for (int n = -1 * kernel_size; n < kernel_size; n++)
					{
						sum_pixel += pt[((i + m)*cols + (j + n))*channels + 0];
						sum_pixel1 += pt[((i + m)*cols + (j + n))*channels + 1];
						sum_pixel2 += pt[((i + m)*cols + (j + n))*channels + 2];
					}
				img_out.ptr<uchar>(i - kernel_size)[(j - kernel_size)*channels + 0] = (double)sum_pixel / (double)(kernel_size*kernel_size * 4);
				img_out.ptr<uchar>(i - kernel_size)[(j - kernel_size)*channels + 1] = (double)sum_pixel1 / (double)(kernel_size*kernel_size * 4);
				img_out.ptr<uchar>(i - kernel_size)[(j - kernel_size)*channels + 2] = (double)sum_pixel2 / (double)(kernel_size*kernel_size * 4);
			}
		}
	}

}
///Median filter
void lmt_median_blur(cv::Mat& img_in, cv::Mat& img_out, int kernel_size)
{
	img_out = img_in.clone();
	cv::Mat mat1;
	cv::copyMakeBorder(img_in, mat1, kernel_size, kernel_size, kernel_size, kernel_size, cv::BORDER_REPLICATE);

	int cols = mat1.cols;
	int rows = mat1.rows;
	int channels = img_out.channels();

	cv::Mat mat[3];
	cv::Mat mat_out[3];
	cv::split(mat1, mat);
	cv::split(img_out, mat_out);
	for (int k = 0; k < 3; k++)
	{
		const uchar* const pt = mat[k].ptr<uchar>(0);
		uchar* pt_out = mat_out[k].ptr<uchar>(0);
		for (int i = kernel_size; i < rows - kernel_size; i++)
		{
			for (int j = kernel_size; j < cols - kernel_size; j++)
			{
				long long int sum_pixel = 0;
				int* K = new int[kernel_size*kernel_size * 4];
				int ker_num = 0;
				for (int m = -1 * kernel_size; m < kernel_size; m++)
					for (int n = -1 * kernel_size; n < kernel_size; n++)
					{
						K[ker_num] = pt[(i + m)*cols + (j + n)];
						ker_num++;
					}
				bubble_sort(K, ker_num);
				mat_out[k].ptr<uchar>(i - kernel_size)[j - kernel_size] = K[ker_num / 2];
				delete[] K;  // release the per-window buffer
			}
		}
	}
	cv::merge(mat_out, 3, img_out);
}
///Gaussian filter
void lmt_gaussian_blur(cv::Mat& img_src, cv::Mat& img_dst, cv::Size kernel_size)
{
	img_dst = cv::Mat(cv::Size(img_src.cols, img_src.rows), img_src.type());
	int cols = img_src.cols;
	int rows = img_src.rows;
	int channels = img_src.channels();
	cv::Mat gassian_kernel = generate_gassian_kernel(0, 1, kernel_size);
	int width = kernel_size.width / 2;
	int height = kernel_size.height / 2;
	for (int i = height; i < rows - height; i++)
	{
		for (int j = width; j < cols - width; j++)
		{
			for (int k = 0; k < channels; k++)
			{
				double sum = 0.0;
				for (int m = -height; m <= height; m++)
				{
					for (int n = -width; n <= width; n++)
					{
						sum += (double)(img_src.ptr<uchar>(i + m)[(j + n)*channels + k]) * gassian_kernel.ptr<double>(height + m)[width + n];
					}
				}
				if (sum > 255.0)
					sum = 255;
				if (sum < 0.0)
					sum = 0;
				img_dst.ptr<uchar>(i)[j*channels + k] = (uchar)sum;
			}
		}
	}

	
}
///Bilateral filter
void lmt_bilateral_filter(cv::Mat& img_in, cv::Mat& img_out, const int r, double sigma_d, double sigma_r)
{
	int i, j, m, n, k;
	int nx = img_in.cols, ny = img_in.rows, m_nChannels = img_in.channels();
	const int w_filter = 2 * r + 1; // filter side length

	double gaussian_d_coeff = -0.5 / (sigma_d * sigma_d);
	double gaussian_r_coeff = -0.5 / (sigma_r * sigma_r);
	double  **d_metrix = new double *[w_filter];
	for (int i = 0; i < w_filter; ++i)
		d_metrix[i] = new double[w_filter];
	
	double r_metrix[256];  // similarity weight  
	img_out = cv::Mat(img_in.size(),img_in.type());
	uchar* m_imgData = img_in.ptr<uchar>(0);
	uchar* m_img_outData = img_out.ptr<uchar>(0);
	// copy the original image  
	double* img_tmp = new double[m_nChannels * nx * ny];
	for (i = 0; i < ny; i++)
		for (j = 0; j < nx; j++)
			for (k = 0; k < m_nChannels; k++)
			{
				img_tmp[i * m_nChannels * nx + m_nChannels * j + k] = m_imgData[i * m_nChannels * nx + m_nChannels * j + k];
			}

	// compute spatial weight  
	for (i = -r; i <= r; i++)
		for (j = -r; j <= r; j++)
		{
			int x = j + r;
			int y = i + r;

			d_metrix[y][x] = exp((i * i + j * j) * gaussian_d_coeff);
		}

	// compute similarity weight  
	for (i = 0; i < 256; i++)
	{
		r_metrix[i] = exp(i * i * gaussian_r_coeff);
	}

	// bilateral filter  
	for (i = 0; i < ny; i++)
		for (j = 0; j < nx; j++)
		{
			for (k = 0; k < m_nChannels; k++)
			{
				double weight_sum, pixcel_sum;
				weight_sum = pixcel_sum = 0.0;

				for (m = -r; m <= r; m++)
					for (n = -r; n <= r; n++)
					{
						if (m*m + n*n > r*r) continue;

						int x_tmp = j + n;
						int y_tmp = i + m;

						x_tmp = x_tmp < 0 ? 0 : x_tmp;
						x_tmp = x_tmp > nx - 1 ? nx - 1 : x_tmp;   // border handling: replicate
						y_tmp = y_tmp < 0 ? 0 : y_tmp;
						y_tmp = y_tmp > ny - 1 ? ny - 1 : y_tmp;

						int pixcel_dif = (int)abs(img_tmp[y_tmp * m_nChannels * nx + m_nChannels * x_tmp + k] - img_tmp[i * m_nChannels * nx + m_nChannels * j + k]);
						double weight_tmp = d_metrix[m + r][n + r] * r_metrix[pixcel_dif];  // combined (spatial x similarity) weight

						pixcel_sum += img_tmp[y_tmp * m_nChannels * nx + m_nChannels * x_tmp + k] * weight_tmp;
						weight_sum += weight_tmp;
					}

				pixcel_sum = pixcel_sum / weight_sum;
				m_img_outData[i * m_nChannels * nx + m_nChannels * j + k] = (uchar)pixcel_sum;

			} // one channel

		} // END ALL LOOP  
	for (i = 0; i < w_filter; i++)
		delete[] d_metrix[i];
	delete[] d_metrix;
}

OpenCV API usage:

Overview of the relevant OpenCV functions:

Bilateral filtering: bilateralFilter(InputArray src, OutputArray dst, int d, double sigmaColor, double sigmaSpace, int borderType=BORDER_DEFAULT)

   src: image to be filtered

   dst: filtered image

   d: diameter of the pixel neighborhood used during filtering

   sigmaColor: sigma of the filter in the range (color) domain

   sigmaSpace: sigma of the filter in the spatial domain

   borderType: border fill mode (BORDER_REPLICATE, BORDER_REFLECT, BORDER_DEFAULT, BORDER_REFLECT_101, BORDER_TRANSPARENT, BORDER_ISOLATED)

 

Mean filtering: blur(InputArray src, OutputArray dst, Size ksize, Point anchor=Point(-1,-1), int borderType=BORDER_DEFAULT);

   src: image to be filtered

   dst: filtered image

   ksize: size of the mean filter

   anchor: anchor point of the filter, i.e. the point of the template that moves over the image

   borderType: border fill mode (BORDER_REPLICATE, BORDER_REFLECT, BORDER_DEFAULT, BORDER_REFLECT_101, BORDER_TRANSPARENT, BORDER_ISOLATED)

 

Gaussian filtering: GaussianBlur(InputArray src, OutputArray dst, Size ksize, double sigmaX, double sigmaY=0, int borderType=BORDER_DEFAULT);

   src: image to be filtered

   dst: filtered image

   ksize: size of the Gaussian kernel

   sigmaX: Gaussian sigma in the x direction

   sigmaY: Gaussian sigma in the y direction

   borderType: border fill mode (BORDER_REPLICATE, BORDER_REFLECT, BORDER_DEFAULT, BORDER_REFLECT_101, BORDER_TRANSPARENT, BORDER_ISOLATED)

 

Median filtering: medianBlur(InputArray src, OutputArray dst, int ksize);

    src: image to be filtered

    dst: filtered image

    ksize: size of the median filter

Demo:

void bilateral_filter_show(void)
{
	cv::Mat mat1 = cv::imread("F:\\CVlibrary\\obama.jpg", CV_LOAD_IMAGE_GRAYSCALE); // load as grayscale
	if (mat1.empty())
		return;
	cv::imshow("original gray image", mat1);
	cv::Mat src = cv::imread("F:\\CVlibrary\\obama.jpg");
	cv::imshow("original color image", src);
	std::cout << "channel = " << mat1.channels() << std::endl;

	cv::Mat mat3;
	cv::bilateralFilter(src, mat3, 5, 50, 50, cv::BORDER_DEFAULT);
	cv::imshow("OpenCV bilateral filter", mat3);
	cv::Mat mat4;
	cv::blur(src, mat4, cv::Size(3, 3));
	cv::imshow("mean filter", mat4);
	cv::Mat mat5;
	cv::GaussianBlur(src, mat5, cv::Size(5, 5), 1, 1);
	cv::imshow("Gaussian filter", mat5);
	cv::Mat mat6;
	cv::medianBlur(src, mat6, 3);
	cv::imshow("median filter", mat6);
	cv::Mat mat7;
	lmt_gaussian_blur(src, mat7, cv::Size(5, 5));
	cv::imshow("my gaussian image", mat7);

	cv::waitKey(0);
}






