2019-03-05 18:19:13 · hhaowang · 235 views

Table of Contents

1. Common filters:

bilateralFilter() - bilateral filter

blur() - smoothing filter

boxFilter() - box smoothing filter

filter2D() - image convolution

GaussianBlur() - Gaussian smoothing

medianBlur() - median filtering (denoising/smoothing)

Laplacian() - Laplacian (second-order) derivative operator

Sobel() - first-, second-, and higher-order derivative operators

2. Filter parameters and usage

bilateralFilter()

blur() - smoothing

boxFilter()

filter2D()

GaussianBlur()

getDerivKernels()

getGaborKernel()

getGaussianKernel()

Laplacian()

medianBlur()

Scharr()

Sobel operator

Function list

For color space conversion, see: https://blog.csdn.net/keith_bb/article/details/53470170


1. Common filters:

  • bilateralFilter() - bilateral filter

  • blur() - smoothing filter

  • boxFilter() - box smoothing filter

  • filter2D() - image convolution

  • GaussianBlur() - Gaussian smoothing

  • medianBlur() - median filtering (denoising/smoothing)

  • Laplacian() - Laplacian (second-order) derivative operator

  • Sobel() - first-, second-, and higher-order derivative operators

2. Filter parameters and usage

bilateralFilter()

void cv::bilateralFilter ( InputArray  src,
    OutputArray  dst,
    int  d,
    double  sigmaColor,
    double  sigmaSpace,
    int  borderType = BORDER_DEFAULT 
  )    
Python:
  dst = cv.bilateralFilter( src, d, sigmaColor, sigmaSpace[, dst[, borderType]] )

This function applies bilateral filtering to the input image, as described in http://www.dai.ed.ac.uk/CVonline/LOCAL_COPIES/MANDUCHI1/Bilateral_Filtering.html. bilateralFilter can reduce unwanted noise very well while keeping edges fairly sharp. However, it is very slow compared to most filters.

Sigma values: for simplicity, you can set the two sigma values to be the same. If they are small (< 10), the filter will not have much effect; if they are large (> 150), they will have a very strong effect, making the image look "cartoonish".

Filter size: large filters (d > 5) are very slow, so use d = 5 for real-time applications, and perhaps d = 9 for offline applications that need heavy noise filtering.

This filter does not work in place.

Parameters

src Source 8-bit or floating-point, 1-channel or 3-channel image.
dst Destination image of the same size and type as src.
d Diameter of each pixel neighborhood used during filtering. If it is non-positive, it is computed from sigmaSpace.
sigmaColor Filter sigma in the color space. A larger value of the parameter means that farther colors within the pixel neighborhood (see sigmaSpace) will be mixed together, resulting in larger areas of semi-equal color.
sigmaSpace Filter sigma in the coordinate space. A larger value of the parameter means that farther pixels will influence each other as long as their colors are close enough (see sigmaColor). When d > 0, it specifies the neighborhood size regardless of sigmaSpace. Otherwise, d is proportional to sigmaSpace.
borderType Border mode used to extrapolate pixels outside of the image.
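To make the interaction of the two sigmas concrete, here is a minimal sketch (our own illustration, not OpenCV's implementation; all names are ours) of the weight a neighboring pixel receives in bilateral filtering: a spatial Gaussian times a color-range Gaussian. A neighbor across a strong edge has a large color difference, so its weight is near zero, which is why edges survive the smoothing.

```cpp
#include <cmath>

// Weight of a neighbor pixel in bilateral filtering (illustrative sketch):
// a spatial Gaussian on geometric distance times a range Gaussian on the
// intensity/color difference.
double bilateralWeight(double spatialDist, double colorDist,
                       double sigmaSpace, double sigmaColor)
{
    double ws = std::exp(-(spatialDist * spatialDist) /
                         (2.0 * sigmaSpace * sigmaSpace));
    double wc = std::exp(-(colorDist * colorDist) /
                         (2.0 * sigmaColor * sigmaColor));
    return ws * wc;   // large color difference => near-zero weight => edge kept
}
```

With sigmaColor small, a nearby pixel of similar color keeps almost full weight, while a pixel across an edge contributes essentially nothing.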

Example

#include <iostream>
#include "opencv2/imgproc.hpp"
#include "opencv2/imgcodecs.hpp"
#include "opencv2/highgui.hpp"
using namespace std;
using namespace cv;
int DELAY_CAPTION = 1500;
int DELAY_BLUR = 100;
int MAX_KERNEL_LENGTH = 31;
Mat src; Mat dst;
char window_name[] = "Smoothing Demo";
int display_caption( const char* caption );
int display_dst( int delay );
int main( int argc, char ** argv )
{
    namedWindow( window_name, WINDOW_AUTOSIZE );
    const char* filename = argc >=2 ? argv[1] : "../data/lena.jpg";
    src = imread( filename, IMREAD_COLOR );
    if(src.empty())
    {
        printf(" Error opening image\n");
        printf(" Usage: ./Smoothing [image_name -- default ../data/lena.jpg] \n");
        return -1;
    }
    if( display_caption( "Original Image" ) != 0 )
    {
        return 0;
    }
    dst = src.clone();
    if( display_dst( DELAY_CAPTION ) != 0 )
    {
        return 0;
    }
    if( display_caption( "Homogeneous Blur" ) != 0 )
    {
        return 0;
    }
    for ( int i = 1; i < MAX_KERNEL_LENGTH; i = i + 2 )
    {
        blur( src, dst, Size( i, i ), Point(-1,-1) );
        if( display_dst( DELAY_BLUR ) != 0 )
        {
            return 0;
        }
    }
    if( display_caption( "Gaussian Blur" ) != 0 )
    {
        return 0;
    }
    for ( int i = 1; i < MAX_KERNEL_LENGTH; i = i + 2 )
    {
        GaussianBlur( src, dst, Size( i, i ), 0, 0 );
        if( display_dst( DELAY_BLUR ) != 0 )
        {
            return 0;
        }
    }
    if( display_caption( "Median Blur" ) != 0 )
    {
        return 0;
    }
    for ( int i = 1; i < MAX_KERNEL_LENGTH; i = i + 2 )
    {
        medianBlur ( src, dst, i );
        if( display_dst( DELAY_BLUR ) != 0 )
        {
            return 0;
        }
    }
    if( display_caption( "Bilateral Blur" ) != 0 )
    {
        return 0;
    }
    for ( int i = 1; i < MAX_KERNEL_LENGTH; i = i + 2 )
    {
        bilateralFilter ( src, dst, i, i*2, i/2 );
        if( display_dst( DELAY_BLUR ) != 0 )
        {
            return 0;
        }
    }
    display_caption( "Done!" );
    return 0;
}
int display_caption( const char* caption )
{
    dst = Mat::zeros( src.size(), src.type() );
    putText( dst, caption,
             Point( src.cols/4, src.rows/2),
             FONT_HERSHEY_COMPLEX, 1, Scalar(255, 255, 255) );
    return display_dst(DELAY_CAPTION);
}
int display_dst( int delay )
{
    imshow( window_name, dst );
    int c = waitKey ( delay );
    if( c >= 0 ) { return -1; }
    return 0;
}

blur() - smoothing

Blurs an image using the normalized box filter.

The function smooths an image using the kernel:

    K = 1/(ksize.width*ksize.height) * [ 1 1 ... 1; 1 1 ... 1; ... ; 1 1 ... 1 ]

The call blur(src, dst, ksize, anchor, borderType) is equivalent to

boxFilter(src, dst, src.type(), ksize, anchor, true, borderType).
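The normalized box filter at a single pixel can be sketched as a plain neighborhood average (our own illustration, ignoring border handling; names are ours, not OpenCV's):

```cpp
#include <vector>

// Output of the normalized box filter at (row, col): the average of the
// ksize x ksize neighborhood.  Assumes the window fits inside the image.
double boxFilterAt(const std::vector<std::vector<double>>& img,
                   int row, int col, int ksize)
{
    double sum = 0.0;
    int r = ksize / 2;
    for (int i = -r; i <= r; ++i)
        for (int j = -r; j <= r; ++j)
            sum += img[row + i][col + j];
    return sum / (ksize * ksize);   // normalization: divide by window area
}
```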

C++ example

/* This program demonstrates usage of the Canny edge detector */

/* include related packages */
#include "opencv2/core/utility.hpp"  
#include "opencv2/imgproc.hpp"
#include "opencv2/imgcodecs.hpp"
#include "opencv2/highgui.hpp"
#include <stdio.h>


using namespace cv;   // namespace cv
using namespace std;  // standard namespace std

int edgeThresh = 1;
int edgeThreshScharr=1;
cv::Mat image, gray, blurImage, edge1, edge2, cedge;
const char* window_name1 = "Edge map : Canny default (Sobel gradient)";
const char* window_name2 = "Edge map : Canny with custom gradient (Scharr)";

static void onTrackbar(int, void*)
{/* define a trackbar callback, by using onTrackbar function*/
    blur(gray, blurImage, Size(3,3));
    // Run the edge detector on grayscale
    Canny(blurImage, edge1, edgeThresh, edgeThresh*3, 3);
    cedge = Scalar::all(0);
    image.copyTo(cedge, edge1);
    imshow(window_name1, cedge);
    Mat dx,dy;
    Scharr(blurImage,dx,CV_16S,1,0);
    Scharr(blurImage,dy,CV_16S,0,1);
    Canny( dx,dy, edge2, edgeThreshScharr, edgeThreshScharr*3 );
    cedge = Scalar::all(0);
    image.copyTo(cedge, edge2);
    imshow(window_name2, cedge);
}//onTrackbar

static void help()
{ /* help and info display */
    cout<<"\nThis sample demonstrates Canny edge detection\n"
        <<"Call:\n"
        <<"    ./edge [image_name -- Default is ../data/fruits.jpg]\n"<<endl;
}//help

const char* keys =
{
    "{help h||}{@image |../data/fruits.jpg|input image name}"
};

int main( int argc, const char** argv )
{
/* the main function */

    help();
    CommandLineParser parser(argc, argv, keys);
    string filename = parser.get<string>(0);
    image = imread(filename, IMREAD_COLOR);

    if(image.empty()) // open file check
    {
        cout<<"Cannot read image file: "
            << filename.c_str()<<endl;
        help();
        return -1;
    }
    cedge.create(image.size(), image.type());
    cvtColor(image, gray, COLOR_BGR2GRAY);
    // Create a window
    namedWindow(window_name1, 1);
    namedWindow(window_name2, 1);
    // create a toolbar
    createTrackbar("Canny threshold default", window_name1, &edgeThresh, 100, onTrackbar);
    createTrackbar("Canny threshold Scharr", window_name2, &edgeThreshScharr, 400, onTrackbar);
    // Show the image
    onTrackbar(0, 0);
    // Wait for a key stroke; the same function arranges events processing
    waitKey(0);
    return 0;
}

An example using drawContours to clean up a background segmentation result

#include "opencv2/imgproc.hpp"
#include "opencv2/videoio.hpp"
#include "opencv2/highgui.hpp"
#include "opencv2/video/background_segm.hpp"
#include <stdio.h>
#include <string>
using namespace std;
using namespace cv;
static void help()
{
    printf("\n"
            "This program demonstrated a simple method of connected components clean up of background subtraction\n"
            "When the program starts, it begins learning the background.\n"
            "You can toggle background learning on and off by hitting the space bar.\n"
            "Call\n"
            "./segment_objects [video file, else it reads camera 0]\n\n");
}
static void refineSegments(const Mat& img, Mat& mask, Mat& dst)
{
    int niters = 3;
    vector<vector<Point> > contours;
    vector<Vec4i> hierarchy;
    Mat temp;
    dilate(mask, temp, Mat(), Point(-1,-1), niters);
    erode(temp, temp, Mat(), Point(-1,-1), niters*2);
    dilate(temp, temp, Mat(), Point(-1,-1), niters);
    findContours( temp, contours, hierarchy, RETR_CCOMP, CHAIN_APPROX_SIMPLE );
    dst = Mat::zeros(img.size(), CV_8UC3);
    if( contours.size() == 0 )
        return;
    // iterate through all the top-level contours,
    // draw each connected component with its own random color
    int idx = 0, largestComp = 0;
    double maxArea = 0;
    for( ; idx >= 0; idx = hierarchy[idx][0] )
    {
        const vector<Point>& c = contours[idx];
        double area = fabs(contourArea(Mat(c)));
        if( area > maxArea )
        {
            maxArea = area;
            largestComp = idx;
        }
    }
    Scalar color( 0, 0, 255 );
    drawContours( dst, contours, largestComp, color, FILLED, LINE_8, hierarchy );
}
int main(int argc, char** argv)
{
    VideoCapture cap;
    bool update_bg_model = true;
    CommandLineParser parser(argc, argv, "{help h||}{@input||}");
    if (parser.has("help"))
    {
        help();
        return 0;
    }
    string input = parser.get<std::string>("@input");
    if (input.empty())
        cap.open(0);
    else
        cap.open(input);
    if( !cap.isOpened() )
    {
        printf("\nCan not open camera or video file\n");
        return -1;
    }
    Mat tmp_frame, bgmask, out_frame;
    cap >> tmp_frame;
    if(tmp_frame.empty())
    {
        printf("can not read data from the video source\n");
        return -1;
    }
    namedWindow("video", 1);
    namedWindow("segmented", 1);
    Ptr<BackgroundSubtractorMOG2> bgsubtractor=createBackgroundSubtractorMOG2();
    bgsubtractor->setVarThreshold(10);
    for(;;)
    {
        cap >> tmp_frame;
        if( tmp_frame.empty() )
            break;
        bgsubtractor->apply(tmp_frame, bgmask, update_bg_model ? -1 : 0);
        refineSegments(tmp_frame, bgmask, out_frame);
        imshow("video", tmp_frame);
        imshow("segmented", out_frame);
        char keycode = (char)waitKey(30);
        if( keycode == 27 )
            break;
        if( keycode == ' ' )
        {
            update_bg_model = !update_bg_model;
            printf("Learn background is in state = %d\n",update_bg_model);
        }
    }
    return 0;
}

boxFilter()

Blurs an image using the box filter.

The function smooths an image using the kernel:

    K = alpha * [ 1 1 ... 1; 1 1 ... 1; ... ; 1 1 ... 1 ]

where alpha = 1/(ksize.width*ksize.height) when normalize = true, and alpha = 1 otherwise.

Unnormalized box filter is useful for computing various integral characteristics over each pixel neighborhood, such as covariance matrices of image derivatives (used in dense optical flow algorithms, and so on). If you need to compute pixel sums over variable-size windows, use integral.



filter2D()

Convolves an image with the kernel.

The function applies an arbitrary linear filter to an image. In-place operation is supported. When the aperture is partially outside the image, the function interpolates outlier pixel values according to the specified border mode.

The function does actually compute correlation, not the convolution:

    dst(x,y) = sum over (x',y') in the kernel of kernel(x',y') * src(x + x' - anchor.x, y + y' - anchor.y)

That is, the kernel is not mirrored around the anchor point. If you need a real convolution, flip the kernel using flip and set the new anchor to (kernel.cols - anchor.x - 1, kernel.rows - anchor.y - 1).
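The correlation-vs-convolution distinction can be sketched in 1-D (our own illustration, valid region only; names are ours): true convolution is just correlation with the kernel flipped, which is exactly the kernel-flip recipe described above.

```cpp
#include <algorithm>
#include <vector>

// filter2D-style correlation (valid region only): slide the kernel without
// flipping it.
std::vector<int> correlate(const std::vector<int>& x, const std::vector<int>& k)
{
    std::vector<int> out;
    for (size_t i = 0; i + k.size() <= x.size(); ++i) {
        int s = 0;
        for (size_t j = 0; j < k.size(); ++j) s += x[i + j] * k[j];
        out.push_back(s);
    }
    return out;
}

// True convolution = correlation with the flipped kernel.
std::vector<int> convolve(const std::vector<int>& x, std::vector<int> k)
{
    std::reverse(k.begin(), k.end());
    return correlate(x, k);
}
```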

The function uses the DFT-based algorithm in case of sufficiently large kernels (~11 x 11 or larger) and the direct algorithm for small kernels.

 

sepFilter2D()

Applies a separable linear filter to an image.

The function applies a separable linear filter to the image. That is, first, every row of src is filtered with the 1D kernel kernelX. Then, every column of the result is filtered with the 1D kernel kernelY. The final result shifted by delta is stored in dst .

 


GaussianBlur()

Blurs an image using a Gaussian filter.

The function convolves the source image with the specified Gaussian kernel. In-place filtering is supported.

C++ example: the smoothing demo shown above under bilateralFilter() also demonstrates GaussianBlur(), so the code is not repeated here.

getDerivKernels()

Returns filter coefficients for computing spatial image derivatives.

The function computes and returns the filter coefficients for spatial image derivatives. When ksize=CV_SCHARR, the Scharr 3×3 kernels are generated (see Scharr). Otherwise, Sobel kernels are generated (see Sobel). The filters are normally passed to sepFilter2D.


getGaborKernel()

Returns Gabor filter coefficients.

For more details about Gabor filter equations and parameters, see: Gabor Filter.


getGaussianKernel()

 

Returns Gaussian filter coefficients.

The function computes and returns the ksize×1 matrix of Gaussian filter coefficients:

    G_i = alpha * exp( -(i - (ksize-1)/2)^2 / (2*sigma^2) ),  i = 0, ..., ksize-1,

where alpha is chosen so that the coefficients sum to 1.

Two of such generated kernels can be passed to sepFilter2D. Those functions automatically recognize smoothing kernels (a symmetrical kernel with sum of weights equal to 1) and handle them accordingly. You may also use the higher-level GaussianBlur.
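The formula above can be sketched directly (our own illustration of the same computation, not OpenCV's code; names are ours): evaluate the Gaussian at each tap, then normalize so the coefficients sum to 1, which is the "smoothing kernel" property sepFilter2D recognizes.

```cpp
#include <cmath>
#include <vector>

// 1-D Gaussian kernel: G_i proportional to exp(-(i - center)^2 / (2*sigma^2)),
// scaled so the coefficients sum to 1.
std::vector<double> gaussianKernel1D(int ksize, double sigma)
{
    std::vector<double> k(ksize);
    double center = (ksize - 1) / 2.0, sum = 0.0;
    for (int i = 0; i < ksize; ++i) {
        double d = i - center;
        k[i] = std::exp(-d * d / (2.0 * sigma * sigma));
        sum += k[i];
    }
    for (double& v : k) v /= sum;   // normalize: smoothing kernel sums to 1
    return k;
}
```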


Laplacian()

Calculates the Laplacian of an image. The function calculates the Laplacian of the source image by adding up the second x and y derivatives calculated using the Sobel operator.

C++ example:

An example using Laplace transformations for edge detection

#include "opencv2/videoio.hpp"
#include "opencv2/highgui.hpp"
#include "opencv2/imgproc.hpp"
#include <ctype.h>
#include <stdio.h>
#include <iostream>
using namespace cv;
using namespace std;
static void help()
{
    cout <<
            "\nThis program demonstrates Laplace point/edge detection using OpenCV function Laplacian()\n"
            "It captures from the camera of your choice: 0, 1, ... default 0\n"
            "Call:\n"
            "./laplace -c=<camera #, default 0> -p=<index of the frame to be decoded/captured next>\n" << endl;
}
enum {GAUSSIAN, BLUR, MEDIAN};
int sigma = 3;
int smoothType = GAUSSIAN;
int main( int argc, char** argv )
{
    VideoCapture cap;
    cv::CommandLineParser parser(argc, argv, "{ c | 0 | }{ p | | }");
    help();
    if( parser.get<string>("c").size() == 1 && isdigit(parser.get<string>("c")[0]) )
        cap.open(parser.get<int>("c"));
    else
        cap.open(parser.get<string>("c"));
    if( cap.isOpened() )
        cout << "Video " << parser.get<string>("c") <<
            ": width=" << cap.get(CAP_PROP_FRAME_WIDTH) <<
            ", height=" << cap.get(CAP_PROP_FRAME_HEIGHT) <<
            ", nframes=" << cap.get(CAP_PROP_FRAME_COUNT) << endl;
    if( parser.has("p") )
    {
        int pos = parser.get<int>("p");
        if (!parser.check())
        {
            parser.printErrors();
            return -1;
        }
        cout << "seeking to frame #" << pos << endl;
        cap.set(CAP_PROP_POS_FRAMES, pos);
    }
    if( !cap.isOpened() )
    {
        cout << "Could not initialize capturing...\n";
        return -1;
    }
    namedWindow( "Laplacian", 0 );
    createTrackbar( "Sigma", "Laplacian", &sigma, 15, 0 );
    Mat smoothed, laplace, result;
    for(;;)
    {
        Mat frame;
        cap >> frame;
        if( frame.empty() )
            break;
        int ksize = (sigma*5)|1;
        if(smoothType == GAUSSIAN)
            GaussianBlur(frame, smoothed, Size(ksize, ksize), sigma, sigma);
        else if(smoothType == BLUR)
            blur(frame, smoothed, Size(ksize, ksize));
        else
            medianBlur(frame, smoothed, ksize);
        Laplacian(smoothed, laplace, CV_16S, 5);
        convertScaleAbs(laplace, result, (sigma+1)*0.25);
        imshow("Laplacian", result);
        char c = (char)waitKey(30);
        if( c == ' ' )
            smoothType = smoothType == GAUSSIAN ? BLUR : smoothType == BLUR ? MEDIAN : GAUSSIAN;
        if( c == 'q' || c == 'Q' || c == 27 )
            break;
    }
    return 0;
}

medianBlur()

Blurs an image using the median filter.

The function smoothes an image using the median filter with the ksize×ksize aperture. Each channel of a multi-channel image is processed independently. In-place operation is supported.
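The median aperture can be sketched at a single pixel (our own illustration, ignoring border handling; names are ours): collect the ksize×ksize window, sort it, take the middle value. Unlike averaging, a single outlier never ends up in the output, which is why the median filter removes salt-and-pepper noise so well.

```cpp
#include <algorithm>
#include <vector>

// Median filter output at (row, col): the middle value of the sorted
// ksize x ksize aperture.  Assumes the window fits inside the image.
int medianAt(const std::vector<std::vector<int>>& img, int row, int col, int ksize)
{
    std::vector<int> window;
    int r = ksize / 2;
    for (int i = -r; i <= r; ++i)
        for (int j = -r; j <= r; ++j)
            window.push_back(img[row + i][col + j]);
    std::sort(window.begin(), window.end());
    return window[window.size() / 2];   // an isolated outlier never wins
}
```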

C++ example:

An example using the Hough circle detector


#include "opencv2/imgcodecs.hpp"
#include "opencv2/highgui.hpp"
#include "opencv2/imgproc.hpp"
using namespace cv;
using namespace std;
int main(int argc, char** argv)
{
    const char* filename = argc >=2 ? argv[1] : "../data/smarties.png";
    // Loads an image
    Mat src = imread( filename, IMREAD_COLOR );
    // Check if image is loaded fine
    if(src.empty()){
        printf(" Error opening image\n");
        printf(" Program Arguments: [image_name -- default %s] \n", filename);
        return -1;
    }
    Mat gray;
    cvtColor(src, gray, COLOR_BGR2GRAY);
    medianBlur(gray, gray, 5);
    vector<Vec3f> circles;
    HoughCircles(gray, circles, HOUGH_GRADIENT, 1,
                 gray.rows/16,  // change this value to detect circles with different distances to each other
                 100, 30, 1, 30 // change the last two parameters
            // (min_radius & max_radius) to detect larger circles
    );
    for( size_t i = 0; i < circles.size(); i++ )
    {
        Vec3i c = circles[i];
        Point center = Point(c[0], c[1]);
        // circle center
        circle( src, center, 1, Scalar(0,100,100), 3, LINE_AA);
        // circle outline
        int radius = c[2];
        circle( src, center, radius, Scalar(255,0,255), 3, LINE_AA);
    }
    imshow("detected circles", src);
    waitKey();
    return 0;
}

Scharr()

Calculates the first x- or y- image derivative using Scharr operator.

The function computes the first x- or y- spatial image derivative using the Scharr operator. The call

    Scharr(src, dst, ddepth, dx, dy, scale, delta, borderType)

is equivalent to

    Sobel(src, dst, ddepth, dx, dy, CV_SCHARR, scale, delta, borderType).

C++ example: the Canny edge detector demo shown above under blur() uses Scharr() to compute custom x- and y- gradients, so the code is not repeated here.

 


Sobel算子

Calculates the first, second, third, or mixed image derivatives using an extended Sobel operator.

In all cases except one, the ksize×ksize separable kernel is used to calculate the derivative. When ksize = 1, the 3×1 or 1×3 kernel is used (that is, no Gaussian smoothing is done). ksize = 1 can only be used for the first or the second x- or y- derivatives.
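Applying the standard 3×3 Sobel x-kernel at one pixel can be sketched as follows (our own illustration, ignoring borders, scale, and delta; names are ours). On a horizontal intensity ramp the response is a constant proportional to the slope.

```cpp
#include <vector>

// First x-derivative at (row, col) with the 3x3 Sobel kernel
// [-1 0 1; -2 0 2; -1 0 1].  Assumes the 3x3 window fits in the image.
int sobelXAt(const std::vector<std::vector<int>>& img, int row, int col)
{
    static const int kx[3][3] = { {-1, 0, 1}, {-2, 0, 2}, {-1, 0, 1} };
    int s = 0;
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            s += kx[i][j] * img[row - 1 + i][col - 1 + j];
    return s;
}
```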

C++ example:

Sample code using Sobel and/or Scharr OpenCV functions to make a simple Edge Detector

#include "opencv2/imgproc.hpp"
#include "opencv2/imgcodecs.hpp"
#include "opencv2/highgui.hpp"
#include <iostream>
using namespace cv;
using namespace std;
int main( int argc, char** argv )
{
  cv::CommandLineParser parser(argc, argv,
                               "{@input   |../data/lena.jpg|input image}"
                               "{ksize   k|1|ksize (hit 'K' to increase its value)}"
                               "{scale   s|1|scale (hit 'S' to increase its value)}"
                               "{delta   d|0|delta (hit 'D' to increase its value)}"
                               "{help    h|false|show help message}");
  cout << "The sample uses Sobel or Scharr OpenCV functions for edge detection\n\n";
  parser.printMessage();
  cout << "\nPress 'ESC' to exit program.\nPress 'R' to reset values ( ksize will be -1 equal to Scharr function )";
  // First we declare the variables we are going to use
  Mat image,src, src_gray;
  Mat grad;
  const String window_name = "Sobel Demo - Simple Edge Detector";
  int ksize = parser.get<int>("ksize");
  int scale = parser.get<int>("scale");
  int delta = parser.get<int>("delta");
  int ddepth = CV_16S;
  String imageName = parser.get<String>("@input");
  // As usual we load our source image (src)
  image = imread( imageName, IMREAD_COLOR ); // Load an image
  // Check if image is loaded fine
  if( image.empty() )
  {
    printf("Error opening image: %s\n", imageName.c_str());
    return 1;
  }
  for (;;)
  {
    // Remove noise by blurring with a Gaussian filter ( kernel size = 3 )
    GaussianBlur(image, src, Size(3, 3), 0, 0, BORDER_DEFAULT);
    // Convert the image to grayscale
    cvtColor(src, src_gray, COLOR_BGR2GRAY);
    Mat grad_x, grad_y;
    Mat abs_grad_x, abs_grad_y;
    Sobel(src_gray, grad_x, ddepth, 1, 0, ksize, scale, delta, BORDER_DEFAULT);
    Sobel(src_gray, grad_y, ddepth, 0, 1, ksize, scale, delta, BORDER_DEFAULT);
    // converting back to CV_8U
    convertScaleAbs(grad_x, abs_grad_x);
    convertScaleAbs(grad_y, abs_grad_y);
    addWeighted(abs_grad_x, 0.5, abs_grad_y, 0.5, 0, grad);
    imshow(window_name, grad);
    char key = (char)waitKey(0);
    if(key == 27)
    {
      return 0;
    }
    if (key == 'k' || key == 'K')
    {
      ksize = ksize < 30 ? ksize+2 : -1;
    }
    if (key == 's' || key == 'S')
    {
      scale++;
    }
    if (key == 'd' || key == 'D')
    {
      delta++;
    }
    if (key == 'r' || key == 'R')
    {
      scale =  1;
      ksize = -1;
      delta =  0;
    }
  }
  return 0;
}


Function list

void  cv::bilateralFilter (InputArray src, OutputArray dst, int d, double sigmaColor, double sigmaSpace, int borderType=BORDER_DEFAULT)
  Applies the bilateral filter to an image. 
 
void  cv::blur (InputArray src, OutputArray dst, Size ksize, Point anchor=Point(-1,-1), int borderType=BORDER_DEFAULT)
  Blurs an image using the normalized box filter. 
 
void  cv::boxFilter (InputArray src, OutputArray dst, int ddepth, Size ksize, Point anchor=Point(-1,-1), bool normalize=true, int borderType=BORDER_DEFAULT)
  Blurs an image using the box filter. 
 
void  cv::buildPyramid (InputArray src, OutputArrayOfArrays dst, int maxlevel, int borderType=BORDER_DEFAULT)
  Constructs the Gaussian pyramid for an image. 
 
void  cv::dilate (InputArray src, OutputArray dst, InputArray kernel, Point anchor=Point(-1,-1), int iterations=1, int borderType=BORDER_CONSTANT, const Scalar &borderValue=morphologyDefaultBorderValue())
  Dilates an image by using a specific structuring element. 
 
void  cv::erode (InputArray src, OutputArray dst, InputArray kernel, Point anchor=Point(-1,-1), int iterations=1, int borderType=BORDER_CONSTANT, const Scalar &borderValue=morphologyDefaultBorderValue())
  Erodes an image by using a specific structuring element. 
 
void  cv::filter2D (InputArray src, OutputArray dst, int ddepth, InputArray kernel, Point anchor=Point(-1,-1), double delta=0, int borderType=BORDER_DEFAULT)
  Convolves an image with the kernel. 
 
void  cv::GaussianBlur (InputArray src, OutputArray dst, Size ksize, double sigmaX, double sigmaY=0, int borderType=BORDER_DEFAULT)
  Blurs an image using a Gaussian filter. 
 
void  cv::getDerivKernels (OutputArray kx, OutputArray ky, int dx, int dy, int ksize, bool normalize=false, int ktype=CV_32F)
  Returns filter coefficients for computing spatial image derivatives. 
 
Mat  cv::getGaborKernel (Size ksize, double sigma, double theta, double lambd, double gamma, double psi=CV_PI *0.5, int ktype=CV_64F)
  Returns Gabor filter coefficients.
 
Mat  cv::getGaussianKernel (int ksize, double sigma, int ktype=CV_64F)
  Returns Gaussian filter coefficients. 
 
Mat  cv::getStructuringElement (int shape, Size ksize, Point anchor=Point(-1,-1))
  Returns a structuring element of the specified size and shape for morphological operations. 
 
void  cv::Laplacian (InputArray src, OutputArray dst, int ddepth, int ksize=1, double scale=1, double delta=0, int borderType=BORDER_DEFAULT)
  Calculates the Laplacian of an image. 
 
void  cv::medianBlur (InputArray src, OutputArray dst, int ksize)
  Blurs an image using the median filter. 
 
static Scalar  cv::morphologyDefaultBorderValue ()
  returns "magic" border value for erosion and dilation. It is automatically transformed to Scalar::all(-DBL_MAX) for dilation. 
 
void  cv::morphologyEx (InputArray src, OutputArray dst, int op, InputArray kernel, Point anchor=Point(-1,-1), int iterations=1, int borderType=BORDER_CONSTANT, const Scalar &borderValue=morphologyDefaultBorderValue())
  Performs advanced morphological transformations. 
 
void  cv::pyrDown (InputArray src, OutputArray dst, const Size &dstsize=Size(), int borderType=BORDER_DEFAULT)
  Blurs an image and downsamples it. 
 
void  cv::pyrMeanShiftFiltering (InputArray src, OutputArray dst, double sp, double sr, int maxLevel=1, TermCriteria termcrit=TermCriteria(TermCriteria::MAX_ITER+TermCriteria::EPS, 5, 1))
  Performs initial step of meanshift segmentation of an image. 
 
void  cv::pyrUp (InputArray src, OutputArray dst, const Size &dstsize=Size(), int borderType=BORDER_DEFAULT)
  Upsamples an image and then blurs it. 
 
void  cv::Scharr (InputArray src, OutputArray dst, int ddepth, int dx, int dy, double scale=1, double delta=0, int borderType=BORDER_DEFAULT)
  Calculates the first x- or y- image derivative using Scharr operator. 
 
void  cv::sepFilter2D (InputArray src, OutputArray dst, int ddepth, InputArray kernelX, InputArray kernelY, Point anchor=Point(-1,-1), double delta=0, int borderType=BORDER_DEFAULT)
  Applies a separable linear filter to an image. 
 
void  cv::Sobel (InputArray src, OutputArray dst, int ddepth, int dx, int dy, int ksize=3, double scale=1, double delta=0, int borderType=BORDER_DEFAULT)
  Calculates the first, second, third, or mixed image derivatives using an extended Sobel operator. 
 
void  cv::spatialGradient (InputArray src, OutputArray dx, OutputArray dy, int ksize=3, int borderType=BORDER_DEFAULT)
  Calculates the first order image derivative in both x and y using a Sobel operator. 
 
void  cv::sqrBoxFilter (InputArray _src, OutputArray _dst, int ddepth, Size ksize, Point anchor=Point(-1, -1), bool normalize=true, int borderType=BORDER_DEFAULT)
  Calculates the normalized sum of squares of the pixel values overlapping the filter. 

Original source:

https://docs.opencv.org/3.4.3/d4/d86/group__imgproc__filter.html#gae84c92d248183bd92fa713ce51cc3599


2017-10-28 18:45:23 · qq_30356613 · 15540 views

Filters are an important topic in image processing courses and fall roughly into two classes: spatial-domain filters and frequency-domain filters. This article covers four commonly used filters, the median filter, the mean filter, the Gaussian filter, and the bilateral filter, and implements them on top of OpenCV. Spatial filters are generally applied by convolving a template (kernel) with the source image; please study the relevant background on convolution on your own if needed.

Theory:

Linear filter formula: g(i,j) = sum over (k,l) of f(i+k, j+l) * h(k,l), where h is the filter template. The mean filter and the Gaussian filter are both linear filters, so let us look at these two first.

Mean filter:

Template: a ksize×ksize matrix of ones scaled by 1/(ksize*ksize), e.g. the 3×3 template (1/9) * [1 1 1; 1 1 1; 1 1 1].

Starting from the first element of the image, the template is convolved with the original image. Intuitively, mean filtering replaces each pixel's gray value with the average of the gray values of its neighboring pixels.

Gaussian filter:

Template: generated by the Gaussian kernel function.

Gaussian kernel function: G(x,y) = (1/(2*pi*sigma^2)) * exp( -((x-x0)^2 + (y-y0)^2) / (2*sigma^2) )

For example, a common normalized 3×3 Gaussian template is approximately (1/16) * [1 2 1; 2 4 2; 1 2 1].
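The template construction just described can be sketched without OpenCV (our own illustration, paralleling the generate_gassian_kernel function below; names are ours): evaluate the 2-D Gaussian on the grid, then divide by the total so the weights sum to 1.

```cpp
#include <cmath>
#include <vector>

// Build a normalized size x size Gaussian template centered on the grid.
std::vector<std::vector<double>> gaussianKernel2D(int size, double sigma)
{
    std::vector<std::vector<double>> k(size, std::vector<double>(size));
    double sum = 0.0, c = (size - 1) / 2.0;
    for (int i = 0; i < size; ++i)
        for (int j = 0; j < size; ++j) {
            double d2 = (i - c) * (i - c) + (j - c) * (j - c);
            k[i][j] = std::exp(-d2 / (2.0 * sigma * sigma));
            sum += k[i][j];
        }
    for (auto& row : k)
        for (double& v : row) v /= sum;   // normalize so the weights sum to 1
    return k;
}
```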


Median filtering: also a spatial-domain filter. The main idea is to take the neighboring pixels, sort them by gray value, and use the middle value of the sorted list as the gray value of the pixel.

Bilateral filtering: each neighbor's weight is the product of a spatial-distance Gaussian and a gray-level (range) Gaussian, so flat regions are smoothed while edges are preserved.


C++ implementation:

static void exchange(int& a, int& b)
{	
	int t = 0;
	t = a;
	a = b;
	b = t;
}

static void bubble_sort(int* K, int length)
{
	for (int i = 0; i < length; i++)
		for (int j = i + 1; j < length; j++)
		{
			if (K[i] > K[j])
				exchange(K[i], K[j]);
		}
}
/// Generate a 2-D Gaussian kernel
static cv::Mat generate_gassian_kernel(double u, double sigma, cv::Size size)
{
	int width = size.width;
	int height = size.height;
	cv::Mat gassian_kernel(cv::Size(width, height), CV_64FC1);
	double sum = 0;
	double sum_sum = 0;
	for (int i = 0; i < width; i++)
		for (int j = 0; j < height; j++)
		{
			sum = 1.0 / 2.0 / CV_PI / sigma / sigma * exp(-1.0 * ((i - width / 2)*(i - width / 2) + (j - height / 2)*(j - height / 2)) / 2.0 / sigma / sigma);
			sum_sum += sum;
			gassian_kernel.ptr<double>(i)[j] = sum;
		}
	for (int i = 0; i < width; i++)
		for (int j = 0; j < height; j++)
		{
			gassian_kernel.ptr<double>(i)[j] /= sum_sum;
		}
	return gassian_kernel;
}
/// Mean (averaging) filter
void lmt_main_blur(cv::Mat& img_in, cv::Mat& img_out, int kernel_size)
{
	img_out = img_in.clone();
	cv::Mat mat1;
	cv::copyMakeBorder(img_in, mat1, kernel_size, kernel_size, kernel_size, kernel_size, cv::BORDER_REPLICATE);

	int cols = mat1.cols;
	int rows = mat1.rows;
	int channels = img_out.channels();
	const uchar* const pt = mat1.ptr<uchar>(0);
	uchar* pt_out = img_out.ptr<uchar>(0);

	for (int i = kernel_size; i < rows - kernel_size; i++)
	{
		for (int j = kernel_size; j < cols - kernel_size; j++)
		{
			if (channels == 1)
			{
				long long int sum_pixel = 0;
				for (int m = -1 * kernel_size; m < kernel_size; m++)
					for (int n = -1 * kernel_size; n < kernel_size; n++)
					{
						sum_pixel += pt[(i + m)*cols + (j + n)];
					}
				img_out.ptr<uchar>(i - kernel_size)[j - kernel_size] = (double)sum_pixel / (kernel_size*kernel_size * 4);
			}
			else if (channels == 3)
			{
				long long int sum_pixel = 0;
				long long int sum_pixel1 = 0;
				long long int sum_pixel2 = 0;
				for (int m = -1 * kernel_size; m < kernel_size; m++)
					for (int n = -1 * kernel_size; n < kernel_size; n++)
					{
						sum_pixel += pt[((i + m)*cols + (j + n))*channels + 0];
						sum_pixel1 += pt[((i + m)*cols + (j + n))*channels + 1];
						sum_pixel2 += pt[((i + m)*cols + (j + n))*channels + 2];
					}
				img_out.ptr<uchar>(i - kernel_size)[(j - kernel_size)*channels + 0] = (double)sum_pixel / (double)(kernel_size*kernel_size * 4);
				img_out.ptr<uchar>(i - kernel_size)[(j - kernel_size)*channels + 1] = (double)sum_pixel1 / (double)(kernel_size*kernel_size * 4);
				img_out.ptr<uchar>(i - kernel_size)[(j - kernel_size)*channels + 2] = (double)sum_pixel2 / (double)(kernel_size*kernel_size * 4);
			}
		}
	}

}
/// Median filter
void lmt_median_blur(cv::Mat& img_in, cv::Mat& img_out, int kernel_size)
{
	img_out = img_in.clone();
	cv::Mat mat1;
	cv::copyMakeBorder(img_in, mat1, kernel_size, kernel_size, kernel_size, kernel_size, cv::BORDER_REPLICATE);

	int cols = mat1.cols;
	int rows = mat1.rows;
	int channels = img_out.channels();

	cv::Mat mat[3];
	cv::Mat mat_out[3];
	cv::split(mat1, mat);
	cv::split(img_out, mat_out);
	for (int k = 0; k < 3; k++)
	{
		const uchar* const pt = mat[k].ptr<uchar>(0);
		uchar* pt_out = mat_out[k].ptr<uchar>(0);
		for (int i = kernel_size; i < rows - kernel_size; i++)
		{
			for (int j = kernel_size; j < cols - kernel_size; j++)
			{
				long long int sum_pixel = 0;
				int* K = new int[kernel_size*kernel_size * 4];
				int ker_num = 0;
				for (int m = -1 * kernel_size; m < kernel_size; m++)
					for (int n = -1 * kernel_size; n < kernel_size; n++)
					{
						K[ker_num] = pt[(i + m)*cols + (j + n)];
						ker_num++;
					}
				bubble_sort(K, ker_num);
				mat_out[k].ptr<uchar>(i - kernel_size)[j - kernel_size] = K[ker_num / 2];
				delete[] K;  // release the window buffer allocated above
			}
		}
	}
	cv::merge(mat_out, 3, img_out);
}
/// Gaussian filter
void lmt_gaussian_blur(cv::Mat& img_src, cv::Mat& img_dst, cv::Size kernel_size)
{
	img_dst = cv::Mat(cv::Size(img_src.cols, img_src.rows), img_src.type());
	int cols = img_src.cols;
	int rows = img_src.rows;
	int channels = img_src.channels();
	cv::Mat gassian_kernel = generate_gassian_kernel(0, 1, kernel_size);
	int width = kernel_size.width / 2;
	int height = kernel_size.height / 2;
	for (int i = height; i < rows - height; i++)
	{
		for (int j = width; j < cols - width; j++)
		{
			for (int k = 0; k < channels; k++)
			{
				double sum = 0.0;
				for (int m = -height; m <= height; m++)
				{
					for (int n = -width; n <= width; n++)
					{
						sum += (double)(img_src.ptr<uchar>(i + m)[(j + n)*channels + k]) * gassian_kernel.ptr<double>(height + m)[width + n];
					}
				}
				if (sum > 255.0)
					sum = 255;
				if (sum < 0.0)
					sum = 0;
				img_dst.ptr<uchar>(i)[j*channels + k] = (uchar)sum;
			}
		}
	}

	
}
/// Bilateral filter
void lmt_bilateral_filter(cv::Mat& img_in, cv::Mat& img_out, const int r, double sigma_d, double sigma_r)
{
	int i, j, m, n, k;
	int nx = img_in.cols, ny = img_in.rows, m_nChannels = img_in.channels();
	const int w_filter = 2 * r + 1; // filter side length  

	double gaussian_d_coeff = -0.5 / (sigma_d * sigma_d);
	double gaussian_r_coeff = -0.5 / (sigma_r * sigma_r);
	double  **d_metrix = new double *[w_filter];
	for (int i = 0; i < w_filter; ++i)
		d_metrix[i] = new double[w_filter];
	
	double r_metrix[256];  // similarity weight  
	img_out = cv::Mat(img_in.size(),img_in.type());
	uchar* m_imgData = img_in.ptr<uchar>(0);
	uchar* m_img_outData = img_out.ptr<uchar>(0);
	// copy the original image  
	double* img_tmp = new double[m_nChannels * nx * ny];
	for (i = 0; i < ny; i++)
		for (j = 0; j < nx; j++)
			for (k = 0; k < m_nChannels; k++)
			{
				img_tmp[i * m_nChannels * nx + m_nChannels * j + k] = m_imgData[i * m_nChannels * nx + m_nChannels * j + k];
			}

	// compute spatial weight  
	for (i = -r; i <= r; i++)
		for (j = -r; j <= r; j++)
		{
			int x = j + r;
			int y = i + r;

			d_metrix[y][x] = exp((i * i + j * j) * gaussian_d_coeff);
		}

	// compute similarity weight  
	for (i = 0; i < 256; i++)
	{
		r_metrix[i] = exp(i * i * gaussian_r_coeff);
	}

	// bilateral filter  
	for (i = 0; i < ny; i++)
		for (j = 0; j < nx; j++)
		{
			for (k = 0; k < m_nChannels; k++)
			{
				double weight_sum, pixcel_sum;
				weight_sum = pixcel_sum = 0.0;

				for (m = -r; m <= r; m++)
					for (n = -r; n <= r; n++)
					{
						if (m*m + n*n > r*r) continue;

						int x_tmp = j + n;
						int y_tmp = i + m;

						x_tmp = x_tmp < 0 ? 0 : x_tmp;
						x_tmp = x_tmp > nx - 1 ? nx - 1 : x_tmp;   // border handling: replicate  
						y_tmp = y_tmp < 0 ? 0 : y_tmp;
						y_tmp = y_tmp > ny - 1 ? ny - 1 : y_tmp;

						int pixcel_dif = (int)abs(img_tmp[y_tmp * m_nChannels * nx + m_nChannels * x_tmp + k] - img_tmp[i * m_nChannels * nx + m_nChannels * j + k]);
						double weight_tmp = d_metrix[m + r][n + r] * r_metrix[pixcel_dif];  // combined weight  

						pixcel_sum += img_tmp[y_tmp * m_nChannels * nx + m_nChannels * x_tmp + k] * weight_tmp;
						weight_sum += weight_tmp;
					}

				pixcel_sum = pixcel_sum / weight_sum;
				m_img_outData[i * m_nChannels * nx + m_nChannels * j + k] = (uchar)pixcel_sum;

			} // one channel  

		} // END ALL LOOP  
	for (i = 0; i < w_filter; i++)
		delete[] d_metrix[i];
	delete[] d_metrix;
	delete[] img_tmp;
}

Implementation with the OpenCV API:

A brief overview of the relevant OpenCV functions:

Bilateral filter: bilateralFilter(InputArray src, OutputArray dst, int d, double sigmaColor, double sigmaSpace, int borderType=BORDER_DEFAULT)

   src: image to be filtered

   dst: filtered image

   d: diameter of the filter neighbourhood

   sigmaColor: sigma of the filter in the range (color) domain

   sigmaSpace: sigma of the filter in the spatial domain

   borderType: border extrapolation mode: BORDER_REPLICATE, BORDER_REFLECT, BORDER_DEFAULT, BORDER_REFLECT_101, BORDER_TRANSPARENT, BORDER_ISOLATED

Mean filter: blur(InputArray src, OutputArray dst, Size ksize, Point anchor=Point(-1,-1), int borderType=BORDER_DEFAULT)

   src: image to be filtered

   dst: filtered image

   ksize: size of the mean filter

   anchor: anchor of the filter, i.e. the template point aligned with the current pixel; Point(-1,-1) means the kernel centre

   borderType: border extrapolation mode: BORDER_REPLICATE, BORDER_REFLECT, BORDER_DEFAULT, BORDER_REFLECT_101, BORDER_TRANSPARENT, BORDER_ISOLATED

Gaussian filter: GaussianBlur(InputArray src, OutputArray dst, Size ksize, double sigmaX, double sigmaY=0, int borderType=BORDER_DEFAULT)

   src: image to be filtered

   dst: filtered image

   ksize: size of the Gaussian filter, Size(x, y); x and y must be odd integers

   sigmaX: Gaussian sigma of the filter in the x direction

   sigmaY: Gaussian sigma of the filter in the y direction

   borderType: border extrapolation mode: BORDER_REPLICATE, BORDER_REFLECT, BORDER_DEFAULT, BORDER_REFLECT_101, BORDER_TRANSPARENT, BORDER_ISOLATED

Median filter: medianBlur(InputArray src, OutputArray dst, int ksize)

    src: image to be filtered

    dst: filtered image

    ksize: size of the median filter

Demo:

void bilateral_filter_show(void)
{
	cv::Mat mat1 = cv::imread("F:\\CVlibrary\\obama.jpg", CV_LOAD_IMAGE_GRAYSCALE); // load as a grayscale image
	if (mat1.empty())
		return;
	cv::imshow("grayscale input", mat1); 
	cv::Mat src = cv::imread("F:\\CVlibrary\\obama.jpg");
	cv::imshow("original color image", src);
	std::cout << "channel = " << mat1.channels() << std::endl;
	
	cv::Mat mat3;
	cv::bilateralFilter(src, mat3, 5, 50, 50, cv::BORDER_DEFAULT);
	cv::imshow("OpenCV bilateral filter", mat3);
	cv::Mat mat4;
	cv::blur(src, mat4, cv::Size(3, 3));
	cv::imshow("mean filter", mat4);
	cv::Mat mat5;
	cv::GaussianBlur(src, mat5, cv::Size(5, 5), 1, 1);
	cv::imshow("Gaussian filter", mat5);
	cv::Mat mat6;
	cv::medianBlur(src, mat6, 3);
	cv::imshow("median filter", mat6); 
	cv::Mat mat7;
	lmt_gaussian_blur(src, mat7, cv::Size(5, 5));
	cv::imshow("my gaussian image", mat7);

	cv::waitKey(0);
}







2020-01-02 22:28:56 jackzhang11 阅读数 171

In image processing and computer vision we often need to preprocess the raw image. Image filtering is a common technique for this: it can emphasize certain features or remove unwanted components. A chosen filter is slid over the original image and convolved with it, so that the values of the neighbouring pixels determine each output pixel. The most common operators fall into two classes, smoothing/denoising filters and edge-detection filters, both of which are covered below.

Smoothing filters

1. Gaussian filtering

The Gaussian filter smooths the image and is used to remove noise.

It replaces the centre pixel by a weighted average of the surrounding pixels, with weights drawn from a Gaussian distribution. Such a 2-D weight array is usually called a convolution kernel or filter.

However, the image width and height are generally not integer multiples of the filter size, and we want the output image to have the same dimensions as the input, so the image border is padded with 0s (how many depends on the filter and image sizes); this is called zero padding. The kernel weights g must also be normalized (\sum g = 1).

The weights are computed from the Gaussian distribution: g(x,y,\sigma)=\frac{1}{2\pi\sigma^2}\ e^{-\frac{x^2+y^2}{2\sigma^2}}

Here the x and y coordinates are measured from the centre of the filter; for example, the cell one step to the right and one step up from the centre has coordinates (1, -1).

With standard deviation \sigma=1.3, the 8-neighbourhood Gaussian filter is approximately: K=\frac{1}{16}\ \left[ \begin{matrix} 1 & 2 & 1 \\ 2 & 4 & 2 \\ 1 & 2 & 1 \end{matrix} \right]

2. Median filtering

The median filter also smooths the image: it removes noise to some extent, though fine detail is blurred as well. It outputs the median of the pixel values inside the filter window (assumed here to be 3\times3). Zero padding is again used so that the output has the same size as the input.

3. Mean filtering

Like the median filter, the mean filter is used to denoise the image. The only difference is that, for the pixels inside the filter window, it outputs their mean.

Edge-detection filters

1. Sobel filter

The Sobel filter extracts edges in a specific direction (vertical or horizontal) and is defined as follows.

Horizontal Sobel operator: K=\left[ \begin{matrix} 1&2&1\\ 0&0&0\\ -1&-2&-1 \end{matrix} \right]  Vertical Sobel operator: K=\left[ \begin{matrix} 1&0&-1\\ 2&0&-2\\ 1&0&-1 \end{matrix} \right]

The Sobel operator approximates the gradient between neighbouring pixels. When the filter slides over a flat background region, the convolution result is very small; when it crosses the boundary between background and foreground, the result is large. This makes it a good extractor of edge information.

2. Prewitt filter

The Prewitt filter is another edge-detection filter, defined as follows.

Horizontal Prewitt operator: K=\left[ \begin{matrix} -1&-1&-1\\ 0&0&0\\ 1&1&1 \end{matrix} \right]  Vertical Prewitt operator: K=\left[ \begin{matrix} -1&0&1\\ -1&0&1\\ -1&0&1 \end{matrix} \right]

Unlike Prewitt, the Sobel operator weights its samples: the cells directly above and below (or left and right of) the centre get weight 2, because those pixels are closer to the centre, while the diagonal cells get weight 1; Prewitt has no such weighting. Overall, Sobel can be seen as a refinement of Prewitt, and its results are usually somewhat better.

3. Laplacian filter

Unlike the first-order Sobel and Prewitt operators, the Laplacian filter detects edges by taking the second derivative of the image intensity. Since a digital image is discrete, the first derivatives in the x and y directions are computed as I_x(x,y)=\frac{I(x+1,y)-I(x,y)}{(x+1)-x}=I(x+1,y)-I(x,y) and I_y(x,y)=\frac{I(x,y+1)-I(x,y)}{(y+1)-y}=I(x,y+1)-I(x,y). The second derivatives are therefore I_{xx}(x,y)=I_x(x,y)-I_x(x-1,y)=[I(x+1,y)-I(x,y)]-[I(x,y)-I(x-1,y)]=I(x+1,y)-2\,I(x,y)+I(x-1,y) and, by the same argument, I_{yy}(x,y)=I(x,y+1)-2\,I(x,y)+I(x,y-1). The Laplacian is then \nabla^2 I(x,y)=I_{xx}(x,y)+I_{yy}(x,y)=I(x-1,y)+I(x,y-1)-4\,I(x,y)+I(x+1,y)+I(x,y+1). Written as a convolution kernel this is: K=\left[ \begin{matrix} 0&1&0\\ 1&-4&1\\ 0&1&0 \end{matrix} \right]

Reference: https://github.com/gzr2017/ImageProcessing100Wen

2018-08-30 10:19:31 csdnforyou 阅读数 12457

1. Introduction to filters

Filters are an important part of any image-processing course and fall broadly into two classes: spatial-domain filters and frequency-domain filters. This article introduces four commonly used filters (the median, mean, Gaussian and bilateral filters) and implements them with OpenCV. Spatial-domain filters can generally be applied by convolving the original image with a template.

Note: spatial-domain vs. frequency-domain filters

1) The spatial domain is the image itself; spatial operations act directly on the image's pixels.

2) An image transform maps the image from the spatial domain into some transform domain (such as the frequency domain of the Fourier transform), processes it there, and maps the result back to the spatial domain by the inverse transform.

3) Images are strongly correlated in the spatial domain; orthogonal transforms can turn computations that are complex in the spatial domain into simpler ones in the frequency domain.

4) Analysing frequency-domain properties also makes it easier to characterise the image and to apply specialised processing.

2. Theory

Spatial filtering of an image comes in exactly two flavours: linear filtering and nonlinear filtering.

Filtering means computing, for every pixel, some function of the pixels within a certain range around it; that range is called the mask. If the computation only applies a simple operation to each gray value (such as multiplying by a weight) and then sums, it is linear filtering. If the operation on the gray values is more involved than a weighted sum, it is nonlinear filtering; taking the maximum, minimum or median of a 3x3 neighbourhood, for example, is not a simple weighted sum and is therefore nonlinear.

Common linear filters include the mean, Gaussian, box and Laplacian filters; linear filters usually differ only in their template coefficients.

Nonlinear filters derive the result from a logical relation between the original image and the template, e.g. the min/max filters, the median filter and the bilateral filter.

1. Linear filtering

Linear filters can be written as a convolution of the image with a template: g(i,j)=\sum_{k,l} f(i+k,j+l)\,h(k,l). The mean filter and the Gaussian filter are both linear filters; we look at these two first.

Mean filter:

Template (3×3): K=\frac{1}{9}\left[ \begin{matrix} 1&1&1\\ 1&1&1\\ 1&1&1 \end{matrix} \right]

Starting from the first element of the image, the template is convolved with the original image. Intuitively, mean filtering replaces each pixel's gray value with the average of its neighbours' gray values.

Gaussian filter:

Gaussian filtering mainly targets Gaussian noise and suppresses the random noise introduced when the image is captured. It models the relation between a pixel and its neighbourhood as a Gaussian distribution and convolves the image with a Gaussian kernel.

Template: generated from the Gaussian kernel function g(x,y,\sigma)=\frac{1}{2\pi\sigma^2}\,e^{-\frac{x^2+y^2}{2\sigma^2}}

For example, the 3×3 Gaussian template: K=\frac{1}{16}\left[ \begin{matrix} 1&2&1\\ 2&4&2\\ 1&2&1 \end{matrix} \right]

Median filter: also a spatial-domain filter. The main idea is to take the pixels in the neighbourhood, sort them, and use the gray value of the middle element as the output gray value for that pixel.

As an order-statistics filter, it suppresses salt-and-pepper noise very well.

For details, see: salt-and-pepper noise and median filtering in digital image processing.

The median filter sorts all the pixels inside the window and takes the median as the value of the window's centre pixel. It suppresses salt-and-pepper and impulse noise particularly well while still preserving edge detail.

The bilateral filter is also a nonlinear filtering method. It is a compromise that combines the spatial proximity and the pixel-value similarity of the image, taking spatial information and gray-level similarity into account at the same time to remove noise while preserving edges. It is simple, non-iterative and local, and it has one more Gaussian variance (σd) than the Gaussian filter.

w(x, y) is the weighting coefficient, given by the product of the domain (spatial) kernel and the range (similarity) kernel.

Note:

1) Mean blurring cannot avoid losing edge information, because it is based on equal (average) weights.

2) Gaussian blurring partly overcomes this defect, but not completely, because it still ignores differences in pixel values.

3) Bilateral (Gaussian) blurring is an edge-preserving filtering method: it avoids losing edge information and keeps the image contours intact.

3. Experiments

filter results

Conclusion: the filtering results show that different algorithms affect the image very differently; some change it drastically, some leave it almost identical to the original. In practice the filter should be chosen according to the noise characteristics and the desired image and edge properties; only then can image filtering deliver its full benefit.


The effects of Gaussian, median, mean and bilateral filtering

#include "cv.h"
#include "highgui.h"
#include <iostream>

using namespace std;
using namespace cv;

int main(int argc, char* argv[])
{
        Mat src = imread("misaka.jpg");
        Mat dst;

        // parameters are listed in order

        // Gaussian filtering
        // src: input image
        // dst: output image
        // Size(5,5): template size, must be odd
        // sigma in the x direction
        // sigma in the y direction
        GaussianBlur(src,dst,Size(5,5),0,0);
        imwrite("gauss.jpg",dst);
        
        // median filtering
        // src: input image
        // dst: output image
        // template width, must be odd
        medianBlur(src,dst,3);
        imwrite("med.jpg",dst);
        
        // mean filtering
        // src: input image
        // dst: output image
        // template size
        // Point(-1,-1): anchor; negative values mean the kernel centre
        blur(src,dst,Size(3,3),Point(-1,-1));
        imwrite("mean.jpg",dst);

        // bilateral filtering
        // src: input image
        // dst: output image
        // filter radius
        // sigma in the color space
        // sigma in the coordinate space
        bilateralFilter(src,dst,5,10.0,2.0); // with sigmas this small (<10) the filter has little visible effect
        imwrite("bil.jpg",dst);

        waitKey();

        return 0;
}
2012-02-27 13:28:13 lj695242104 阅读数 25816

Description: the mean filter is a common filter in image processing, mainly used to smooth noise. It works by replacing each pixel with the average of the surrounding pixels to achieve the smoothing effect.

Commonly used mean kernels are shown in the figure below:


            


An image filter operation is really a template operation, and template operations must deal with the boundary problem:

What is the boundary problem?

When the filter processes boundary pixels, the convolution kernel hangs over the edge of the image and no longer matches a full neighbourhood, so the computation breaks down.


Ways to handle it:

1. Ignore the boundary pixels, i.e. drop the pixels that cannot be matched.

2. Keep the boundary pixels, i.e. copy the unmatched boundary pixels of the source image to the output image.



Code:

  /**
   * Calculates the mean of a 3x3 pixel neighbourhood (including centre pixel).
   *
   * @param input the input image 2D array
   * @param kernel the kernel 2D array
   * @param w the image width
   * @param h the image height
   * @param x the x coordinate of the centre pixel of the array
   * @param y the y coordinate of the centre pixel of the array
   * @return the mean of the 9 pixels
   */ 
  public static int meanNeighbour(int [][] input, int [][] kernel,
			  int w, int h, int x, int y) {

    int sum = 0;
    int number = 0;
    for(int j=0;j<3;++j){
      for(int i=0;i<3;++i){
	if((kernel[i][j]==1) && 
	   ((x-1+i)>=0) && ((y-1+j)>=0) && ((x-1+i)<w) && ((y-1+j)<h) ){
	  sum = sum + input[x-1+i][y-1+j];
	  ++number;
	}
      }
    }
    if(number==0) return 0;
    return (sum/number);
  }


  /**
   * Takes an image in 2D array form and smoothes it according to the kernel.
   * @param input the input image
   * @kernel the kernel 1D array
   * @param width of the input image
   * @param height of the output image
   * @param iterations to be performed
   * @return the new smoothed image 2D array
   */
  public static int [][] smooth(int [][] input, int [][] kernel,
				int width, int height, int iterations){
    int [][] temporary = new int [width][height];
    int [][] outputArrays = new int [width][height];
    temporary = (int [][]) input.clone();
    for (int its=0;its<iterations;++its){
      for(int j=0;j<height;++j){
	for(int i=0;i<width;++i){
	  outputArrays[i][j] = meanNeighbour(temporary,kernel,
					     width,height,i,j);
	}
      }
      for(int j=0;j<height;++j){
	for(int i=0;i<width;++i){
	  temporary[i][j]=outputArrays[i][j];
	}
      }
    }
    return outputArrays;
  }


Input Image:




Output Image:




Summary: the mean filter smooths the neighbourhood so that isolated bright spots no longer jar the eye, giving the image a "softened" look.
