2020-01-02 22:28:56 jackzhang11

In image processing and computer vision, we often need to preprocess the raw image. Filtering is one of the most common methods: by sliding a chosen filter over the image and convolving, each output pixel is determined from its neighbouring pixel values, which lets us emphasize certain features or suppress unwanted components. The most common operators fall into two classes, smoothing (denoising) filters and edge-detection filters, and both are covered below.

Smoothing Filters

1. Gaussian Filtering

A Gaussian filter smooths the image and is used to suppress noise.

It replaces each centre pixel by a weighted average of its neighbourhood, with weights drawn from a 2-D Gaussian distribution. Such a 2-D weight array is usually called a convolution kernel or filter.

However, the image width and height are generally not multiples of the filter size, and we also want the output to have the same dimensions as the input, so zeros are appended around the image border; how many depends on the relative sizes of the filter and the image. This is called zero padding. The kernel weights $g$ must also be normalized so that $\sum g = 1$.

The weights are computed from the 2-D Gaussian distribution: $g(x,y,\sigma)=\frac{1}{2\pi\sigma^2}\,e^{-\frac{x^2+y^2}{2\sigma^2}}$

Here the coordinates $x$ and $y$ are measured from the filter's centre point. For example, the cell one step up and one step to the right of the centre has coordinates $(1,-1)$.

With standard deviation $\sigma=1.3$, the 8-neighbourhood Gaussian filter is approximately $K=\frac{1}{16}\left[\begin{matrix} 1 & 2 & 1 \\ 2 & 4 & 2 \\ 1 & 2 & 1 \end{matrix}\right]$
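The kernel can be generated directly from the formula above; a minimal NumPy sketch (the function name is illustrative, and the integer $1/16$ kernel in the text is a rounded approximation of what this produces):

```python
import numpy as np

def gaussian_kernel(size=3, sigma=1.3):
    """Build a normalized 2-D Gaussian kernel centred on the middle cell."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    k = np.exp(-(x ** 2 + y ** 2) / (2.0 * sigma ** 2))
    # The 1/(2*pi*sigma^2) factor cancels when we normalize so that sum(g) = 1.
    return k / k.sum()

k = gaussian_kernel(3, 1.3)
```

The result sums to 1, is symmetric, and peaks at the centre, as the normalization requirement above demands.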

2. Median Filtering

The median filter is another smoothing filter. It can remove image noise to some extent, at the cost of blurring image detail. It outputs the median of the pixel values inside the filter window (assumed to be $3\times3$ here). To keep the output the same size as the input, zero padding is again used.

3. Mean Filtering

Like the median filter, the mean filter is used for image denoising. The only difference is that it outputs the average of the pixel values inside the filter window.
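The practical difference between the two smoothers shows up when the window contains an outlier; a small NumPy illustration (the pixel values are made up):

```python
import numpy as np

# A 3x3 window with one salt-noise outlier (255); values are illustrative.
window = np.array([[10, 12, 11],
                   [13, 255, 12],
                   [11, 10, 12]])

mean_out = window.mean()        # pulled far upward by the outlier
median_out = np.median(window)  # the outlier is simply ignored
```

Here the mean is about 38.4 while the median stays at 12, which is why the median filter handles salt-and-pepper noise so much better.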

Edge-Detection Filters

1. Sobel Filter

The Sobel filter extracts edges in a specific direction (vertical or horizontal) and is defined as follows:

Horizontal Sobel operator: $K=\left[\begin{matrix} 1&2&1\\ 0&0&0\\ -1&-2&-1 \end{matrix}\right]$ Vertical Sobel operator: $K=\left[\begin{matrix} 1&0&-1\\ 2&0&-2\\ 1&0&-1 \end{matrix}\right]$

The Sobel operator approximates the gradient between neighbouring pixels. When the filter slides over a flat background region, the convolution result is very small; when it crosses the boundary between background and foreground, the result is large. This is why it extracts edge information well.

2. Prewitt Filter

The Prewitt filter is another edge-detection filter, defined as follows:

Horizontal Prewitt operator: $K=\left[\begin{matrix} -1&-1&-1\\ 0&0&0\\ 1&1&1 \end{matrix}\right]$ Vertical Prewitt operator: $K=\left[\begin{matrix} -1&0&1\\ -1&0&1\\ -1&0&1 \end{matrix}\right]$

The difference from Sobel is that Sobel weights the neighbours: the pixels directly above/below (or left/right of) the centre get weight 2 because they are closer to the centre, while the diagonal neighbours, slightly farther away, get weight 1. Prewitt has no such weighting. Overall, Sobel can be seen as a refinement of Prewitt, and its results are accordingly somewhat better.
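The directionality can be checked on a synthetic step edge; a self-contained NumPy sketch (the naive loop stands in for a real convolution routine such as cv2.filter2D):

```python
import numpy as np

def conv2_valid(img, k):
    """Minimal 'valid'-mode 2-D correlation, enough to illustrate the operator."""
    kh, kw = k.shape
    out = np.zeros((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

# Vertical edge: left half dark, right half bright.
img = np.zeros((5, 6))
img[:, 3:] = 1.0

prewitt_v = np.array([[-1, 0, 1],
                      [-1, 0, 1],
                      [-1, 0, 1]], float)  # responds to vertical edges
resp = conv2_valid(img, prewitt_v)
```

The response peaks (value 3) at the window positions straddling the edge and is zero on the flat regions, which is exactly the "small on background, large at boundaries" behaviour described above.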

3. Laplacian Filter

Unlike the first-derivative Sobel and Prewitt operators, the Laplacian filter detects edges by taking the second derivative of image intensity. Since a digital image is discrete, the first derivatives along $x$ and $y$ are computed as $I_x(x,y)=\frac{I(x+1,y)-I(x,y)}{(x+1)-x}=I(x+1,y)-I(x,y)$ and $I_y(x,y)=\frac{I(x,y+1)-I(x,y)}{(y+1)-y}=I(x,y+1)-I(x,y)$. The second derivative along $x$ is then $I_{xx}(x,y)=I_x(x,y)-I_x(x-1,y)=[I(x+1,y)-I(x,y)]-[I(x,y)-I(x-1,y)]=I(x+1,y)-2I(x,y)+I(x-1,y)$, and likewise $I_{yy}(x,y)=I(x,y+1)-2I(x,y)+I(x,y-1)$. The Laplacian is therefore $\nabla^2 I(x,y)=I_{xx}(x,y)+I_{yy}(x,y)=I(x-1,y)+I(x,y-1)-4I(x,y)+I(x+1,y)+I(x,y+1)$, which expressed as a convolution kernel is $K=\left[\begin{matrix} 0&1&0\\ 1&-4&1\\ 0&1&0 \end{matrix}\right]$
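The derivation can be verified numerically: correlating the image with $K$ reproduces the finite-difference expression at every interior pixel (a NumPy sketch; since $K$ is symmetric, correlation and convolution coincide):

```python
import numpy as np

rng = np.random.default_rng(0)
I = rng.random((6, 6))

# The finite-difference form: I(x-1,y)+I(x,y-1)-4I(x,y)+I(x+1,y)+I(x,y+1).
lap_formula = (I[:-2, 1:-1] + I[2:, 1:-1] + I[1:-1, :-2] + I[1:-1, 2:]
               - 4 * I[1:-1, 1:-1])

K = np.array([[0, 1, 0],
              [1, -4, 1],
              [0, 1, 0]], float)
lap_kernel = np.zeros_like(lap_formula)
for i in range(lap_kernel.shape[0]):
    for j in range(lap_kernel.shape[1]):
        lap_kernel[i, j] = np.sum(I[i:i + 3, j:j + 3] * K)
```

The two arrays agree exactly, confirming that the kernel is just the stencil of the discrete second derivative.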

Reference: https://github.com/gzr2017/ImageProcessing100Wen

2013-12-08 18:02:49 thnh169

1. High-Pass Filters

First, take the 2-D Fourier transform of the image: $F(u,v)=\sum_{x=0}^{M-1}\sum_{y=0}^{N-1}f(x,y)\,e^{-j2\pi(ux/M+vy/N)}$

Substituting $u=0$ and $v=0$ into the transform gives $F(0,0)=\sum_{x=0}^{M-1}\sum_{y=0}^{N-1}f(x,y)$.

From this expression, $F(0,0)$ is very large. We call $F(0,0)$ the DC component; it is several orders of magnitude larger than the other components, which is exactly why the Fourier spectrum must be log-transformed before it can be viewed clearly.

A high-pass filter attenuates the DC component, so the resulting image has a very narrow dynamic range and looks grayish. If instead the DC component is kept and the remaining components are amplified, image detail is enhanced; such a filter is called a sharpening filter. This section introduces high-pass and sharpening filters.

1.1 Ideal High-Pass Filter

Here $D_0$ is the stopband radius of the filter, and $D(u,v)$ is the distance from the point $(u,v)$ to the filter centre. The amplitude response of the ideal high-pass filter is shown below.

Processing an image with this filter gives the results shown below. As with the ideal low-pass filter, the output shows very obvious ringing, and the result looks somewhat dark because the DC component of the image has been filtered out.
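For readers who prefer Python, the ideal high-pass step can be sketched with NumPy (a sketch only: unlike the MATLAB code in section 1.4 it skips zero-padding, and it uses fftshift instead of multiplying by $(-1)^{x+y}$ to centre the spectrum):

```python
import numpy as np

def ideal_highpass(img, d0):
    """Zero out all frequencies within radius d0 of the centred spectrum."""
    F = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    y, x = np.ogrid[-(h // 2):h - h // 2, -(w // 2):w - w // 2]
    H = (np.sqrt(x ** 2 + y ** 2) > d0).astype(float)  # 0 inside d0, 1 outside
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * H)))

img = np.ones((64, 64))   # constant image = pure DC component
out = ideal_highpass(img, 10)
```

A constant image is all DC, so the ideal high-pass filter wipes it out entirely, which illustrates why filtered images look dark: their mean intensity is removed.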


1.2 Butterworth High-Pass Filter

As before, the Butterworth high-pass filter's transition characteristic can be adjusted by changing the order $n$; too large an $n$ causes ringing.


1.3 Gaussian High-Pass Filter

The Gaussian filter has a very smooth transition characteristic, so no ringing occurs.


1.4 MATLAB implementation of the three filters above

close all;
clear all;

%% ---------Ideal Highpass Filters (Fre. Domain)------------
f = imread('characters_test_pattern.tif');
f = mat2gray(f,[0 255]);

[M,N] = size(f);
P = 2*M;
Q = 2*N;
fc = zeros(M,N);

for x = 1:1:M
    for y = 1:1:N
        fc(x,y) = f(x,y) * (-1)^(x+y);
    end
end

F = fft2(fc,P,Q);

H_1 = ones(P,Q);
H_2 = ones(P,Q);

for x = (-P/2):1:(P/2)-1
     for y = (-Q/2):1:(Q/2)-1
        D = (x^2 + y^2)^(0.5);
        if(D <= 60)  H_1(x+(P/2)+1,y+(Q/2)+1) = 0; end    
        if(D <= 160) H_2(x+(P/2)+1,y+(Q/2)+1) = 0; end
     end
end

G_1 = H_1 .* F;
G_2 = H_2 .* F;

g_1 = real(ifft2(G_1));
g_1 = g_1(1:1:M,1:1:N);

g_2 = real(ifft2(G_2));
g_2 = g_2(1:1:M,1:1:N);         

for x = 1:1:M
    for y = 1:1:N
        g_1(x,y) = g_1(x,y) * (-1)^(x+y);
        g_2(x,y) = g_2(x,y) * (-1)^(x+y);
    end
end
%% -----show-------
figure();
subplot(1,2,1);
imshow(f,[0 1]);
xlabel('a).Original Image');

subplot(1,2,2);
imshow(log(1 + abs(F)),[ ]);
xlabel('b).Fourier spectrum of a');

figure();
subplot(1,2,1);
imshow(H_1,[0 1]);
xlabel('c).Ideal Highpass filter(D=60)');

subplot(1,2,2);
h = mesh(1:20:P,1:20:Q,H_1(1:20:P,1:20:Q));
set(h,'EdgeColor','k');
axis([0 P 0 Q 0 1]);
xlabel('u');ylabel('v');
zlabel('|H(u,v)|');

figure();
subplot(1,2,1);
imshow(log(1 + abs(G_1)),[ ]);
xlabel('d).Result of filtering using c');

subplot(1,2,2);
imshow(g_1,[0 1]);
xlabel('e).Result image');

figure();
subplot(1,2,1);
imshow(H_2,[0 1]);
xlabel('f).Ideal Highpass filter(D=160)');

subplot(1,2,2);
h = mesh(1:20:P,1:20:Q,H_2(1:20:P,1:20:Q));
set(h,'EdgeColor','k');
axis([0 P 0 Q 0 1]);
xlabel('u');ylabel('v');
zlabel('|H(u,v)|');

figure();
subplot(1,2,1);
imshow(log(1 + abs(G_2)),[ ]);
xlabel('g).Result of filtering using e');

subplot(1,2,2);
imshow(g_2,[0 1]);
xlabel('h).Result image');
close all;
clear all;

%% ---------Butterworth Highpass Filters (Fre. Domain)------------
f = imread('characters_test_pattern.tif');
f = mat2gray(f,[0 255]);

[M,N] = size(f);
P = 2*M;
Q = 2*N;
fc = zeros(M,N);

for x = 1:1:M
    for y = 1:1:N
        fc(x,y) = f(x,y) * (-1)^(x+y);
    end
end

F = fft2(fc,P,Q);

H_1 = zeros(P,Q);
H_2 = zeros(P,Q);

for x = (-P/2):1:(P/2)-1
     for y = (-Q/2):1:(Q/2)-1
        D = (x^2 + y^2)^(0.5);
        D_0 = 100;
        H_1(x+(P/2)+1,y+(Q/2)+1) = 1/(1+(D_0/D)^2);   
        H_2(x+(P/2)+1,y+(Q/2)+1) = 1/(1+(D_0/D)^6);
     end
end


G_1 = H_1 .* F;
G_2 = H_2 .* F;

g_1 = real(ifft2(G_1));
g_1 = g_1(1:1:M,1:1:N);

g_2 = real(ifft2(G_2));
g_2 = g_2(1:1:M,1:1:N);         

for x = 1:1:M
    for y = 1:1:N
        g_1(x,y) = g_1(x,y) * (-1)^(x+y);
        g_2(x,y) = g_2(x,y) * (-1)^(x+y);
    end
end

%% -----show-------
figure();
subplot(1,2,1);
imshow(f,[0 1]);
xlabel('a).Original Image');

subplot(1,2,2);
imshow(log(1 + abs(F)),[ ]);
xlabel('b).Fourier spectrum of a');

figure();
subplot(1,2,1);
imshow(H_1,[0 1]);
xlabel('c).Butterworth Highpass (D_{0}=100,n=1)');

subplot(1,2,2);
h = mesh(1:20:P,1:20:Q,H_1(1:20:P,1:20:Q));
set(h,'EdgeColor','k');
axis([0 P 0 Q 0 1]);
xlabel('u');ylabel('v');
zlabel('|H(u,v)|');

figure();
subplot(1,2,1);
imshow(log(1 + abs(G_1)),[ ]);
xlabel('d).Result of filtering using c');

subplot(1,2,2);
imshow(g_1,[0 1]);
xlabel('e).Result image');

figure();
subplot(1,2,1);
imshow(H_2,[0 1]);
xlabel('f).Butterworth Highpass (D_{0}=100,n=3)');

subplot(1,2,2);
h = mesh(1:20:P,1:20:Q,H_2(1:20:P,1:20:Q));
set(h,'EdgeColor','k');
axis([0 P 0 Q 0 1]);
xlabel('u');ylabel('v');
zlabel('|H(u,v)|');

figure();
subplot(1,2,1);
imshow(log(1 + abs(G_2)),[ ]);
xlabel('g).Result of filtering using e');

subplot(1,2,2);
imshow(g_2,[0 1]);
xlabel('h).Result image');
close all;
clear all;

%% ---------Gaussian Highpass Filters (Fre. Domain)------------
f = imread('characters_test_pattern.tif');
f = mat2gray(f,[0 255]);

[M,N] = size(f);
P = 2*M;
Q = 2*N;
fc = zeros(M,N);

for x = 1:1:M
    for y = 1:1:N
        fc(x,y) = f(x,y) * (-1)^(x+y);
    end
end

F = fft2(fc,P,Q);

H_1 = zeros(P,Q);
H_2 = zeros(P,Q);

for x = (-P/2):1:(P/2)-1
     for y = (-Q/2):1:(Q/2)-1
        D = (x^2 + y^2)^(0.5);
        D_0 = 60;
        H_1(x+(P/2)+1,y+(Q/2)+1) = 1 - exp(-(D*D)/(2*D_0*D_0));   
        D_0 = 160;
        H_2(x+(P/2)+1,y+(Q/2)+1) = 1 - exp(-(D*D)/(2*D_0*D_0));
     end
end

G_1 = H_1 .* F;
G_2 = H_2 .* F;

g_1 = real(ifft2(G_1));
g_1 = g_1(1:1:M,1:1:N);

g_2 = real(ifft2(G_2));
g_2 = g_2(1:1:M,1:1:N);         

for x = 1:1:M
    for y = 1:1:N
        g_1(x,y) = g_1(x,y) * (-1)^(x+y);
        g_2(x,y) = g_2(x,y) * (-1)^(x+y);
    end
end


%% -----show-------
figure();
subplot(1,2,1);
imshow(f,[0 1]);
xlabel('a).Original Image');

subplot(1,2,2);
imshow(log(1 + abs(F)),[ ]);
xlabel('b).Fourier spectrum of a');

figure();
subplot(1,2,1);
imshow(H_1,[0 1]);
xlabel('c)Gaussian Highpass (D_{0}=60)');

subplot(1,2,2);
h = mesh(1:20:P,1:20:Q,H_1(1:20:P,1:20:Q));
set(h,'EdgeColor','k');
axis([0 P 0 Q 0 1]);
xlabel('u');ylabel('v');
zlabel('|H(u,v)|');

figure();
subplot(1,2,1);
imshow(log(1 + abs(G_1)),[ ]);
xlabel('d).Result of filtering using c');

subplot(1,2,2);
imshow(g_1,[0 1]);
xlabel('e).Result image');

figure();
subplot(1,2,1);
imshow(H_2,[0 1]);
xlabel('f).Gaussian Highpass (D_{0}=160)');

subplot(1,2,2);
h = mesh(1:20:P,1:20:Q,H_2(1:20:P,1:20:Q));
set(h,'EdgeColor','k');
axis([0 P 0 Q 0 1]);
xlabel('u');ylabel('v');
zlabel('|H(u,v)|');

figure();
subplot(1,2,1);
imshow(log(1 + abs(G_2)),[ ]);
xlabel('g).Result of filtering using e');

subplot(1,2,2);
imshow(g_2,[0 1]);
xlabel('h).Result image');

1.5 Sharpening Filter

As noted above, a sharpening filter keeps the DC component of the Fourier spectrum and amplifies the remaining components. With it, image detail can be enhanced and made to stand out. The sharpening (high-frequency-emphasis) filter can be written as $G(u,v)=[a+b\,H_{HP}(u,v)]\,F(u,v)$ (the code below uses $a=1$, $b=1.1$).

The intent of this expression is clear: keep the original spectrum, then add the high-pass-filtered result on top; the reconstructed image is the sharpened image. Two constants control the sharpening: $a$ adjusts how strongly the DC component is retained or attenuated, and $b$ adjusts how strongly the high-frequency components are amplified.

The code follows.
close all;
clear all;
clc;

%% ---------The High-Fre-Emphasis Filters (Fre. Domain)------------
f = imread('blurry_moon.tif');
f = mat2gray(f,[0 255]);

[M,N] = size(f); 
P = 2*M;
Q = 2*N;
fc = zeros(M,N);

for x = 1:1:M
    for y = 1:1:N
        fc(x,y) = f(x,y) * (-1)^(x+y);
    end
end

F = fft2(fc,P,Q);

H_HP = zeros(P,Q);

for x = (-P/2):1:(P/2)-1
     for y = (-Q/2):1:(Q/2)-1
        D = (x^2 + y^2)^(0.5);
        D_0 = 80;
        H_HP(x+(P/2)+1,y+(Q/2)+1) = 1 - exp(-(D*D)/(2*D_0*D_0));   
     end
end

G_HP = H_HP .* F;

G_HFE = (1 + 1.1 * H_HP) .* F;

g_1 = real(ifft2(G_HP));
g_1 = g_1(1:1:M,1:1:N);

g_2 = real(ifft2(G_HFE));
g_2 = g_2(1:1:M,1:1:N);

for x = 1:1:M
    for y = 1:1:N
        g_1(x,y) = g_1(x,y) * (-1)^(x+y);
        g_2(x,y) = g_2(x,y) * (-1)^(x+y);
    end
end

g = histeq(g_2);

%g_1 = mat2gray(g_1);
%% -----show-------
figure();
subplot(1,2,1);
imshow(f,[0 1]);
xlabel('a).Original Image');

subplot(1,2,2);
imshow(log(1 + abs(F)),[ ]);
xlabel('b).Fourier spectrum of a');

figure();
subplot(1,2,1);
imshow(g_1,[0 1]);
xlabel('c).Result image of High-pass Filter');

subplot(1,2,2);
imshow(log(1 + abs(G_HP)),[ ]);
xlabel('d).Result of filtering using e');

figure();
subplot(1,2,1);
imshow(g_2,[0 1]);
xlabel('e).Result image of High-Fre-Emphasis Filter');

subplot(1,2,2);
imshow(log(1 + abs(G_HFE)),[ ]);
xlabel('f).Result of filtering using e');

2. Band-Reject Filters

Band-reject filters likewise come in the same three types: ideal, Butterworth, and Gaussian; their transfer functions mirror the corresponding low-/high-pass forms.

The corresponding band-pass filters can be obtained from the same relations.

Band-reject filters are useful for removing periodic noise. To demonstrate this, we first add severe periodic noise to an image.

A few obvious bright spots are added to the Fourier spectrum of the original image; after the inverse DFT, the image is clearly contaminated by periodic noise. At this point a band-reject filter gives very good denoising. To avoid ringing, the Butterworth band-reject filter shown below is used (the code raises the term to the 6th power, i.e. order $n=3$). Removing this kind of noise with spatial-domain operations alone is essentially impossible, which is one of the advantages of working in the frequency domain.

  
close all;
clear all;
clc;
%% ---------------------Add Noise-------------------------
f = imread('left.tif');
f = mat2gray(f,[0 255]);

[M,N] = size(f);
P = 2*M;
Q = 2*N;
fc = zeros(M,N);

for x = 1:1:M
    for y = 1:1:N
        fc(x,y) = f(x,y) * (-1)^(x+y);
    end
end

F = fft2(fc,P,Q);

H_NP = ones(P,Q);

for x = (-P/2):1:(P/2)-1
     for y = (-Q/2):1:(Q/2)-1
        D = 2;
        
        v_k = -200; u_k = 150;
        D_k = ((x+u_k)^2 + (y+v_k)^2)^(0.5);
        H_NP(x+(P/2)+1,y+(Q/2)+1) = H_NP(x+(P/2)+1,y+(Q/2)+1) * 1/(1+(D/D_k)^4);
        D_k = ((x-u_k)^2 + (y-v_k)^2)^(0.5);
        H_NP(x+(P/2)+1,y+(Q/2)+1) = H_NP(x+(P/2)+1,y+(Q/2)+1) * 1/(1+(D/D_k)^4);
        
        v_k = 200; u_k = 150;
        D_k = ((x+u_k)^2 + (y+v_k)^2)^(0.5);
        H_NP(x+(P/2)+1,y+(Q/2)+1) = H_NP(x+(P/2)+1,y+(Q/2)+1) * 1/(1+(D/D_k)^4);
        D_k = ((x-u_k)^2 + (y-v_k)^2)^(0.5);
        H_NP(x+(P/2)+1,y+(Q/2)+1) = H_NP(x+(P/2)+1,y+(Q/2)+1) * 1/(1+(D/D_k)^4);
        
        v_k = 0; u_k = 250;
        D_k = ((x+u_k)^2 + (y+v_k)^2)^(0.5);
        H_NP(x+(P/2)+1,y+(Q/2)+1) = H_NP(x+(P/2)+1,y+(Q/2)+1) * 1/(1+(D/D_k)^4);
        D_k = ((x-u_k)^2 + (y-v_k)^2)^(0.5);
        H_NP(x+(P/2)+1,y+(Q/2)+1) = H_NP(x+(P/2)+1,y+(Q/2)+1) * 1/(1+(D/D_k)^4);
        
        v_k = 250; u_k = 0;
        D_k = ((x+u_k)^2 + (y+v_k)^2)^(0.5);
        H_NP(x+(P/2)+1,y+(Q/2)+1) = H_NP(x+(P/2)+1,y+(Q/2)+1) * 1/(1+(D/D_k)^4);
        D_k = ((x-u_k)^2 + (y-v_k)^2)^(0.5);
        H_NP(x+(P/2)+1,y+(Q/2)+1) = H_NP(x+(P/2)+1,y+(Q/2)+1) * 1/(1+(D/D_k)^4);
        
        
        H_NP(x+(P/2)+1,y+(Q/2)+1) = 1 + 700*(1 - H_NP(x+(P/2)+1,y+(Q/2)+1));
     end
end

G_Noise = F .* H_NP;

g_noise = real(ifft2(G_Noise));
g_noise = g_noise(1:1:M,1:1:N);     

for x = 1:1:M
    for y = 1:1:N
        g_noise(x,y) = g_noise(x,y) * (-1)^(x+y);
    end
end


%% ---------Butterworth Bandreject Filters (Fre. Domain)------------
H_1 = ones(P,Q);

for x = (-P/2):1:(P/2)-1
     for y = (-Q/2):1:(Q/2)-1
        D = (x^2 + y^2)^(0.5);
        D_0 = 250;
        W = 30;
        H_1(x+(P/2)+1,y+(Q/2)+1) = 1/(1+((D*W)/((D*D) - (D_0*D_0)))^6);   
     end
end

G_1 = H_1 .* G_Noise;

g_1 = real(ifft2(G_1));
g_1 = g_1(1:1:M,1:1:N);     

for x = 1:1:M
    for y = 1:1:N
        g_1(x,y) = g_1(x,y) * (-1)^(x+y);
    end
end

%% -----show-------
close all;

figure();
subplot(1,2,1);
imshow(f,[0 1]);
xlabel('a).Original Image');

subplot(1,2,2);
imshow(log(1 + abs(F)),[ ]);
xlabel('b).Fourier spectrum of a');


figure();
subplot(1,2,2);
imshow(log(1 + abs(G_Noise)),[ ]);
xlabel('c).Fourier spectrum of b');

subplot(1,2,1);
imshow(g_noise,[0 1]);
xlabel('b).Result of add noise');


figure();
subplot(1,2,1);
imshow(H_1,[0 1]);
xlabel('d).Butterworth Bandreject filter(D_{0}=250 W=30 n=3)');

subplot(1,2,2);
h = mesh(1:20:Q,1:20:P,H_1(1:20:P,1:20:Q));
set(h,'EdgeColor','k');
axis([0 Q 0 P 0 1]);
xlabel('u');ylabel('v');
zlabel('|H(u,v)|');


figure();
subplot(1,2,2);
imshow(log(1 + abs(G_1)),[ ]);
xlabel('e).Fourier spectrum of f');

subplot(1,2,1);
imshow(g_1,[0 1]);
xlabel('f).Result of denoise');

3. Notch Filter

The notch filter is also used to remove periodic noise. A band-reject filter can remove periodic noise too, but it also attenuates components other than the noise. A notch filter mainly attenuates specific points and leaves the remaining components untouched; the figure below makes this easier to understand.

In the spatial domain the image shows periodic noise, and its Fourier spectrum shows where that noise sits (circled in red in the original figure; the circles are not part of the data). Using a band-reject filter here would be very cumbersome and would damage the image, while a notch filter attenuates the noise locations directly and denoises very well.
Its transfer function is obtained by shifting the centre of a high-pass filter to each noise location.

Because of the periodicity (conjugate symmetry) of the Fourier transform, a noise spike cannot appear alone in the spectrum; it always comes as a pair symmetric about the origin, and the noise points to be removed are located accordingly.
For the image above, we design the following filter for denoising.

The result is shown below: the noise has been removed and the image quality is greatly improved.
close all;
clear all;
clc;

%% ---------Butterworth Notch filter (Fre. Domain)------------
f = imread('car_75DPI_Moire.tif');
f = mat2gray(f,[0 255]);

[M,N] = size(f);
P = 2*M;
Q = 2*N;
fc = zeros(M,N);

for x = 1:1:M
    for y = 1:1:N
        fc(x,y) = f(x,y) * (-1)^(x+y);
    end
end

F = fft2(fc,P,Q);

H_NF = ones(P,Q);

for x = (-P/2):1:(P/2)-1
     for y = (-Q/2):1:(Q/2)-1
        D = 30;
        
        v_k = 59; u_k = 77;
        D_k = ((x+u_k)^2 + (y+v_k)^2)^(0.5);
        H_NF(x+(P/2)+1,y+(Q/2)+1) = H_NF(x+(P/2)+1,y+(Q/2)+1) * 1/(1+(D/D_k)^4);
        D_k = ((x-u_k)^2 + (y-v_k)^2)^(0.5);
        H_NF(x+(P/2)+1,y+(Q/2)+1) = H_NF(x+(P/2)+1,y+(Q/2)+1) * 1/(1+(D/D_k)^4);
        
        v_k = 59; u_k = 159;
        D_k = ((x+u_k)^2 + (y+v_k)^2)^(0.5);
        H_NF(x+(P/2)+1,y+(Q/2)+1) = H_NF(x+(P/2)+1,y+(Q/2)+1) * 1/(1+(D/D_k)^4);
        D_k = ((x-u_k)^2 + (y-v_k)^2)^(0.5);
        H_NF(x+(P/2)+1,y+(Q/2)+1) = H_NF(x+(P/2)+1,y+(Q/2)+1) * 1/(1+(D/D_k)^4);
        
        v_k = -54; u_k = 84;
        D_k = ((x+u_k)^2 + (y+v_k)^2)^(0.5);
        H_NF(x+(P/2)+1,y+(Q/2)+1) = H_NF(x+(P/2)+1,y+(Q/2)+1) * 1/(1+(D/D_k)^4);
        D_k = ((x-u_k)^2 + (y-v_k)^2)^(0.5);
        H_NF(x+(P/2)+1,y+(Q/2)+1) = H_NF(x+(P/2)+1,y+(Q/2)+1) * 1/(1+(D/D_k)^4);
        
        v_k = -54; u_k = 167;
        D_k = ((x+u_k)^2 + (y+v_k)^2)^(0.5);
        H_NF(x+(P/2)+1,y+(Q/2)+1) = H_NF(x+(P/2)+1,y+(Q/2)+1) * 1/(1+(D/D_k)^4);
        D_k = ((x-u_k)^2 + (y-v_k)^2)^(0.5);
        H_NF(x+(P/2)+1,y+(Q/2)+1) = H_NF(x+(P/2)+1,y+(Q/2)+1) * 1/(1+(D/D_k)^4);
     end
end

G_1 = H_NF .* F;

g_1 = real(ifft2(G_1));
g_1 = g_1(1:1:M,1:1:N);     

for x = 1:1:M
    for y = 1:1:N
        g_1(x,y) = g_1(x,y) * (-1)^(x+y);
    end
end

%% -----show-------
close all;

figure();
subplot(1,2,1);
imshow(f,[0 1]);
xlabel('a).Original Image');

subplot(1,2,2);
imshow(log(1 + abs(F)),[ ]);
xlabel('b).Fourier spectrum of a');

figure();
subplot(1,2,1);
imshow(H_NF,[0 1]);
xlabel('c).Butterworth Notch filter(D=30 n=2)');

subplot(1,2,2);
h = mesh(1:10:Q,1:10:P,H_NF(1:10:P,1:10:Q));
set(h,'EdgeColor','k');
axis([0 Q 0 P 0 1]);
xlabel('u');ylabel('v');
zlabel('|H(u,v)|');

figure();
subplot(1,2,2);
imshow(log(1 + abs(G_1)),[ ]);
xlabel('e).Fourier spectrum of d');

subplot(1,2,1);
imshow(g_1,[0 1]);
xlabel('d).Result image');
Originally published at: http://blog.csdn.net/thnh169/

2019-04-26 10:50:55 u013307195

Filters fall into two main classes: linear and nonlinear.

Linear filters:

A linear filter computes a weighted sum of the pixels inside a sliding window. If the same pattern of weights is applied in every window, the filter is spatially invariant and can be implemented with a convolution template; if different parts of the image use different weight patterns, the filter is spatially varying. Linear filters are effective against Gaussian noise. Common linear filters include the mean filter and the Gaussian smoothing filter.

(1) Mean filter:
The simplest mean filter is a local averaging operation: each pixel is replaced by the average of all values in its local neighbourhood.

(2) Gaussian smoothing filter:
A linear filter whose weights are chosen according to the shape of the Gaussian function. It is very effective at removing noise that follows a normal distribution.

Nonlinear filters:

(1) Median filter:
The main problem with mean and Gaussian filtering is that they may blur sharp discontinuities in the image. The basic idea of the median filter is to replace a pixel's gray value with the median of the gray values in its neighbourhood; this removes impulse and salt-and-pepper noise while preserving edge detail. Median filtering does not depend on neighbourhood values that differ greatly from the typical value and performs no weighting, so under certain conditions it avoids the detail blurring caused by linear filters while remaining very effective against impulse interference.
(2) Edge-preserving filter:
Mean filtering may blur image edges while smoothing, and median filtering may remove thin line detail along with impulse noise. The edge-preserving filter was developed with the strengths and weaknesses of both in mind.
Feature: it removes noise impulses without making the image edges excessively blurred.
Procedure: compute the gray-level uniformity $V$ of the upper-left, lower-left, upper-right, and lower-right sub-neighbourhoods of pixel $[i,j]$; then take the mean of the region with the smallest uniformity as the pixel's new gray value. The more uniform the distribution, the smaller $V$; a natural measure is $V=\sum\left(f(x,y)-\bar{f}\right)^2$, the sum of squared deviations from the region's mean gray level.
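The procedure above can be sketched for a single pixel (a Kuwahara-style sketch; the 5×5 window, 3×3 corner sub-regions, and variance as the uniformity measure $V$ are assumptions consistent with the description):

```python
import numpy as np

def edge_preserving_pixel(patch):
    """For a 5x5 patch, return the mean of the 3x3 corner sub-region
    with the smallest gray-level variance (the uniformity measure V)."""
    subs = [patch[:3, :3], patch[:3, 2:], patch[2:, :3], patch[2:, 2:]]
    variances = [s.var() for s in subs]  # smaller variance = more uniform
    return subs[int(np.argmin(variances))].mean()

# Patch straddling an edge: left columns 10, right columns 200.
patch = np.full((5, 5), 10.0)
patch[:, 3:] = 200.0
new_value = edge_preserving_pixel(patch)
```

The selected sub-region lies entirely on one side of the edge, so the output (10.0) averages a uniform region instead of blending the two sides, which is how the filter keeps edges sharp.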

2018-07-27 23:56:07 fanzonghao

In general, the energy of an image is concentrated in its low-frequency band, while noise mainly occupies the high frequencies; unfortunately, image detail is also concentrated in the high frequencies, so the key is to remove high-frequency interference while preserving detail. To suppress noise, the image must be smoothed, for example by low-pass filtering. Image smoothing methods divide into spatial-domain and frequency-domain approaches. In the spatial domain, the common methods are mean filtering and median filtering. Mean filtering slides an odd-sized window over the image and replaces the gray value at the window centre with the average of the gray values inside the window; if each pixel in the window is assigned a weight (coefficient), this becomes weighted mean filtering. Median filtering replaces the centre pixel's gray value with the median of the window.

1. Smoothing mean filter: odd kernel size, coefficients summing to 1. Drawback: rather than removing noise, it mostly blurs the image. Code:

import cv2
import numpy as np
import matplotlib.pyplot as plt

"""
Mean filtering
"""
def average_filter():
    img=cv2.imread('./data/opencv_logo.png')
    kernel=np.ones(shape=(5,5),dtype=np.float32)/25
    dst=cv2.filter2D(src=img,ddepth=-1,kernel=kernel)
    plt.subplot(121)
    plt.imshow(img)
    plt.title('original')
    plt.axis('off')
    plt.subplot(122)
    plt.imshow(dst)
    plt.title('Average')
    plt.axis('off')
    plt.show()

Result:

2. Gaussian smoothing: weights the window towards the centre, mimicking the eye's focus on the central region; effective at removing Gaussian noise.

"""
Gaussian filtering
"""
def image_gauss():
    img = cv2.imread('./data/img.png')

    gauss_img = cv2.GaussianBlur(img, (7, 7),0)
    plt.subplot(121)
    plt.imshow(img)
    plt.title('original')
    plt.axis('off')
    plt.subplot(122)
    plt.imshow(gauss_img)
    plt.title('gauss_img')
    plt.axis('off')
    plt.show()

Result:

3. Median filtering: the pixel values within the kernel window are sorted, and the middle value is taken as the output; effective at removing salt-and-pepper noise.

"""
Median filtering
"""
def image_median():
    img = cv2.imread('./data/img1.png')

    median_img = cv2.medianBlur(img,5)
    plt.subplot(121)
    plt.imshow(img)
    plt.title('original')
    plt.axis('off')
    plt.subplot(122)
    plt.imshow(median_img)
    plt.title('medians_img')
    plt.axis('off')
    plt.show()

Result:

4. The Sobel operator

def Sobel(src, ddepth, dx, dy, dst=None, ksize=None, scale=None, delta=None, borderType=None)

The Sobel operator is still a filter, but a directional one.

The first four parameters are required:

  • the image to be processed;
  • the image depth; -1 means the same depth as the source image. The destination depth must be greater than or equal to the source depth;
  • dx and dy are the derivative orders in each direction; 0 means no derivative is taken in that direction, and typical values are 0, 1, and 2.
img=cv2.imread('img.jpg')
print(img.shape)
gray=cv2.cvtColor(img,cv2.COLOR_RGB2GRAY)

#[[-1,0,1],
# [-2,0,2],
# [-1,0,1]]
solber_x=cv2.Sobel(gray,cv2.CV_64F,1,0,ksize=3)
solber_x=cv2.convertScaleAbs(solber_x)
cv2.imshow('solber_x',solber_x)
cv2.waitKey(0)

#[[-1,-2,-1],
# [0,0,0],
# [1,2,1]]
solber_y=cv2.Sobel(gray,cv2.CV_64F,0,1,ksize=3)
solber_y=cv2.convertScaleAbs(solber_y)
cv2.imshow('solber_y',solber_y)
cv2.waitKey(0)
solber_xy=cv2.addWeighted(solber_x,1,solber_y,1,0)
cv2.imshow('solber_xy',solber_xy)
cv2.waitKey(0)

5. The Fourier transform is used to analyse the frequency response of filters. Edges and noise in an image can be regarded as high-frequency components, because they change sharply; regions without much variation are low-frequency components.

https://docs.opencv.org/master/de/dbc/tutorial_py_fourier_transform.html

"""
Fourier transform
"""
def FFT():
    img = cv2.imread('./data/img3.png', 0)
    f = np.fft.fft2(img)
    fshift = np.fft.fftshift(f)
    magnitude_spectrum = 20 * np.log(np.abs(fshift))
    plt.subplot(121), plt.imshow(img, cmap='gray')
    plt.title('Input Image'), plt.xticks([]), plt.yticks([])
    plt.subplot(122), plt.imshow(magnitude_spectrum, cmap='gray')
    plt.title('Magnitude Spectrum'), plt.xticks([]), plt.yticks([])
    plt.show()

The centre is brighter, indicating that low-frequency components dominate.

Remove the low-frequency components by zeroing a 60×60 window at the centre:

def FFT():
    img = cv2.imread('./data/img3.png', 0)
    f = np.fft.fft2(img)
    fshift = np.fft.fftshift(f)
    # magnitude_spectrum = 20 * np.log(np.abs(fshift))
    # plt.subplot(121), plt.imshow(img, cmap='gray')
    # plt.title('Input Image'), plt.xticks([]), plt.yticks([])
    # plt.subplot(122), plt.imshow(magnitude_spectrum, cmap='gray')
    # plt.title('Magnitude Spectrum'), plt.xticks([]), plt.yticks([])
    # plt.show()

    rows, cols = img.shape
    crow, ccol = int(rows / 2), int(cols / 2)
    fshift[crow - 30:crow + 30, ccol - 30:ccol + 30] = 0
    f_ishift = np.fft.ifftshift(fshift)
    img_back = np.fft.ifft2(f_ishift)
    img_back = np.abs(img_back)
    plt.subplot(131), plt.imshow(img, cmap='gray')
    plt.title('Input Image'), plt.xticks([]), plt.yticks([])
    plt.subplot(132), plt.imshow(img_back, cmap='gray')
    plt.title('Image after HPF'), plt.xticks([]), plt.yticks([])
    plt.subplot(133), plt.imshow(img_back)
    plt.title('Result in JET'), plt.xticks([]), plt.yticks([])
    plt.show()

Only the person's edge information is preserved, confirming that the bright central region was the low-frequency content.

6. Why the Laplacian is a high-pass filter

def laplace_high_pass():
    # simple averaging filter without scaling parameter
    mean_filter = np.ones((3,3))
    # creating a gaussian filter
    x = cv2.getGaussianKernel(5,10)
    gaussian = x*x.T
    # different edge detecting filters
    # scharr in x-direction
    scharr = np.array([[-3, 0, 3],
                       [-10,0,10],
                       [-3, 0, 3]])
    # sobel in x direction
    sobel_x= np.array([[-1, 0, 1],
                       [-2, 0, 2],
                       [-1, 0, 1]])
    # sobel in y direction
    sobel_y= np.array([[-1,-2,-1],
                       [0, 0, 0],
                       [1, 2, 1]])
    # laplacian
    laplacian=np.array([[0, 1, 0],
                        [1,-4, 1],
                        [0, 1, 0]])
    filters = [mean_filter, gaussian, laplacian, sobel_x, sobel_y, scharr]
    filter_name = ['mean_filter', 'gaussian','laplacian', 'sobel_x', \
                    'sobel_y', 'scharr_x']
    fft_filters = [np.fft.fft2(x) for x in filters]
    fft_shift = [np.fft.fftshift(y) for y in fft_filters]
    mag_spectrum = [np.log(np.abs(z)+1) for z in fft_shift]
    for i in range(6):
        plt.subplot(2,3,i+1),plt.imshow(mag_spectrum[i],cmap = 'gray')
        plt.title(filter_name[i]), plt.xticks([]), plt.yticks([])
    plt.show()

Result:

A bright region in the middle of the spectrum indicates a low-pass filter; a dark region in the middle indicates a high-pass filter.

7. Image sharpening

Edge information matters greatly both in image analysis and to human vision; object edges appear as local discontinuities in the image. The filtering introduced above is useful for removing noise, but it tends to blur boundaries and contours. To reduce this unwanted effect, image sharpening is used to make edges, contours, and details crisper. An image smoothed by averaging or integration becomes blurred precisely because of that averaging, so applying the inverse operation (differentiation) makes the image crisp again. Viewed in the frequency domain, blurring means the high-frequency components have been attenuated, so a high-pass filter can be used to restore sharpness.
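As a concrete instance of "differentiate to make the image crisp again", subtracting the Laplacian response from the image boosts edge contrast; a minimal NumPy sketch on a step edge (interior pixels only, borders left unchanged):

```python
import numpy as np

def sharpen(img):
    """Subtract the discrete Laplacian (kernel [[0,1,0],[1,-4,1],[0,1,0]])
    from the image; a common sharpening sketch."""
    out = img.astype(float).copy()
    lap = (img[:-2, 1:-1] + img[2:, 1:-1] + img[1:-1, :-2] + img[1:-1, 2:]
           - 4.0 * img[1:-1, 1:-1])
    out[1:-1, 1:-1] -= lap  # img - laplacian boosts edges
    return out

# Step edge: sharpening overshoots on both sides, increasing edge contrast.
img = np.zeros((5, 6))
img[:, 3:] = 100.0
out = sharpen(img)
```

On the 0-to-100 step, the output dips to -100 on the dark side and overshoots to 200 on the bright side, which is the overshoot that makes edges look crisper.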

8. Example: extracting a barcode

1. Use gradient operations to detect the barcode in the image;

2. apply mean filtering to the gradient image to smooth out high-frequency noise;

3. binarize;

4. build a rectangular kernel with cv2.getStructuringElement and apply a closing operation; the kernel is wider than it is tall, which lets us close the gaps between the barcode's vertical bars;

5. erode and dilate to remove most of the isolated blobs;

6. find the largest contour and extract it.

import cv2
import matplotlib.pyplot as plt
import numpy as np
import imutils
path='./barcode.png'
image = cv2.imread(path)
image_h, image_w,_=image.shape
print('======opencv read data type========')
print(image.dtype)
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# # Compute the Scharr-style gradient magnitude in the x and y directions
ddepth = cv2.CV_32F
gradX = cv2.Sobel(gray, ddepth=ddepth , dx=1, dy=0, ksize=-1)
print('gradX.dtype:',gradX.dtype)

# # #debug
# gradX = cv2.convertScaleAbs(gradX)
# print(gradX.dtype)
# cv2.imshow('gradX',gradX)
# cv2.waitKey(0)

gradY = cv2.Sobel(gray, ddepth=ddepth , dx=0, dy=1, ksize=-1)
# # #debug
# gradY = cv2.convertScaleAbs(gradY)
# print(gradY.dtype)
# cv2.imshow('gradY',gradY)
# cv2.waitKey(0)

# subtract the y-direction gradient from the x-direction gradient
gradient = cv2.subtract(gradX,gradY)
# cv2.imshow('gradient1',gradient)
# cv2.waitKey(0)

# convert back to uint8
gradient = cv2.convertScaleAbs(gradient)
# print(gradient.shape)
# print(gradient.dtype)
# cv2.imshow('gradient2',gradient)
# cv2.waitKey(0)

# blur and threshold the image
blurred = cv2.blur(gradient, (9, 9))
thresh= cv2.threshold(blurred, 225, 255, cv2.THRESH_BINARY)[1]

# cv2.imshow('thresh:',thresh)
# cv2.waitKey(0)
# construct a closing kernel and apply it to the thresholded image
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (21, 7))
closed = cv2.morphologyEx(thresh, cv2.MORPH_CLOSE, kernel)
# cv2.imshow('closed:',closed)
# cv2.waitKey(0)
# perform a series of erosions and dilations
closed = cv2.erode(closed, None, iterations = 4)
closed = cv2.dilate(closed, None, iterations = 4)
# cv2.imshow('close:',closed)
# cv2.waitKey(0)

# find the contours in the thresholded image, then sort the contours
# by their area, keeping only the largest one
cnts = cv2.findContours(closed.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = imutils.grab_contours(cnts)
# cnts = cnts[0]
c = sorted(cnts, key=cv2.contourArea, reverse=True)
c = np.squeeze(c[0])
# plt.plot(c[:, 0], c[:, 1])
# plt.show()
mask = np.zeros((image_h, image_w, 3))
dummy_mask = cv2.drawContours(mask, [c], 0, (255, 255, 255), thickness=cv2.FILLED)
cv2.imshow('dummy_mask',dummy_mask)
cv2.waitKey(0)

image_bar=(image*(np.array(dummy_mask/255).astype(np.uint8)))
cv2.imshow('image_bar',image_bar)
cv2.waitKey(0)

      

The following code instead extracts the contour's minimum-area bounding rectangle and draws it:

rect = cv2.minAreaRect(c)  # returns the centre (x, y), size (w, h) and angle
box = cv2.boxPoints(rect)  # 4 corner points of the min-area rectangle (cv2.cv.BoxPoints in OpenCV 2.x)
box = box.astype(int)      # np.int0 is removed in recent NumPy
print(box)
cv2.drawContours(image, [box], 0, (0, 255, 0), 3)

cv2.imshow('image',image)
cv2.waitKey(0)

9. Skew correction

#from imutils.perspective import four_point_transform
#import imutils
import cv2
import numpy as np
from matplotlib import pyplot as plt
import math


def Get_Outline(input_dir):
    image = cv2.imread(input_dir)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    edged = cv2.Canny(blurred, 75, 200)
    return image, gray, edged


def Get_cnt(edged):
    cnts = cv2.findContours(edged.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    cnts = cnts[0]  # if imutils.is_cv2() else cnts[1]

    docCnt = None

    if len(cnts) > 0:
        cnts = sorted(cnts, key=cv2.contourArea, reverse=True)
        for c in cnts:
            peri = cv2.arcLength(c, True)  # contour perimeter
            approx = cv2.approxPolyDP(c, 0.02 * peri, True)  # polygonal approximation of the contour
            if len(approx) == 4:  # the approximate contour has four vertices
                docCnt = approx
                break

    return docCnt


def calculate_distance(point1, point2):
    d_x = point1[0] - point2[0]
    d_y = point1[1] - point2[1]
    distance = math.sqrt(d_x ** 2 + d_y ** 2)
    return distance


if __name__ == "__main__":
    input_dir = "gongjiaoka.png"
    image, gray, edged = Get_Outline(input_dir)
    docCnt = Get_cnt(edged)
    # print(docCnt)

    print(docCnt.reshape(4, 2))
    # result_img = four_point_transform(image, docCnt.reshape(4,2)) # four-point perspective transform of the original image
    # Change the transform target: the bus card's aspect ratio is 16:9
    pts1 = np.float32(docCnt.reshape(4, 2))
    # Use different target sizes depending on which side of the card is longer
    p = docCnt.reshape(4, 2)
    # plt.plot(p[:,0],p[:,1])
    # plt.show()

    # determine which side is longer
    if calculate_distance(p[0], p[1]) < calculate_distance(p[0], p[3]):
        pts2 = np.float32([[0, 0], [0, 180], [320, 180], [320, 0]])
        M = cv2.getPerspectiveTransform(pts1, pts2)
        # compute the perspective transform matrix
        edged_rotate = cv2.warpPerspective(edged, M, (320, 180))
        image_rotate = cv2.warpPerspective(image, M, (320, 180))


    else:
        pts2 = np.float32([[0, 0], [0, 320], [180, 320], [180, 0]])
        # compute the perspective transform matrix
        M = cv2.getPerspectiveTransform(pts1, pts2)
        edged_rotate = cv2.warpPerspective(edged, M, (180, 320))
        image_rotate = cv2.warpPerspective(image, M, (180, 320))

    cv2.imwrite('image_rotate.png',image_rotate)
    # print(result_img.shape)
    # -------draw the corner points----------
    for point in docCnt.reshape(4, 2):
        cv2.circle(image, tuple(point), 3, (0, 0, 255), 2)
    # # --------------
    cv2.imshow("original", image)
    # cv2.imshow("gray", gray)
    cv2.imshow("edged", edged)
    cv2.imshow("edged_rotate", edged_rotate)
    cv2.imshow("result_img", image_rotate)
    cv2.waitKey(0)
    cv2.destroyAllWindows()

10. Measuring object size

from scipy.spatial import distance as dist
from imutils import perspective
from imutils import contours
import numpy as np
import argparse
import imutils
import cv2

def midpoint(ptA, ptB):
	return ((ptA[0] + ptB[0]) * 0.5, (ptA[1] + ptB[1]) * 0.5)

path='./img/example_02.png'
# reference coin width: 0.955 inch
WIDTH=0.955
# load the image, convert it to grayscale, and blur it slightly
image = cv2.imread(path)
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
gray = cv2.GaussianBlur(gray, (7, 7), 0)

# cv2.imwrite('gray.jpg',gray)

edged = cv2.Canny(gray, 50, 100)
edged = cv2.dilate(edged, None, iterations=1)
edged = cv2.erode(edged, None, iterations=1)

# find contours in the edge map
cnts = cv2.findContours(edged.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if imutils.is_cv2() else cnts[1]
cnts = sorted(cnts, key=cv2.contourArea, reverse=True)
# print(len(cnts))
# print(cnts[0].shape)
pixelsPerMetric = None
orig = image.copy()

for c in cnts:
	if cv2.contourArea(c) < 100:
		continue

	box = cv2.minAreaRect(c)
	box = cv2.cv.BoxPoints(box) if imutils.is_cv2() else cv2.boxPoints(box)
	box = np.array(box, dtype="int")
	print('box:',box)
	box = perspective.order_points(box)
	cv2.drawContours(orig, [box.astype("int")], -1, (0, 255, 0), 2)

	for (x, y) in box:
		cv2.circle(orig, (int(x), int(y)), 5, (0, 0, 255), -1)

	(tl, tr, br, bl) = box
	(tltrX, tltrY) = midpoint(tl, tr)
	(blbrX, blbrY) = midpoint(bl, br)

	(tlblX, tlblY) = midpoint(tl, bl)
	(trbrX, trbrY) = midpoint(tr, br)

	# draw the midpoints on the image
	cv2.circle(orig, (int(tltrX), int(tltrY)), 5, (255, 0, 0), -1)
	cv2.circle(orig, (int(blbrX), int(blbrY)), 5, (255, 0, 0), -1)
	cv2.circle(orig, (int(tlblX), int(tlblY)), 5, (255, 0, 0), -1)
	cv2.circle(orig, (int(trbrX), int(trbrY)), 5, (255, 0, 0), -1)

	# draw lines between the midpoints
	cv2.line(orig, (int(tltrX), int(tltrY)), (int(blbrX), int(blbrY)),
		(255, 0, 255), 2)
	cv2.line(orig, (int(tlblX), int(tlblY)), (int(trbrX), int(trbrY)),
		(255, 0, 255), 2)

	# compute the Euclidean distance between the midpoints
	dA = dist.euclidean((tltrX, tltrY), (blbrX, blbrY))
	dB = dist.euclidean((tlblX, tlblY), (trbrX, trbrY))

	# if the pixels per metric has not been initialized, then
	# compute it as the ratio of pixels to supplied metric
	# (in this case, inches)
	if pixelsPerMetric is None:
		pixelsPerMetric = dB / WIDTH

	# compute the size of the object
	dimA = dA / pixelsPerMetric
	dimB = dB / pixelsPerMetric

	# draw the object sizes on the image
	cv2.putText(orig, "{:.1f}in".format(dimA),
		(int(tltrX - 15), int(tltrY - 10)), cv2.FONT_HERSHEY_SIMPLEX,
		0.65, (255, 255, 255), 2)
	cv2.putText(orig, "{:.1f}in".format(dimB),
		(int(trbrX + 10), int(trbrY)), cv2.FONT_HERSHEY_SIMPLEX,
		0.65, (255, 255, 255), 2)
cv2.imwrite('orig.jpg', orig)

2017-10-28 18:45:23 qq_30356613

Filters are an important topic in image-processing courses and can be broadly divided into two classes: spatial-domain filters and frequency-domain filters. This post introduces four common filters: the median, mean, Gaussian, and bilateral filters, with OpenCV-based implementations. Spatial filters are generally applied by convolving the original image with a template; please look up convolution separately if needed.

Theory:

A linear filter can be written as $g(i,j)=\sum_{k,l}f(i+k,\,j+l)\,h(k,l)$. The mean filter and the Gaussian filter are linear filters, so we look at these two first.

Mean filter:

Template: an $n\times n$ box of ones scaled by $1/n^2$, e.g. $\frac{1}{9}\left[\begin{matrix}1&1&1\\1&1&1\\1&1&1\end{matrix}\right]$ for $3\times3$.

Starting from the first element of the image, convolve the template with the original image; intuitively, mean filtering replaces each gray value with the average of the neighbouring gray values.

Gaussian filter:

Template: generated from the Gaussian kernel function $h(x,y)=\frac{1}{2\pi\sigma^2}e^{-\frac{x^2+y^2}{2\sigma^2}}$; sampling and normalizing this function on a $3\times3$ grid yields the $3\times3$ Gaussian kernel template.

Median filter: also a spatial-domain filter. The main idea is to take the neighbouring pixels, sort them, and use the middle gray value as the output gray value for the pixel.

Bilateral filter: combines a spatial Gaussian weight with a range (intensity-difference) Gaussian weight, so that smoothing preserves edges.

C++ implementation:

static void exchange(int& a, int& b)
{	
	int t = 0;
	t = a;
	a = b;
	b = t;
}

static void bubble_sort(int* K, int length)
{
	for (int i = 0; i < length; i++)
		for (int j = i + 1; j < length; j++)
		{
			if (K[i]>K[j])
				exchange(K[i], K[j]);
		}
}
/// Generate a 2-D Gaussian kernel (the mean parameter u is unused; the kernel is centered on the window)
static cv::Mat generate_gassian_kernel(double u, double sigma, cv::Size size)
{
	int width = size.width;
	int height = size.height;
	cv::Mat gassian_kernel(cv::Size(width, height), CV_64FC1);
	double sum = 0;
	double sum_sum = 0;
	for (int i = 0; i < height; i++)      // rows
		for (int j = 0; j < width; j++)   // cols
		{
			sum = 1.0 / 2.0 / CV_PI / sigma / sigma * exp(-1.0 * ((i - height / 2)*(i - height / 2) + (j - width / 2)*(j - width / 2)) / 2.0 / sigma / sigma);
			sum_sum += sum;
			gassian_kernel.ptr<double>(i)[j] = sum;
		}
	// normalize so that the weights sum to 1
	for (int i = 0; i < height; i++)
		for (int j = 0; j < width; j++)
		{
			gassian_kernel.ptr<double>(i)[j] /= sum_sum;
		}
	return gassian_kernel;
}
/// Mean (box) filter
void lmt_main_blur(cv::Mat& img_in, cv::Mat& img_out, int kernel_size)
{
	img_out = img_in.clone();
	cv::Mat mat1;
	cv::copyMakeBorder(img_in, mat1, kernel_size, kernel_size, kernel_size, kernel_size, cv::BORDER_REPLICATE);

	int cols = mat1.cols;
	int rows = mat1.rows;
	int channels = img_out.channels();
	const uchar* const pt = mat1.ptr<uchar>(0);
	uchar* pt_out = img_out.ptr<uchar>(0);

	for (int i = kernel_size; i < rows - kernel_size; i++)
	{
		for (int j = kernel_size; j < cols - kernel_size; j++)
		{
			if (channels == 1)
			{
				long long int sum_pixel = 0;
				// symmetric (2k+1) x (2k+1) window centered on the pixel
				for (int m = -1 * kernel_size; m <= kernel_size; m++)
					for (int n = -1 * kernel_size; n <= kernel_size; n++)
					{
						sum_pixel += pt[(i + m)*cols + (j + n)];
					}
				int win = (2 * kernel_size + 1) * (2 * kernel_size + 1);
				img_out.ptr<uchar>(i - kernel_size)[j - kernel_size] = (uchar)((double)sum_pixel / win);
			}
			else if (channels == 3)
			{
				long long int sum_pixel = 0;
				long long int sum_pixel1 = 0;
				long long int sum_pixel2 = 0;
				for (int m = -1 * kernel_size; m <= kernel_size; m++)
					for (int n = -1 * kernel_size; n <= kernel_size; n++)
					{
						sum_pixel += pt[((i + m)*cols + (j + n))*channels + 0];
						sum_pixel1 += pt[((i + m)*cols + (j + n))*channels + 1];
						sum_pixel2 += pt[((i + m)*cols + (j + n))*channels + 2];
					}
				int win = (2 * kernel_size + 1) * (2 * kernel_size + 1);
				img_out.ptr<uchar>(i - kernel_size)[(j - kernel_size)*channels + 0] = (uchar)((double)sum_pixel / win);
				img_out.ptr<uchar>(i - kernel_size)[(j - kernel_size)*channels + 1] = (uchar)((double)sum_pixel1 / win);
				img_out.ptr<uchar>(i - kernel_size)[(j - kernel_size)*channels + 2] = (uchar)((double)sum_pixel2 / win);
			}
		}
	}

}
/// Median filter (note: assumes a 3-channel input, since the image is split into three planes)
void lmt_median_blur(cv::Mat& img_in, cv::Mat& img_out, int kernel_size)
{
	img_out = img_in.clone();
	cv::Mat mat1;
	cv::copyMakeBorder(img_in, mat1, kernel_size, kernel_size, kernel_size, kernel_size, cv::BORDER_REPLICATE);

	int cols = mat1.cols;
	int rows = mat1.rows;
	int channels = img_out.channels();

	cv::Mat mat[3];
	cv::Mat mat_out[3];
	cv::split(mat1, mat);
	cv::split(img_out, mat_out);
	for (int k = 0; k < 3; k++)
	{
		const uchar* const pt = mat[k].ptr<uchar>(0);
		uchar* pt_out = mat_out[k].ptr<uchar>(0);
		for (int i = kernel_size; i < rows - kernel_size; i++)
		{
			for (int j = kernel_size; j < cols - kernel_size; j++)
			{
				int* K = new int[(2 * kernel_size + 1) * (2 * kernel_size + 1)];
				int ker_num = 0;
				for (int m = -1 * kernel_size; m <= kernel_size; m++)
					for (int n = -1 * kernel_size; n <= kernel_size; n++)
					{
						K[ker_num] = pt[(i + m)*cols + (j + n)];
						ker_num++;
					}
				bubble_sort(K, ker_num);
				mat_out[k].ptr<uchar>(i - kernel_size)[j - kernel_size] = K[ker_num / 2];
				delete[] K;  // release the per-pixel buffer
			}
		}
	}
	cv::merge(mat_out, 3, img_out);
}
/// Gaussian filter
void lmt_gaussian_blur(cv::Mat& img_src, cv::Mat& img_dst, cv::Size kernel_size)
{
	img_dst = img_src.clone();  // start from a copy so the border pixels, which the loop does not touch, keep their original values
	int cols = img_src.cols;
	int rows = img_src.rows;
	int channels = img_src.channels();
	cv::Mat gassian_kernel = generate_gassian_kernel(0, 1, kernel_size);
	int width = kernel_size.width / 2;
	int height = kernel_size.height / 2;
	for (int i = height; i < rows - height; i++)
	{
		for (int j = width; j < cols - width; j++)
		{
			for (int k = 0; k < channels; k++)
			{
				double sum = 0.0;
				for (int m = -height; m <= height; m++)
				{
					for (int n = -width; n <= width; n++)
					{
						sum += (double)(img_src.ptr<uchar>(i + m)[(j + n)*channels + k]) * gassian_kernel.ptr<double>(height + m)[width + n];
					}
				}
				if (sum > 255.0)
					sum = 255;
				if (sum < 0.0)
					sum = 0;
				img_dst.ptr<uchar>(i)[j*channels + k] = (uchar)sum;
			}
		}
	}
}
/// Bilateral filter
void lmt_bilateral_filter(cv::Mat& img_in, cv::Mat& img_out, const int r, double sigma_d, double sigma_r)
{
	int i, j, m, n, k;
	int nx = img_in.cols, ny = img_in.rows, m_nChannels = img_in.channels();
	const int w_filter = 2 * r + 1; // filter window side length

	double gaussian_d_coeff = -0.5 / (sigma_d * sigma_d);
	double gaussian_r_coeff = -0.5 / (sigma_r * sigma_r);
	double  **d_metrix = new double *[w_filter];
	for (int i = 0; i < w_filter; ++i)
		d_metrix[i] = new double[w_filter];
	
	double r_metrix[256];  // similarity weight  
	img_out = cv::Mat(img_in.size(),img_in.type());
	uchar* m_imgData = img_in.ptr<uchar>(0);
	uchar* m_img_outData = img_out.ptr<uchar>(0);
	// copy the original image  
	double* img_tmp = new double[m_nChannels * nx * ny];
	for (i = 0; i < ny; i++)
		for (j = 0; j < nx; j++)
			for (k = 0; k < m_nChannels; k++)
			{
				img_tmp[i * m_nChannels * nx + m_nChannels * j + k] = m_imgData[i * m_nChannels * nx + m_nChannels * j + k];
			}

	// compute spatial weight  
	for (i = -r; i <= r; i++)
		for (j = -r; j <= r; j++)
		{
			int x = j + r;
			int y = i + r;

			d_metrix[y][x] = exp((i * i + j * j) * gaussian_d_coeff);
		}

	// compute similarity weight  
	for (i = 0; i < 256; i++)
	{
		r_metrix[i] = exp(i * i * gaussian_r_coeff);
	}

	// bilateral filter  
	for (i = 0; i < ny; i++)
		for (j = 0; j < nx; j++)
		{
			for (k = 0; k < m_nChannels; k++)
			{
				double weight_sum, pixcel_sum;
				weight_sum = pixcel_sum = 0.0;

				for (m = -r; m <= r; m++)
					for (n = -r; n <= r; n++)
					{
						if (m*m + n*n > r*r) continue;

						int x_tmp = j + n;
						int y_tmp = i + m;

						x_tmp = x_tmp < 0 ? 0 : x_tmp;
						x_tmp = x_tmp > nx - 1 ? nx - 1 : x_tmp;   // border handling: replicate
						y_tmp = y_tmp < 0 ? 0 : y_tmp;
						y_tmp = y_tmp > ny - 1 ? ny - 1 : y_tmp;

						int pixcel_dif = (int)abs(img_tmp[y_tmp * m_nChannels * nx + m_nChannels * x_tmp + k] - img_tmp[i * m_nChannels * nx + m_nChannels * j + k]);
						double weight_tmp = d_metrix[m + r][n + r] * r_metrix[pixcel_dif];  // combined spatial x similarity weight

						pixcel_sum += img_tmp[y_tmp * m_nChannels * nx + m_nChannels * x_tmp + k] * weight_tmp;
						weight_sum += weight_tmp;
					}

				pixcel_sum = pixcel_sum / weight_sum;
				m_img_outData[i * m_nChannels * nx + m_nChannels * j + k] = (uchar)pixcel_sum;

			} // one channel

		} // end of all loops
	for (i = 0; i < w_filter; i++)
		delete[] d_metrix[i];
	delete[] d_metrix;
	delete[] img_tmp;  // free the temporary copy of the image
}

OpenCV API implementation:

A brief overview of the relevant OpenCV functions:

Bilateral filter: bilateralFilter(InputArray src, OutputArray dst, int d, double sigmaColor, double sigmaSpace, int borderType=BORDER_DEFAULT)

   src: input image

   dst: filtered output image

   d: diameter of the pixel neighborhood used during filtering

   sigmaColor: filter sigma in the color (range) domain

   sigmaSpace: filter sigma in the coordinate (spatial) domain

   borderType: border extrapolation mode, one of BORDER_REPLICATE, BORDER_REFLECT, BORDER_DEFAULT, BORDER_REFLECT_101, BORDER_TRANSPARENT, BORDER_ISOLATED

 

Mean filter: blur(InputArray src, OutputArray dst, Size ksize, Point anchor=Point(-1,-1), int borderType=BORDER_DEFAULT);

   src: input image

   dst: filtered output image

   ksize: size of the averaging kernel

   anchor: anchor point of the kernel, i.e. the position of the filtered pixel within the kernel; Point(-1,-1) means the kernel center

   borderType: border extrapolation mode, one of BORDER_REPLICATE, BORDER_REFLECT, BORDER_DEFAULT, BORDER_REFLECT_101, BORDER_TRANSPARENT, BORDER_ISOLATED

 

Gaussian filter: GaussianBlur(InputArray src, OutputArray dst, Size ksize, double sigmaX, double sigmaY=0, int borderType=BORDER_DEFAULT);

   src: input image

   dst: filtered output image

   ksize: size of the Gaussian kernel

   sigmaX: Gaussian standard deviation in the x direction

   sigmaY: Gaussian standard deviation in the y direction (if 0, it is set equal to sigmaX)

   borderType: border extrapolation mode, one of BORDER_REPLICATE, BORDER_REFLECT, BORDER_DEFAULT, BORDER_REFLECT_101, BORDER_TRANSPARENT, BORDER_ISOLATED

 

Median filter: medianBlur(InputArray src, OutputArray dst, int ksize);

    src: input image

    dst: filtered output image

    ksize: aperture size of the median filter; must be odd and greater than 1

Demo:

void bilateral_filter_show(void)
{
	cv::Mat mat1 = cv::imread("F:\\CVlibrary\\obama.jpg", CV_LOAD_IMAGE_GRAYSCALE); // load as grayscale
	if (mat1.empty())
		return;
	cv::imshow("grayscale input", mat1);
	cv::Mat src = cv::imread("F:\\CVlibrary\\obama.jpg");
	cv::imshow("original color image", src);
	std::cout << "channel = " << mat1.channels() << std::endl;

	cv::Mat mat3;
	cv::bilateralFilter(src, mat3, 5, 50, 50, cv::BORDER_DEFAULT);
	cv::imshow("OpenCV bilateral filter", mat3);
	cv::Mat mat4;
	cv::blur(src, mat4, cv::Size(3, 3));
	cv::imshow("mean filter", mat4);
	cv::Mat mat5;
	cv::GaussianBlur(src, mat5, cv::Size(5, 5), 1, 1);
	cv::imshow("Gaussian filter", mat5);
	cv::Mat mat6;
	cv::medianBlur(src, mat6, 3);
	cv::imshow("median filter", mat6);
	cv::Mat mat7;
	lmt_gaussian_blur(src, mat7, cv::Size(5, 5));
	cv::imshow("my gaussian image", mat7);

	cv::waitKey(0);
}
