## Image Contrast Calculation

2014-11-03 16:19:27


Solution in MATLAB:
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Compute image contrast
% Method 1: sum of squared differences between each centre pixel's gray
% value and the gray values of its 4 neighbours, divided by the number
% of squared terms.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
[m,n] = size(f);                       % rows m and columns n of the original image f
g = padarray(f, [1 1], 'replicate');   % pad by one pixel, replicating the border pixels
[r,c] = size(g);                       % rows r and columns c of the padded image
g = double(g);                         % convert to double for the arithmetic
k = 0;                                 % accumulator for the squared differences
for i = 2:r-1
    for j = 2:c-1
        k = k + (g(i,j-1)-g(i,j))^2 + (g(i-1,j)-g(i,j))^2 + ...
                (g(i,j+1)-g(i,j))^2 + (g(i+1,j)-g(i,j))^2;
    end
end
cg = k/(4*(m-2)*(n-2)+3*(2*(m-2)+2*(n-2))+4*2);  % contrast of the original image
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Compute image contrast
% Method 2: sum of squared differences between each centre pixel's gray
% value and the gray values of its 8 neighbours, divided by the number
% of difference terms.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
[m,n] = size(f);                       % rows m and columns n of the original image f
g = padarray(f, [1 1], 'replicate');   % pad by one pixel, replicating the border pixels
[r,c] = size(g);                       % rows r and columns c of the padded image
g = double(g);                         % convert to double for the arithmetic
k = 0;                                 % accumulator for the squared differences
for i = 2:r-1
    for j = 2:c-1
        k = k + (g(i,j-1)-g(i,j))^2 + (g(i-1,j)-g(i,j))^2 + ...
                (g(i,j+1)-g(i,j))^2 + (g(i+1,j)-g(i,j))^2 + ...
                (g(i-1,j-1)-g(i,j))^2 + (g(i-1,j+1)-g(i,j))^2 + ...
                (g(i+1,j-1)-g(i,j))^2 + (g(i+1,j+1)-g(i,j))^2;
    end
end
cg = k/(8*(m-2)*(n-2)+6*(2*(m-2)+2*(n-2))+4*3);  % contrast of the original image
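For readers working in Python, the 8-neighbour method above can be sketched with NumPy array shifts instead of an explicit double loop. This is my own sketch, not the original author's code: `contrast8` is an assumed name, the replicate padding mirrors the MATLAB comment, and the denominator is the count used in the MATLAB post.

```python
import numpy as np

def contrast8(f):
    """8-neighbour contrast as in the MATLAB code above: squared
    differences between each pixel and its 8 neighbours, computed on a
    replicate-padded copy and divided by the post's term count."""
    f = np.asarray(f, dtype=np.float64)
    m, n = f.shape
    g = np.pad(f, 1, mode='edge')  # replicate the border pixels
    k = 0.0
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            if di == 0 and dj == 0:
                continue
            # neighbour image shifted by (di, dj) relative to f
            k += np.sum((g[1 + di:1 + di + m, 1 + dj:1 + dj + n] - f) ** 2)
    # denominator from the post: 8 terms per interior pixel,
    # 6 per edge pixel, 3 per corner pixel
    return k / (8 * (m - 2) * (n - 2) + 6 * (2 * (m - 2) + 2 * (n - 2)) + 4 * 3)
```

Each shifted slice plays the role of one `g(i±1,j±1)` term in the MATLAB loop, so the accumulated sum is the same; a constant image gives contrast 0.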


• Based on MATLAB: reads an image file and computes its contrast as the sum of squared differences between each centre pixel's gray value and its 8 neighbours' gray values, divided by the number of differences. Note: just run it and pick a path to get the result; convenient for batches of images...
• A MATLAB program for computing image contrast, covering both the 4-neighbour and 8-neighbour methods, with a document describing the algorithm.
Part 1: Image contrast theory
1. Definition
Contrast: informally, the degree of stretch between light and dark; it usually reflects how crisp an image looks. The contrast formula used here (shown as an image in the original post) is the sum of squared gray-level differences between each pixel and its 4 neighbours, divided by the number of difference terms.

2. A worked example
[The sample grid appears as an image in the original post.]

Explanation:

Where does each parenthesized term come from? Using the 4-neighbourhood: the first term takes the pixel in row 1, column 1 as the centre and subtracts it from each of its in-image neighbours, squaring and summing, i.e. (2-1)² + (3-1)²; the second term is (1-3)² + (9-3)² + (1-3)²; and so on.
Where does the final 48 come from? It is the total number of squared terms.
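The term count generalizes: an interior pixel contributes 4 squared differences, an edge pixel 3, a corner pixel 2. A quick check (the helper name is mine), which also confirms that the sample grid must be 4×4 for the total to come out as 48:

```python
def n_diff_terms(m, n):
    """Total number of 4-neighbour squared-difference terms in an m x n image."""
    interior = 4 * (m - 2) * (n - 2)          # 4 neighbours each
    edges = 3 * (2 * (m - 2) + 2 * (n - 2))   # 3 neighbours each
    corners = 2 * 4                           # 2 neighbours each, 4 corners
    return interior + edges + corners
```

For a 4×4 image this gives 4·4 + 3·8 + 8 = 48, matching the example; for 3×3 it gives 24.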

Part 2: Code

# The worked example above and the code below both use the 4-neighbour method.
import cv2
import numpy as np

def contrast(img0):
    img1 = cv2.cvtColor(img0, cv2.COLOR_BGR2GRAY)  # colour to grayscale
    m, n = img1.shape
    # pad the matrix outward by one pixel, replicating the border
    img1_ext = cv2.copyMakeBorder(img1, 1, 1, 1, 1, cv2.BORDER_REPLICATE) / 1.0  # /1.0 promotes uint8 to float
    rows_ext, cols_ext = img1_ext.shape
    b = 0.0
    for i in range(1, rows_ext - 1):
        for j in range(1, cols_ext - 1):
            b += ((img1_ext[i, j] - img1_ext[i, j + 1]) ** 2 + (img1_ext[i, j] - img1_ext[i, j - 1]) ** 2 +
                  (img1_ext[i, j] - img1_ext[i + 1, j]) ** 2 + (img1_ext[i, j] - img1_ext[i - 1, j]) ** 2)

    cg = b / (4 * (m - 2) * (n - 2) + 3 * (2 * (m - 2) + 2 * (n - 2)) + 2 * 4)  # the same count as the 48 above
    print(cg)

# img0 .. img3 are BGR test images loaded beforehand (e.g. with cv2.imread)
contrast(img0)
contrast(img1)
contrast(img2)
contrast(img3)

Results (for reference):
13.12
15.19
16.24
18.21
Conclusion: the sharper the image, the larger the contrast value.
The images below were used (they are compressed here, so your numbers may differ, but the ordering stays the same).


• Matlab programs for: 1. image standard deviation; 2. image contrast; 3. image sharpness.
Contents: Code One, Code Two, Test code
This is region-contrast-based saliency detection code [see the original paper].
Region contrast computation for image segmentation:

compute each region's centre and centroid (on the precomputed segmentation)
Gaussian weighting of the colour-similarity distance between regions

form new regions
count the pixels on the region borders
produce the segmented regions

Development environment: Windows 10, Visual Studio 2015
Code One
#%%cython --cplus --annotate
import numpy as np
cimport cython
cimport numpy as np
from cpython cimport array
from libc.math cimport pow
from libc.math cimport sqrt
from libc.math cimport exp
from libc.math cimport fabs
from libcpp.vector cimport vector
from scipy.spatial.distance import pdist, squareform

@cython.boundscheck(False)
@cython.wraparound(False)
cpdef Build_Regions_Contrast(int regNum, np.ndarray[np.int32_t, ndim=2] regIdx1i,
                             int[:,:] colorIdx1i, float[:,:,:] color3fv,
                             float sigmaDist, float ratio, float thr):
    cdef:
        int height = regIdx1i.shape[0]
        int width = regIdx1i.shape[1]
        int colorNum = color3fv.shape[1]
        float cx = <float>width / 2.0
        float cy = <float>height / 2.0
        Py_ssize_t x = 0, y = 0, i = 0, j = 0, m = 0, n = 0, yi = 0, xi = 0, iii = 0

    # pixels per region (minlength guards against trailing empty regions)
    pixNum_np = np.bincount(regIdx1i.reshape(1, width * height)[0], minlength=regNum)

    ybNo_np = np.zeros(regNum, dtype=np.float64)
    regs_mX = np.zeros(regNum, dtype=np.float64)
    regs_mY = np.zeros(regNum, dtype=np.float64)
    regs_vX = np.zeros(regNum, dtype=np.float64)
    regs_vY = np.zeros(regNum, dtype=np.float64)

    regColor_np = np.zeros((regNum, colorNum), dtype=np.int32)
    regs_np = np.zeros((regNum, 4), dtype=np.float64)

    cdef int [:,::1] regColor_view = regColor_np
    cdef double[:,::1] regs_view = regs_np

    with nogil:
        for y in range(height):
            for x in range(width):
                regs_view[regIdx1i[y, x], 0] = fabs(x - cx)  # ad2c_0
                regs_view[regIdx1i[y, x], 1] = fabs(y - cy)  # ad2c_1
                regs_view[regIdx1i[y, x], 2] += x            # region centre x accumulator
                regs_view[regIdx1i[y, x], 3] += y            # region centre y accumulator
                regColor_view[regIdx1i[y, x], colorIdx1i[y, x]] += 1

    regs_np[:, 0] = np.divide(regs_np[:, 0], pixNum_np * width)
    regs_np[:, 1] = np.divide(regs_np[:, 1], pixNum_np * height)

    regs_mX = np.divide(regs_np[:, 2], pixNum_np)
    regs_mY = np.divide(regs_np[:, 3], pixNum_np)

    regs_np[:, 2] = np.divide(regs_mX, width)
    regs_np[:, 3] = np.divide(regs_mY, height)

    # per-region colour frequency histogram
    freIdx_f64 = regColor_np.astype(np.float64)
    tile_pixNum = np.tile(pixNum_np[:, np.newaxis], (1, colorNum))
    freIdx_f64 = np.divide(freIdx_f64, tile_pixNum)

    #==========================================================================================

    similar_dist = squareform(pdist(color3fv[0]))  # pairwise colour distances

    rDist_np = np.zeros((regNum, regNum), np.float64)
    regSal1d_np = np.zeros((1, regNum), np.float64)

    cdef double[:,:] regs_view2 = regs_np
    cdef double[:] mX_view = regs_mX
    cdef double[:] mY_view = regs_mY
    cdef double[::1] vX_view = regs_vX
    cdef double[::1] vY_view = regs_vY

    cdef double[:,:] similar_dist_view = similar_dist
    cdef double[:,::1] rDist_view = rDist_np
    cdef double[:,::1] regSal1d_view = regSal1d_np
    cdef double[:,:] freIdx_f64_view = freIdx_f64
    cdef long long[:] pixNum_view = pixNum_np
    cdef double dd_np = 0.0

    with nogil:
        for i in range(regNum):
            for j in range(regNum):
                if i < j:
                    for m in range(colorNum):
                        for n in range(colorNum):
                            if freIdx_f64_view[j, n] != 0.0 and freIdx_f64_view[i, m] != 0.0:
                                dd_np += similar_dist_view[m, n] * freIdx_f64_view[i, m] * freIdx_f64_view[j, n]
                    # colour distance weighted by a Gaussian of the spatial distance
                    rDist_view[i][j] = dd_np * exp(-1.0 * (pow(regs_view2[i, 2] - regs_view2[j, 2], 2) + pow(regs_view2[i, 3] - regs_view2[j, 3], 2)) / sigmaDist)
                    rDist_view[j][i] = rDist_view[i][j]
                    dd_np = 0.0
                regSal1d_view[0, i] += pixNum_view[j] * rDist_view[i, j]
            regSal1d_view[0, i] *= exp(-9.0 * (pow(regs_view2[i, 0], 2) + pow(regs_view2[i, 1], 2)))

    #==========================================================================================
    # mean absolute deviation of each region's pixels from its centroid
    for yi in range(height):
        for xi in range(width):
            vX_view[regIdx1i[yi, xi]] += fabs(xi - mX_view[regIdx1i[yi, xi]])
            vY_view[regIdx1i[yi, xi]] += fabs(yi - mY_view[regIdx1i[yi, xi]])
    regs_vX = np.divide(regs_vX, pixNum_np)
    regs_vY = np.divide(regs_vY, pixNum_np)

    #=============== number of border pixels in the x and y border bands ===============
    cdef:
        vector[int] bPnts0
        vector[int] bPnts1
        array.array pnt = array.array('i', [0, 0])
        int [:] pnt_view = pnt
        int wGap = <int>(width * ratio + 0.5)
        int hGap = <int>(height * ratio + 0.5)
        int sx = 0, sx_right = width - wGap
        double xR = 0.25 * hGap
        double yR = 0.25 * wGap
        double [::1] ybNum = ybNo_np

    with nogil:
        # top band
        while pnt_view[1] != hGap:
            pnt_view[0] = sx
            while pnt_view[0] != width:
                ybNum[regIdx1i[pnt_view[1], pnt_view[0]]] += 1
                bPnts0.push_back(pnt_view[0])
                bPnts1.push_back(pnt_view[1])
                pnt_view[0] += 1
            pnt_view[1] += 1

        pnt_view[0] = 0
        pnt_view[1] = height - hGap
        # bottom band
        while pnt_view[1] != height:
            pnt_view[0] = sx
            while pnt_view[0] != width:
                ybNum[regIdx1i[pnt_view[1], pnt_view[0]]] += 1
                bPnts0.push_back(pnt_view[0])
                bPnts1.push_back(pnt_view[1])
                pnt_view[0] += 1
            pnt_view[1] += 1

        pnt_view[0] = 0
        pnt_view[1] = 0
        # left band
        while pnt_view[1] != height:
            pnt_view[0] = sx
            while pnt_view[0] != wGap:
                ybNum[regIdx1i[pnt_view[1], pnt_view[0]]] += 1
                bPnts0.push_back(pnt_view[0])
                bPnts1.push_back(pnt_view[1])
                pnt_view[0] += 1
            pnt_view[1] += 1

        pnt_view[0] = sx_right
        pnt_view[1] = 0
        # right band
        while pnt_view[1] != height:
            pnt_view[0] = sx_right
            while pnt_view[0] != width:
                ybNum[regIdx1i[pnt_view[1], pnt_view[0]]] += 1
                bPnts0.push_back(pnt_view[0])
                bPnts1.push_back(pnt_view[1])
                pnt_view[0] += 1
            pnt_view[1] += 1

    regL_np = np.zeros(regNum, np.int64)
    bReg1u = np.zeros((height, width), np.uint8)
    cdef unsigned char [:,::1] bReg1u_view = bReg1u

    lk_np = np.divide(np.multiply(ybNum, yR), regs_vX)
    regL_np = np.where(np.divide(lk_np, thr) > 1, 255, 0)
    # np.take rebinds bReg1u, so refresh the memoryview before writing through it
    bReg1u = np.take(regL_np, regIdx1i).astype(np.uint8)
    bReg1u_view = bReg1u

    with nogil:
        for iii in range(bPnts0.size()):
            bReg1u_view[bPnts1[iii], bPnts0[iii]] = 255
    return regSal1d_np, bReg1u

An accelerated version follows. For reasons I could not pin down, enabling multithreaded acceleration made compilation fail, so the parallel parts have been removed from the code below; add them back if you need them.
Code Two
#%%cython --cplus --annotate
import numpy as np
cimport cython
cimport numpy as np
from cpython cimport array
from libc.math cimport pow
from libc.math cimport sqrt
from libc.math cimport exp
from libc.math cimport fabs
from libcpp.vector cimport vector
from scipy.spatial.distance import pdist, squareform

@cython.cdivision(True)
@cython.boundscheck(False)
@cython.wraparound(False)
cdef double[:,::1] init_regs(int[:,:] regIdx1i,
                             double[:,::1] regs_view,
                             int height,
                             int width,
                             float cx,
                             float cy):
    cdef Py_ssize_t x = 0, y = 0
    with nogil:
        for y in range(height):
            for x in range(width):
                regs_view[regIdx1i[y, x], 0] = fabs(x - cx)  # ad2c_0
                regs_view[regIdx1i[y, x], 1] = fabs(y - cy)  # ad2c_1
                regs_view[regIdx1i[y, x], 2] += x            # region centre x accumulator
                regs_view[regIdx1i[y, x], 3] += y            # region centre y accumulator
    return regs_view

@cython.cdivision(True)
@cython.boundscheck(False)
@cython.wraparound(False)
cpdef Build_Regions_Contrast(int regNum, np.ndarray[np.int32_t, ndim=2] regIdx1i,
                             int[:,:] colorIdx1i, float[:,:,:] color3fv,
                             float sigmaDist, float ratio, float thr):
    cdef:
        int height = regIdx1i.shape[0]
        int width = regIdx1i.shape[1]
        int colorNum = color3fv.shape[1]
        float cx = <float>width / 2.0
        float cy = <float>height / 2.0
        Py_ssize_t i = 0, j = 0, m = 0, n = 0, yi = 0, xi = 0, iii = 0

    # pixels per region (minlength guards against trailing empty regions)
    pixNum_np = np.bincount(regIdx1i.reshape(1, width * height)[0], minlength=regNum)

    ybNo_np = np.zeros(regNum, dtype=np.float64)
    regs_mX = np.zeros(regNum, dtype=np.float64)
    regs_mY = np.zeros(regNum, dtype=np.float64)
    regs_vX = np.zeros(regNum, dtype=np.float64)
    regs_vY = np.zeros(regNum, dtype=np.float64)

    regColor_np = np.zeros((regNum, colorNum), dtype=np.int32)
    regs_np = np.zeros((regNum, 4), dtype=np.float64)

    cdef double[:,::1] regs_view = regs_np
    regs_view = init_regs(regIdx1i, regs_view, height, width, cx, cy)

    # restored from Code One: accumulate the per-region colour histogram,
    # which this version otherwise leaves at zero
    cdef int [:,::1] regColor_view = regColor_np
    with nogil:
        for yi in range(height):
            for xi in range(width):
                regColor_view[regIdx1i[yi, xi], colorIdx1i[yi, xi]] += 1

    regs_np[:, 0] = np.divide(regs_np[:, 0], pixNum_np * width)
    regs_np[:, 1] = np.divide(regs_np[:, 1], pixNum_np * height)

    regs_mX = np.divide(regs_np[:, 2], pixNum_np)
    regs_mY = np.divide(regs_np[:, 3], pixNum_np)

    regs_np[:, 2] = np.divide(regs_mX, width)
    regs_np[:, 3] = np.divide(regs_mY, height)

    # per-region colour frequency histogram
    freIdx_f64 = regColor_np.astype(np.float64)
    tile_pixNum = np.tile(pixNum_np[:, np.newaxis], (1, colorNum))
    freIdx_f64 = np.divide(freIdx_f64, tile_pixNum)

    #==========================================================================================

    similar_dist = squareform(pdist(color3fv[0]))  # pairwise colour distances

    rDist_np = np.zeros((regNum, regNum), np.float64)
    regSal1d_np = np.zeros((1, regNum), np.float64)

    cdef double[:] mX_view = regs_mX
    cdef double[:] mY_view = regs_mY
    cdef double[::1] vX_view = regs_vX
    cdef double[::1] vY_view = regs_vY

    cdef double[:,:] similar_dist_view = similar_dist
    cdef double[:,::1] rDist_view = rDist_np
    cdef double[:,::1] regSal1d_view = regSal1d_np
    cdef double[:,:] freIdx_f64_view = freIdx_f64
    cdef long long[:] pixNum_view = pixNum_np
    cdef double dd_np = 0.0

    with nogil:
        for i in range(regNum):
            for j in range(regNum):
                if i < j:
                    for m in range(colorNum):
                        for n in range(colorNum):
                            if freIdx_f64_view[j, n] != 0.0 and freIdx_f64_view[i, m] != 0.0:
                                dd_np += similar_dist_view[m, n] * freIdx_f64_view[i, m] * freIdx_f64_view[j, n]
                    # colour distance weighted by a Gaussian of the spatial distance
                    rDist_view[i][j] = dd_np * exp(-1.0 * (pow(regs_view[i, 2] - regs_view[j, 2], 2) + pow(regs_view[i, 3] - regs_view[j, 3], 2)) / sigmaDist)
                    rDist_view[j][i] = rDist_view[i][j]
                    dd_np = 0.0
                regSal1d_view[0, i] += pixNum_view[j] * rDist_view[i, j]
            regSal1d_view[0, i] *= exp(-9.0 * (pow(regs_view[i, 0], 2) + pow(regs_view[i, 1], 2)))

    #==========================================================================================
    # mean absolute deviation of each region's pixels from its centroid
    for yi in range(height):
        for xi in range(width):
            vX_view[regIdx1i[yi, xi]] += fabs(xi - mX_view[regIdx1i[yi, xi]])
            vY_view[regIdx1i[yi, xi]] += fabs(yi - mY_view[regIdx1i[yi, xi]])
    regs_vX = np.divide(regs_vX, pixNum_np)
    regs_vY = np.divide(regs_vY, pixNum_np)

    #=============== number of border pixels in the x and y border bands ===============
    cdef:
        vector[int] bPnts0
        vector[int] bPnts1
        array.array pnt = array.array('i', [0, 0])
        int [:] pnt_view = pnt
        int wGap = <int>(width * ratio + 0.5)
        int hGap = <int>(height * ratio + 0.5)
        int sx = 0, sx_right = width - wGap
        double xR = 0.25 * hGap
        double yR = 0.25 * wGap
        double [::1] ybNum = ybNo_np

    with nogil:
        # top band
        while pnt_view[1] != hGap:
            pnt_view[0] = sx
            while pnt_view[0] != width:
                ybNum[regIdx1i[pnt_view[1], pnt_view[0]]] += 1
                bPnts0.push_back(pnt_view[0])
                bPnts1.push_back(pnt_view[1])
                pnt_view[0] += 1
            pnt_view[1] += 1

        pnt_view[0] = 0
        pnt_view[1] = height - hGap
        # bottom band
        while pnt_view[1] != height:
            pnt_view[0] = sx
            while pnt_view[0] != width:
                ybNum[regIdx1i[pnt_view[1], pnt_view[0]]] += 1
                bPnts0.push_back(pnt_view[0])
                bPnts1.push_back(pnt_view[1])
                pnt_view[0] += 1
            pnt_view[1] += 1

        pnt_view[0] = 0
        pnt_view[1] = 0
        # left band
        while pnt_view[1] != height:
            pnt_view[0] = sx
            while pnt_view[0] != wGap:
                ybNum[regIdx1i[pnt_view[1], pnt_view[0]]] += 1
                bPnts0.push_back(pnt_view[0])
                bPnts1.push_back(pnt_view[1])
                pnt_view[0] += 1
            pnt_view[1] += 1

        pnt_view[0] = sx_right
        pnt_view[1] = 0
        # right band
        while pnt_view[1] != height:
            pnt_view[0] = sx_right
            while pnt_view[0] != width:
                ybNum[regIdx1i[pnt_view[1], pnt_view[0]]] += 1
                bPnts0.push_back(pnt_view[0])
                bPnts1.push_back(pnt_view[1])
                pnt_view[0] += 1
            pnt_view[1] += 1

    regL_np = np.zeros(regNum, np.int64)
    bReg1u = np.zeros((height, width), np.uint8)
    cdef unsigned char [:,::1] bReg1u_view = bReg1u

    lk_np = np.divide(np.multiply(ybNum, yR), regs_vX)
    regL_np = np.where(np.divide(lk_np, thr) > 1, 255, 0)
    # np.take rebinds bReg1u, so refresh the memoryview before writing through it
    bReg1u = np.take(regL_np, regIdx1i).astype(np.uint8)
    bReg1u_view = bReg1u

    with nogil:
        for iii in range(bPnts0.size()):
            bReg1u_view[bPnts1[iii], bPnts0[iii]] = 255
    return regSal1d_np, bReg1u

The build emits a warning: LINK : warning LNK4044: unrecognized option '/openmp'; ignored. (MSVC's /openmp is a compiler flag the linker does not understand, and -O3/-march=native/-ffast-math are GCC/Clang flags, so the flag list below only partially applies to any single toolchain.)
The setup script:
from distutils.core import setup
from distutils.extension import Extension
from Cython.Distutils import build_ext
import numpy as np

filename = 'Regions_GetBorderReg_parallel'
full_filename = 'Regions_GetBorderReg_parallel.pyx'

ext_modules = [Extension(filename, [full_filename],
                         language='c++',
                         extra_compile_args=['-O3', '-march=native', '-ffast-math', '/openmp'])]

setup(
    cmdclass={'build_ext': build_ext},
    ext_modules=ext_modules,
    include_dirs=[np.get_include()]
)

Test code
%%cython --cplus --annotate
import numpy as np
cimport cython
cimport numpy as np
from cpython cimport array
from libc.math cimport pow
from libc.math cimport sqrt
from libc.math cimport exp
from libc.math cimport fabs
from libcpp.vector cimport vector
cimport openmp
from scipy.spatial.distance import pdist, squareform
from cython.parallel import prange, parallel

@cython.cdivision(True)
@cython.boundscheck(False)
@cython.wraparound(False)
cdef double[:,::1] init_regs(int[:,:] regIdx1i,
                             double[:,::1] regs_view,
                             int x,
                             int y,
                             int height,
                             int width,
                             float cx,
                             float cy):
    openmp.omp_set_dynamic(1)
    with nogil, parallel():
        for y in range(height):
            for x in range(width):
                regs_view[regIdx1i[y, x], 0] = fabs(x - cx)
                regs_view[regIdx1i[y, x], 1] = fabs(y - cy)
                regs_view[regIdx1i[y, x], 2] += x
                regs_view[regIdx1i[y, x], 3] += y

    return regs_view

%%cython --cplus --annotate
import numpy as np
cimport cython
cimport numpy as np
from cpython cimport array
from libc.math cimport pow
from libc.math cimport sqrt
from libc.math cimport exp
from libc.math cimport fabs
from libcpp.vector cimport vector
cimport openmp
from scipy.spatial.distance import pdist, squareform
from cython.parallel import prange, parallel

@cython.cdivision(True)
@cython.boundscheck(False)
@cython.wraparound(False)
cdef double[:,::1] init_regs(int[:,:] regIdx1i,
                             double[:,::1] regs_view,
                             int x,
                             int y,
                             int height,
                             int width,
                             float cx,
                             float cy):
    for y in prange(height, schedule='dynamic'):
        for x in range(width):
            regs_view[regIdx1i[y, x], 0] = fabs(x - cx)  # ad2c_0
            regs_view[regIdx1i[y, x], 1] = fabs(y - cy)  # ad2c_1
            regs_view[regIdx1i[y, x], 2] += x            # region centre x accumulator
            regs_view[regIdx1i[y, x], 3] += y            # region centre y accumulator
    return regs_view


Reference
https://stackoverflow.com/questions/40451203/cython-parallel-loop-problems


## Image Contrast

2013-02-03 16:29:05
This MATLAB code computes image contrast quickly and efficiently.
1. Gray-level histogram

# gray-level histogram
import numpy as np
import matplotlib.pyplot as plt
import cv2

def calGrayHist(img):
    rows, cols = img.shape[:2]
    cnt = np.zeros([256], dtype=np.uint32)
    for i in range(rows):
        for j in range(cols):
            cnt[img[i][j]] += 1
    return cnt

if __name__ == "__main__":
    img = cv2.imread('input.png', cv2.IMREAD_GRAYSCALE)  # illustrative path; load your own test image
    grayHist = calGrayHist(img)
    cv2.imshow('1', img)
    cv2.waitKey(2000)
    xrange = range(256)
    plt.plot(xrange, grayHist, '+', linewidth=2, c='green')
    y_maxValue = np.max(grayHist)
    print(y_maxValue)
    plt.axis([0, 255, 0, y_maxValue])
    plt.xlabel('gray Level')
    plt.ylabel('number of pixels')
    plt.show()
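As an aside, the counting loop in `calGrayHist` is equivalent to a single `np.bincount` call, which is much faster for large images (the function name below is my own shorthand, not from the original post):

```python
import numpy as np

def calGrayHistFast(img):
    """Histogram of gray levels 0..255 via bincount; same output as calGrayHist."""
    # flatten to 1-D and count occurrences of each value; minlength pads to 256 bins
    return np.bincount(np.asarray(img, dtype=np.uint8).ravel(), minlength=256)
```

The returned array always has 256 entries and sums to the pixel count, just like the loop version.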

# Matplotlib's built-in histogram
import numpy as np
import matplotlib.pyplot as plt
import cv2

if __name__ == "__main__":
    img = cv2.imread('input.png', cv2.IMREAD_GRAYSCALE)  # illustrative path
    rows, cols = img.shape[:2]
    print(img.shape)
    pixelSequence = img.reshape([rows * cols])
    numberBins = 256
    histogram, bins, patch = plt.hist(pixelSequence, numberBins, facecolor='black', histtype='bar')
    plt.xlabel(u"gray Level")
    plt.ylabel(u'number of pixels')
    y_maxValue = np.max(histogram)
    print(y_maxValue)
    plt.axis([0, 256, 0, y_maxValue])
    plt.show()

2. Linear transform

O = a*I + b

a = 1, b = 0 gives a copy of the original; a > 1 makes the output O higher-contrast than I, and 0 < a < 1 makes it lower-contrast.

b controls brightness: b > 0 brightens the image, b < 0 darkens it.
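The split of roles can be verified on a toy array: `a` scales the spacing between gray levels (contrast) while `b` only shifts them (brightness). A minimal sketch of O = a*I + b with clipping to [0, 255] (the function name is mine):

```python
import numpy as np

def linear_transform(img, a, b):
    """O = a*I + b, rounded and clipped to the uint8 range."""
    out = a * np.asarray(img, dtype=np.float64) + b
    return np.clip(np.round(out), 0, 255).astype(np.uint8)
```

With a = 2 the gap between two gray levels doubles; with b = 30 the gap is unchanged and both levels just move up by 30.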

import cv2
import numpy as np

img = cv2.imread('input.png', cv2.IMREAD_GRAYSCALE)  # illustrative path
a = 2
t = float(a) * img
t[t > 255] = 255          # clip to the uint8 range
t = np.round(t)
t = t.astype(np.uint8)

cv2.imshow("img", img)
cv2.imshow('t', t)
cv2.waitKey(0)
cv2.destroyAllWindows()

#include <opencv2/core/core.hpp>
#include <opencv2/highgui.hpp>
#include <opencv2/imgproc.hpp>
using namespace cv;

int main()
{
    Mat img = imread("input.png", IMREAD_GRAYSCALE);  // illustrative path
    int rows = img.rows;
    int cols = img.cols;

    // Linear transform, option 1: the member function
    // Mat::convertTo(OutputArray m, int rtype, double alpha = 1, double beta = 0)
    Mat out;
    img.convertTo(out, CV_8UC1, 2.0, 0);
    imshow("out1", out);
    // Option 2: the multiplication operator; whatever the constant's type,
    // the output matrix keeps the input matrix's type.
    Mat out1;
    out1 = 2 * img + 100;
    imshow("out2", out1);
    // Option 3: the OpenCV function
    // convertScaleAbs(InputArray src, OutputArray dst, double alpha = 1, double beta = 0)
    Mat out2;
    convertScaleAbs(img, out2, 2, 10);
    imshow("out3", out2);
    waitKey(0);
}

3. Histogram normalization

import numpy as np
import cv2
import Hist   # the author's own histogram-plotting helper

img = cv2.imread('input.png', cv2.IMREAD_GRAYSCALE)  # illustrative path
Hist.Hist(img, "RAW")
mx = np.max(img)
mi = np.min(img)
cv2.imshow("1", img)
print(mx, mi)
outx = 255
outi = 0
a = float(outx - outi) / (mx - mi)   # stretch factor mapping [mi, mx] to [outi, outx]
b = outi - a * mi
print(a, b)
out = a * img + b
out[out > 255] = 255
out = np.round(out)
out = np.uint8(out)
cv2.imshow("out", out)
Hist.Hist(out, "After")
cv2.waitKey(0)


#include <opencv2/core/core.hpp>
#include <opencv2/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>

using namespace cv;
/*
void minMaxLoc(InputArray src, double* minVal, double* maxVal,
               Point* minLoc = 0, Point* maxLoc = 0, InputArray mask = noArray());
src    : input matrix
minVal : minimum value, pointer to double
maxVal : maximum value, pointer to double
minLoc : position of the minimum, pointer to Point
maxLoc : position of the maximum, pointer to Point
*/
int main()
{
    Mat img = imread("input.png", IMREAD_GRAYSCALE);  // illustrative path
    if (!img.data)
        return -1;
    double Imax, Imin;
    minMaxLoc(img, &Imin, &Imax, NULL, NULL);
    double Omax = 255, Omin = 0;
    double a = (Omax - Omin) / (Imax - Imin);
    double b = Omin - a * Imin;
    Mat out;
    convertScaleAbs(img, out, a, b);
    imshow("RAW", img);
    imshow("OUT", out);
    waitKey(0);
}

4. Normalization with normalize()

import cv2
import numpy as np
from Hist import *   # the author's own histogram-plotting helper

img = cv2.imread('input.png', cv2.IMREAD_GRAYSCALE)  # illustrative path
Hist(img, "RAW")
dst = np.zeros_like(img)   # separate output buffer, so 'raw' really shows the original
cv2.normalize(img, dst, 255, 0, cv2.NORM_MINMAX, cv2.CV_8U)
Hist(dst, "dst")
cv2.imshow('raw', img)
cv2.imshow('dst', dst)
cv2.waitKey(0)
cv2.destroyAllWindows()

#include <opencv2/core.hpp>
#include <opencv2/highgui.hpp>
#include <opencv2/imgproc.hpp>

using namespace cv;
/*
void normalize(InputArray src, OutputArray dst, double alpha = 1, double beta = 0,
               int norm_type = NORM_L2, int dtype = -1, InputArray mask = noArray());
src       : input matrix
dst       : output matrix
alpha     : norm value to normalize to, or the upper range bound for NORM_MINMAX
beta      : lower range bound for NORM_MINMAX (unused by the other norms)
norm_type : normalization type (NORM_L1, NORM_L2, NORM_MINMAX, ...)
dtype     : output type; when negative, dst gets the same type as src
*/
int main()
{
    Mat src = imread("input.png", IMREAD_GRAYSCALE);  // illustrative path
    if (!src.data)
        return -1;
    Mat dst;
    normalize(src, dst, 255, 0, NORM_MINMAX, CV_8U);
    imshow("original", src);
    imshow("normalized", dst);
    waitKey(0);
}

5. Gamma transform

#include <opencv2/opencv.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>

using namespace cv;
/*
Take the input image and first normalize its gray values to [0, 1]
(for an 8-bit image, divide by 255). Let I(r, c) be the normalized value
at row r, column c; the output O is O(r, c) = I(r, c) ^ gamma.
With gamma = 1 the image is unchanged. If the image (or the region of
interest) is dark, choosing 0 < gamma < 1 raises its contrast; if it is
bright, gamma > 1 lowers its contrast.
In short:
1. map the gray values into the [0, 1] range;
2. apply the power.
*/
int main()
{
    Mat img = imread("input.png", IMREAD_GRAYSCALE);  // illustrative path
    Mat dst, out;
    img.convertTo(dst, CV_64F, 1.0 / 255.0, 0);
    double gamma = 0.5;
    pow(dst, gamma, out);
    //std::cout << out << std::endl;
    imshow("out", out);
    waitKey(0);
    Mat tmp;
    out.convertTo(tmp, CV_8U, 255, 0);
    imwrite("out.jpg", tmp);
    system("pause");
    return 0;
}

import cv2
import numpy as np

img = cv2.imread('input.png', cv2.IMREAD_GRAYSCALE)  # illustrative path
out = img / 255.0          # normalize to [0, 1]
gamma = 0.5
out = np.power(out, gamma)
out = out * 255.0
out = np.round(out)
out = out.astype(np.uint8)

cv2.imshow('img', img)
cv2.imshow('out', out)
cv2.waitKey(0)
cv2.destroyAllWindows()
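The direction of the gamma effect is easy to confirm numerically: for a mid-dark pixel, 0 < gamma < 1 raises the output and gamma > 1 lowers it, while 0 and 255 are fixed points. A compact sketch of the same pipeline as the scripts above (function name is mine):

```python
import numpy as np

def gamma_transform(img, gamma):
    """Normalize to [0, 1], raise to the power gamma, rescale back to uint8."""
    norm = np.asarray(img, dtype=np.float64) / 255.0
    return np.round(np.power(norm, gamma) * 255.0).astype(np.uint8)
```

For example, a pixel at 64 moves toward mid-gray under gamma = 0.5 and toward black under gamma = 2.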

6. Global histogram equalization

import cv2
import numpy as np
import math
from calcGrayHist import *   # the author's own module providing Hist()
"""
1. Compute the image's gray-level histogram.
2. Compute its cumulative histogram.
3. From the cumulative histogram and the equalization principle, derive
   the mapping from input gray level to output gray level.
4. Apply that mapping to every pixel to produce the output image.
"""
def equalHist(img):
    rows, cols = img.shape
    grayHist = Hist(img)

    zeroCumuMoment = np.zeros([256], np.uint32)
    zeroCumuMoment[0] = grayHist[0]
    for i in range(1, 256):
        zeroCumuMoment[i] = zeroCumuMoment[i - 1] + grayHist[i]
    outPut_q = np.zeros([256], np.uint8)
    cofficient = 256.0 / (rows * cols)
    for i in range(256):
        q = float(zeroCumuMoment[i]) * cofficient - 1
        if q >= 0:
            outPut_q[i] = math.floor(q)
        else:
            outPut_q[i] = 0
    out = np.zeros(img.shape, np.uint8)
    for r in range(rows):
        for c in range(cols):
            out[r][c] = outPut_q[img[r][c]]

    return out

if __name__ == "__main__":
    img = cv2.imread('input.png', cv2.IMREAD_GRAYSCALE)  # illustrative path
    out = equalHist(img)
    print(out)
    cv2.imshow("raw", img)
    cv2.imshow("equalHist", out)
    cv2.waitKey(0)
    cv2.destroyAllWindows()
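The mapping step (step 3 in the recipe above) can be isolated and tested on its own: q_i = cdf_i * 256/(rows*cols) - 1, floored and clamped at 0. A pure-NumPy sketch of just that mapping (the helper name is mine); note that a uniform histogram maps every gray level to itself:

```python
import numpy as np

def equalize_map(gray_hist, rows, cols):
    """Input-to-output gray-level mapping for global histogram equalization."""
    cdf = np.cumsum(gray_hist).astype(np.float64)  # cumulative histogram
    q = cdf * (256.0 / (rows * cols)) - 1.0
    return np.clip(np.floor(q), 0, 255).astype(np.uint8)
```

An all-dark image (all mass in bin 0) is pushed straight to 255, the other extreme of the dynamic range.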

#include <opencv2/core.hpp>
#include <opencv2/highgui.hpp>
#include <opencv2/imgproc.hpp>
#include <iostream>
#include <cmath>
using namespace cv;

Mat equalHist(Mat img);
Mat calcGrayHist(Mat img);

int main()
{
    Mat img = imread("input.png", IMREAD_GRAYSCALE);  // illustrative path
    std::cout << img.type() << std::endl;
    if (!img.data)
        return -1;
    Mat out = equalHist(img);
    imshow("raw", img);
    imshow("equalHist", out);
    waitKey(0);
    return 0;
}
Mat equalHist(Mat img)
{
    CV_Assert(img.type() == CV_8UC1);  // raises an error if the expression is false
    int rows = img.rows;
    int cols = img.cols;
    // gray-level histogram
    Mat grayHist = calcGrayHist(img);
    // cumulative histogram
    Mat zeroCumuMoment = Mat::zeros(Size(256, 1), CV_32SC1);
    zeroCumuMoment.at<int>(0, 0) = grayHist.at<int>(0, 0);
    for (int i = 1; i < 256; i++)
        zeroCumuMoment.at<int>(0, i) = zeroCumuMoment.at<int>(0, i - 1) + grayHist.at<int>(0, i);
    // mapping from input gray level to output gray level
    Mat outPut_q = Mat::zeros(Size(256, 1), CV_8UC1);
    float cofficient = 256.0f / (rows * cols);
    for (int i = 0; i < 256; i++)
    {
        float q = cofficient * zeroCumuMoment.at<int>(0, i) - 1;
        if (q >= 0)
            outPut_q.at<uchar>(0, i) = uchar(floor(q));
        else
            outPut_q.at<uchar>(0, i) = 0;
    }
    // build the equalized image
    Mat equalHistImage = Mat::zeros(img.size(), CV_8UC1);
    for (int r = 0; r < rows; r++)
    {
        for (int c = 0; c < cols; c++)
        {
            int p = img.at<uchar>(r, c);
            equalHistImage.at<uchar>(r, c) = outPut_q.at<uchar>(0, p);
        }
    }
    return equalHistImage;
}
Mat calcGrayHist(Mat img)
{
    Mat cnt = Mat::zeros(Size(256, 1), CV_32SC1);
    int rows = img.rows;
    int cols = img.cols;
    for (int i = 0; i < rows; i++)
        for (int j = 0; j < cols; j++)
        {
            cnt.at<int>(0, img.at<uchar>(i, j)) += 1;
        }
    return cnt;
}

7. Contrast-limited adaptive histogram equalization (CLAHE)

#include <opencv2/core.hpp>
#include <opencv2/highgui.hpp>
#include <opencv2/imgproc.hpp>

using namespace cv;
/*
Adaptive histogram equalization first divides the image into
non-overlapping tiles and equalizes each tile's histogram separately.
Without noise, each tile's histogram is confined to a narrow band of
gray levels; with noise, the noise gets amplified. To avoid this,
"contrast limiting" clips any histogram bin that exceeds a preset limit
and redistributes the clipped amount evenly over the other bins.
OpenCV's createCLAHE builds a pointer to a CLAHE object; the default
clip limit is 40.
*/
int main()
{
    Mat src = imread("input.png", IMREAD_GRAYSCALE);  // illustrative path
    Ptr<CLAHE> clahe = createCLAHE(2.0, Size(8, 8));
    Mat dst;
    clahe->apply(src, dst);
    imshow("original", src);
    imshow("contrast enhanced", dst);
    waitKey(0);
    destroyAllWindows();
    return 0;
}

import cv2
import numpy as np

if __name__ == "__main__":
    src = cv2.imread('input.png', cv2.IMREAD_GRAYSCALE)  # illustrative path
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    dst = clahe.apply(src)
    cv2.imshow("src", src)
    cv2.imshow("clahe", dst)
    cv2.waitKey(0)
    cv2.destroyAllWindows()
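The contrast-limiting rule itself (clip each bin, spread the excess evenly) can be sketched on a toy histogram. This is a single redistribution pass for illustration only, not OpenCV's full tiled CLAHE, which also interpolates between tiles; after one pass a bin can still end up slightly above the limit, which real implementations handle by iterating.

```python
import numpy as np

def clip_histogram(hist, clip_limit):
    """Clip each bin at clip_limit and redistribute the clipped excess
    evenly over all bins; the total count is preserved."""
    hist = np.asarray(hist, dtype=np.float64)
    excess = np.sum(np.maximum(hist - clip_limit, 0.0))  # total clipped mass
    return np.minimum(hist, clip_limit) + excess / hist.size
```

For the histogram [10, 0, 2, 0] with limit 4, the excess of 6 is spread as 1.5 per bin, giving [5.5, 1.5, 3.5, 1.5].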



• [Lab exercise] Image contrast adjustment.
• A set of Matlab functions for computing local contrast statistics of an input (natural-scene) image. Local contrast is computed with multi-scale filters designed to mimic LGN receptive fields; the local-contrast magnitude map is summarized as a histogram, which is then characterized in two ways: 1) using...
• cg = b / (4 * (m - 2) * (n - 2) + 3 * (2 * (m - 2) + 2 * (n - 2)) + 2 * 4)  # the same count as the 48 above; plus a brightness() helper built on PIL's ImageStat: im = Image.open(im_file).convert('L'); stat = ImageStat.Stat(im) ...
• To address the shortcomings of local band-limited contrast, this work proposes a contrast-sensitivity-based image contrast metric: the low-pass-filtered image is decomposed with a fast wavelet transform, the wavelet coefficients at each level are processed into band-pass filtered images and their corresponding low-pass images, and Peli...
• Image contrast is the perceived difference in colour and brightness: the larger the contrast, the more an object stands out from its surroundings, and vice versa. Roughly, contrast adjustment works as follows (with a user-supplied contrast coefficient in [-100, 100]): 1) read each RGB pixel value Prgb, ...
• 64 frames of .bin-format images, each exposed for 1 ms, are combined into 32 images at 2 ms, 16 at 4 ms, 8 at 8 ms, 4 at 16 ms, 2 at 32 ms and 1 at 64 ms; a contrast value (variance/mean) is computed for each exposure time, and then a formula gives...
• Image contrast enhancement. 1. Linear transform: process gray values with y = ax + b; for an overly dark image with gray levels in [0, 100], a = 2, b = 10 stretches the range to [10, 210]. a and b must be chosen per image. cv2.convertScaleAbs(img, alpha=1.5, ...
• Part 1: raising image contrast and brightness; Part 2: code; Part 3: results. Generally, image transforms fall into two classes: (1) point-wise pixel transforms, ...
## Colour Image Contrast

2015-10-14 13:58:48
Definition of contrast: simply put, making bright... The most common adjustment found online works on the image's gray levels. The algorithm: 1. compute the mean brightness; 2. take each pixel's difference from the mean; 3. new brightness = mean brightness + coefficient *
• Computing image contrast with gray-level difference statistics.
• Brightness and contrast of digital images. Brightness: for a uint8 grayscale matrix A and a constant -255 <= c <= 255, A + c adjusts brightness; results above 255 saturate to 255 and results below 0 to 0. Contrast: for a uint8 gray...
• Mask operations to raise image contrast. 1. Get a pointer to the image pixels: CV_Assert(src.depth() == CV_8U); Mat.ptr(int i = 0) gets... A mask operation recomputes each pixel from a mask matrix (the mask, also called the kernel), raising image contrast.
• Contrast enhancement of sonar images with sliding-neighbourhood operations; the focus is data-parallel programming in MATLAB, using distributed arrays to design a parallel enhancement algorithm for a cluster. Experiments show that MATLAB's built-in functions make the parallel version easy to implement and effectively improve...

...