  • watershed algorithm

    2011-11-20 00:02:53
    Watershed code: good study material and a classic implementation; grab it while you can.
  • Watershed algorithm

    2017-06-01 20:19:25
    First gather some references, then analyze and implement. References: http://cmm.ensmp.fr/~beucher/wtshed....https://cn.mathworks.com/company/newsletters/articles/the-watershed-transform-strategies-for-image-segmentation.html
  • Any grayscale image can be viewed as a topographic surface, where high intensities are peaks and low intensities are valleys. As each isolated valley is flooded with differently colored water (labels) and the water level rises, water from different valleys will clearly start to... OpenCV implements a marker-based watershed algorithm that lets you specify...

1 Theory

      Any grayscale image can be viewed as a topographic surface, where high intensities are peaks and low intensities are valleys. If each isolated valley is flooded with differently colored water (labels), then as the water level rises, water of different colors from different valleys will start to merge.
      To prevent this, barriers are built where the waters are about to meet. Flooding and barrier building continue until every peak is under water. The barriers that have been built give the segmentation result; this is the idea behind watershed.
      In practice, however, this approach over-segments because of noise and other irregularities in the image. OpenCV therefore implements a marker-based watershed, in which you specify which valley points should be merged and which should not.
      It is an interactive image segmentation; what we do is assign different labels to the objects we know about:
      1) mark the region we are sure is foreground (the object) with one color or intensity;
      2) mark the region we are sure is background (non-object) with another color;
      3) mark the regions we are not sure about with zero;
      4) apply the watershed algorithm; object boundaries will be labeled -1.

2 Test image

3 Finding a rough estimate of the coins

      The coins in the test image touch each other; thresholding alone does not change that:

    import cv2 as cv
    import numpy as np

    img = cv.imread('coins.png')
    gray = cv.cvtColor(img, cv.COLOR_BGR2GRAY)
    # Otsu threshold, inverted so the coins become the white foreground
    ret, thresh = cv.threshold(gray, 0, 255, cv.THRESH_BINARY_INV + cv.THRESH_OTSU)
    # Show the original and the thresholded image side by side
    thresh = cv.merge([thresh, thresh, thresh])
    img_thresh = np.hstack([img, thresh])
    cv.imshow("", img_thresh)
    cv.waitKey()
    

      Output:

4 Removing white noise from the image

      Small white noise in the image can be removed with morphological opening, and small holes in the objects can be closed with morphological closing.
      So we can already be certain that the regions near the centers of the objects are foreground and the regions far from them are background. The only remaining uncertainty is the boundary region:

    # coding: utf-8
    import cv2 as cv
    import numpy as np

    img = cv.imread('coins.png')
    gray = cv.cvtColor(img, cv.COLOR_BGR2GRAY)
    _, thresh = cv.threshold(gray, 0, 255, cv.THRESH_BINARY_INV + cv.THRESH_OTSU)

    """Remove noise"""
    kernel = np.ones((3, 3), np.uint8)
    opening = cv.morphologyEx(thresh, cv.MORPH_OPEN, kernel, iterations=2)

    """Sure background region"""
    sure_bg = cv.dilate(opening, kernel, iterations=3)

    """Sure foreground region"""
    dist_transform = cv.distanceTransform(opening, cv.DIST_L2, 5)
    _, sure_fg = cv.threshold(dist_transform, 0.7*dist_transform.max(), 255, 0)
    sure_fg = np.uint8(sure_fg)

    """Unknown region"""
    unknown = cv.subtract(sure_bg, sure_fg)

    sure_fg = cv.merge([sure_fg, sure_fg, sure_fg])
    sure_bg = cv.merge([sure_bg, sure_bg, sure_bg])
    unknown = cv.merge([unknown, unknown, unknown])
    ret_img = np.hstack((img, sure_fg, sure_bg, unknown))
    cv.imshow("", ret_img)
    cv.waitKey()
    

      Output:

5 Watershed

      Now that the foreground, background, and unknown regions are known, we use cv.connectedComponents() to label them: it marks the background with 0 and every other connected component with an integer starting from 1.
      For the watershed algorithm, however, a label of 0 means "unknown", so we shift all labels up by one and then mark the unknown region with 0, leaving the rest as positive integers:

    # coding: utf-8
    import cv2 as cv
    import numpy as np

    img = cv.imread('coins.png')
    gray = cv.cvtColor(img, cv.COLOR_BGR2GRAY)
    _, thresh = cv.threshold(gray, 0, 255, cv.THRESH_BINARY_INV + cv.THRESH_OTSU)

    """Remove noise"""
    kernel = np.ones((3, 3), np.uint8)
    opening = cv.morphologyEx(thresh, cv.MORPH_OPEN, kernel, iterations=2)

    """Sure background region"""
    sure_bg = cv.dilate(opening, kernel, iterations=3)

    """Sure foreground region"""
    dist_transform = cv.distanceTransform(opening, cv.DIST_L2, 5)
    _, sure_fg = cv.threshold(dist_transform, 0.7*dist_transform.max(), 255, 0)
    sure_fg = np.uint8(sure_fg)

    """Unknown region"""
    unknown = cv.subtract(sure_bg, sure_fg)

    """Markers"""
    _, markers = cv.connectedComponents(sure_fg)
    markers = markers + 1
    markers[unknown == 255] = 0

    """Watershed"""
    img_copy = img.copy()
    markers = cv.watershed(img_copy, markers)
    img_copy[markers == -1] = [255, 0, 0]

    ret_img = np.hstack((img, img_copy))

    cv.imshow("", ret_img)
    cv.waitKey()
    
    

      Output:

  • An example of using the distance transform and watershed to segment objects that touch each other, starting from an Otsu binarization to find a rough estimate of the objects. Code example: from matplotlib import pyplot as plt import numpy as np import ...

    Here we use an example of segmenting touching objects with the distance transform and watershed to illustrate how the watershed algorithm is used.

    1. Use Otsu binarization to find a rough estimate of the objects

    Code example:

    from matplotlib import pyplot as plt
    import numpy as np
    import cv2 as cv
    img = cv.imread("C:\\Users\\dell\\Desktop\\prac files\\prac18.jpg")
    img_gray = cv.cvtColor(img, cv.COLOR_BGR2GRAY)
    ret, thresh = cv.threshold(img_gray, 0, 255, cv.THRESH_BINARY_INV+cv.THRESH_OTSU)
    cv.imshow("thresh", thresh)
    
    2. Determine the sure-background and sure-foreground (object) regions, and from them the unknown region

    Code example:

    # Remove noise
    kernel = np.ones((3, 3), np.uint8)
    opening = cv.morphologyEx(thresh, cv.MORPH_OPEN, kernel, iterations=2)
    # Sure background region (dilate the de-noised image)
    sure_bg = cv.dilate(opening, kernel, iterations=3)
    # Sure foreground region
    dist_transform = cv.distanceTransform(opening, cv.DIST_L2, 5)
    ret2, sure_fg = cv.threshold(dist_transform, 0.7*dist_transform.max(), 255, 0)
    # Unknown region
    sure_fg = np.uint8(sure_fg)
    unknown = cv.subtract(sure_bg, sure_fg)
    

    Note that because the objects' boundaries touch, the sure-foreground region cannot be obtained simply by erosion; instead we compute the distance transform and apply a suitable threshold to it.

    3. Label the background, foreground, and unknown regions of the image

    This is done mainly with the cv.connectedComponents() function.

    num_labels, labels = cv.connectedComponents(image)
    

    The parameter and return values are:

    • image: the input image; must be a binary, 8-bit single-channel image
    • num_labels: the number of labels (connected components, including the background)
    • labels: a label for every pixel; the background is 0 and each connected component gets a distinct number 1, 2, 3, ...

    Here it is the unknown region, not the background, that must be labeled 0; otherwise watershed would treat the background as unknown.

    Code example:

    # Label the components
    ret, markers = cv.connectedComponents(sure_fg)
    # Add 1 to every label so the sure background is 1 instead of 0
    markers = markers + 1
    # Now mark the unknown region with 0
    markers[unknown == 255] = 0
    
    4. Segment with the watershed algorithm

    This is done with the cv.watershed() function; the marker image passed in is modified, and boundary regions are labeled -1.

    markers=cv.watershed(image, markers)
    

    The parameters are:

    • image: the input 8-bit, 3-channel image
    • markers: the input/output 32-bit single-channel marker image, the same size as image

    Code example:

    # Segment with the watershed algorithm
    markers = cv.watershed(img, markers)
    img[markers == -1] = [255, 0, 0]
    
  • Image segmentation: a worked C++ example that uses cv::filter2D for Laplacian sharpening, cv::distanceTransform, and cv::watershed to isolate objects from the background

    Image segmentation

    We learn OpenCV for engineering applications; learning without applying is learning in vain. Below we work through an image-segmentation example to consolidate what we have learned.

    Goals

    • Learn to use cv::filter2D to apply Laplacian filtering and sharpen an image
    • Learn to use cv::distanceTransform to obtain a derived representation of a binary image, in which the value of each pixel is replaced by its distance to the nearest background pixel
    • Learn to use cv::watershed to isolate objects from the background

    Code

    #include <opencv2/opencv.hpp>
    #include <iostream>
    using namespace std;
    using namespace cv;
    int main(int, char** argv)
    {
        // Load the image
        Mat src = imread(argv[1]);
        // Check if everything was fine
        if (!src.data)
            return -1;
        // Show source image
        imshow("Source Image", src);
        // Change the background from white to black, since that will help later to extract
        // better results during the use of Distance Transform
        for( int x = 0; x < src.rows; x++ ) {
          for( int y = 0; y < src.cols; y++ ) {
              if ( src.at<Vec3b>(x, y) == Vec3b(255,255,255) ) {
                src.at<Vec3b>(x, y)[0] = 0;
                src.at<Vec3b>(x, y)[1] = 0;
                src.at<Vec3b>(x, y)[2] = 0;
              }
            }
        }
        // Show output image
        imshow("Black Background Image", src);
    // Create a kernel that we will use for sharpening our image
        Mat kernel = (Mat_<float>(3,3) <<
                1,  1, 1,
                1, -8, 1,
                1,  1, 1); // an approximation of second derivative, a quite strong kernel
        // do the laplacian filtering as it is
    // well, we need to convert everything to something deeper than CV_8U
        // because the kernel has some negative values,
        // and we can expect in general to have a Laplacian image with negative values
        // BUT a 8bits unsigned int (the one we are working with) can contain values from 0 to 255
        // so the possible negative number will be truncated
        Mat imgLaplacian;
        Mat sharp = src; // copy source image to another temporary one
        filter2D(sharp, imgLaplacian, CV_32F, kernel);
        src.convertTo(sharp, CV_32F);
        Mat imgResult = sharp - imgLaplacian;
        // convert back to 8bits gray scale
        imgResult.convertTo(imgResult, CV_8UC3);
        imgLaplacian.convertTo(imgLaplacian, CV_8UC3);
        // imshow( "Laplace Filtered Image", imgLaplacian );
        imshow( "New Sharped Image", imgResult );
        src = imgResult; // copy back
        // Create binary image from source image
        Mat bw;
        cvtColor(src, bw, CV_BGR2GRAY);
        threshold(bw, bw, 40, 255, CV_THRESH_BINARY | CV_THRESH_OTSU);
        imshow("Binary Image", bw);
        // Perform the distance transform algorithm
        Mat dist;
        distanceTransform(bw, dist, CV_DIST_L2, 3);
        // Normalize the distance image for range = {0.0, 1.0}
        // so we can visualize and threshold it
        normalize(dist, dist, 0, 1., NORM_MINMAX);
        imshow("Distance Transform Image", dist);
        // Threshold to obtain the peaks
        // This will be the markers for the foreground objects
        threshold(dist, dist, .4, 1., CV_THRESH_BINARY);
        // Dilate a bit the dist image
        Mat kernel1 = Mat::ones(3, 3, CV_8UC1);
        dilate(dist, dist, kernel1);
        imshow("Peaks", dist);
        // Create the CV_8U version of the distance image
        // It is needed for findContours()
        Mat dist_8u;
        dist.convertTo(dist_8u, CV_8U);
        // Find total markers
        vector<vector<Point> > contours;
        findContours(dist_8u, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);
        // Create the marker image for the watershed algorithm
        Mat markers = Mat::zeros(dist.size(), CV_32SC1);
        // Draw the foreground markers
        for (size_t i = 0; i < contours.size(); i++)
            drawContours(markers, contours, static_cast<int>(i), Scalar::all(static_cast<int>(i)+1), -1);
        // Draw the background marker
        circle(markers, Point(5,5), 3, CV_RGB(255,255,255), -1);
        imshow("Markers", markers*10000);
        // Perform the watershed algorithm
        watershed(src, markers);
        Mat mark = Mat::zeros(markers.size(), CV_8UC1);
        markers.convertTo(mark, CV_8UC1);
        bitwise_not(mark, mark);
    //    imshow("Markers_v2", mark); // uncomment this if you want to see how the mark
                                      // image looks like at that point
        // Generate random colors
        vector<Vec3b> colors;
        for (size_t i = 0; i < contours.size(); i++)
        {
            int b = theRNG().uniform(0, 255);
            int g = theRNG().uniform(0, 255);
            int r = theRNG().uniform(0, 255);
            colors.push_back(Vec3b((uchar)b, (uchar)g, (uchar)r));
        }
        // Create the result image
        Mat dst = Mat::zeros(markers.size(), CV_8UC3);
        // Fill labeled objects with random colors
        for (int i = 0; i < markers.rows; i++)
        {
            for (int j = 0; j < markers.cols; j++)
            {
                int index = markers.at<int>(i,j);
                if (index > 0 && index <= static_cast<int>(contours.size()))
                    dst.at<Vec3b>(i,j) = colors[index-1];
                else
                    dst.at<Vec3b>(i,j) = Vec3b(0,0,0);
            }
        }
        // Visualize the final image
        imshow("Final Result", dst);
        waitKey(0);
    return 0;
}
    Code explanation

    1. Load the image from a file, check it, and display it.
        // Load the image
        Mat src = imread(argv[1]);
        // Check if everything was fine
        if (!src.data)
            return -1;
        // Show source image
        imshow("Source Image", src);

    2. If the image background is white, it is best to convert it to black; this helps the distance transform separate the foreground objects later. (This operation is crude, since images are rarely a pure, uniform color.)

        // Change the background from white to black, since that will help later to extract
        // better results during the use of Distance Transform
        for( int x = 0; x < src.rows; x++ ) {
          for( int y = 0; y < src.cols; y++ ) {
              if ( src.at<Vec3b>(x, y) == Vec3b(255,255,255) ) {
                src.at<Vec3b>(x, y)[0] = 0;
                src.at<Vec3b>(x, y)[1] = 0;
                src.at<Vec3b>(x, y)[2] = 0;
              }
            }
        }
        // Show output image
        imshow("Black Background Image", src);

    3. Next, sharpen the image with a Laplacian filter to strengthen the edges of the foreground objects.

    // Create a kernel that we will use for sharpening our image
        Mat kernel = (Mat_<float>(3,3) <<
                1,  1, 1,
                1, -8, 1,
                1,  1, 1); // an approximation of second derivative, a quite strong kernel
        // do the laplacian filtering as it is
    // well, we need to convert everything to something deeper than CV_8U
        // because the kernel has some negative values,
        // and we can expect in general to have a Laplacian image with negative values
        // BUT a 8bits unsigned int (the one we are working with) can contain values from 0 to 255
        // so the possible negative number will be truncated
        Mat imgLaplacian;
        Mat sharp = src; // copy source image to another temporary one
        filter2D(sharp, imgLaplacian, CV_32F, kernel);
        src.convertTo(sharp, CV_32F);
        Mat imgResult = sharp - imgLaplacian;
        // convert back to 8bits gray scale
        imgResult.convertTo(imgResult, CV_8UC3);
        imgLaplacian.convertTo(imgLaplacian, CV_8UC3);
        // imshow( "Laplace Filtered Image", imgLaplacian );
        imshow( "New Sharped Image", imgResult );

    4. Convert to grayscale and binarize.

        // Create binary image from source image
        Mat bw;
        cvtColor(src, bw, CV_BGR2GRAY);
        threshold(bw, bw, 40, 255, CV_THRESH_BINARY | CV_THRESH_OTSU);
        imshow("Binary Image", bw);

    5. Apply the distance transform to the binary image, then normalize it so it can be visualized and thresholded.

        // Perform the distance transform algorithm
        Mat dist;
        distanceTransform(bw, dist, CV_DIST_L2, 3);
        // Normalize the distance image for range = {0.0, 1.0}
        // so we can visualize and threshold it
        normalize(dist, dist, 0, 1., NORM_MINMAX);
        imshow("Distance Transform Image", dist);

    6. Threshold the distance image to obtain the peaks, then dilate it slightly.

        // Threshold to obtain the peaks
        // This will be the markers for the foreground objects
        threshold(dist, dist, .4, 1., CV_THRESH_BINARY);
        // Dilate a bit the dist image
        Mat kernel1 = Mat::ones(3, 3, CV_8UC1);
        dilate(dist, dist, kernel1);
        imshow("Peaks", dist);

    7. Create the markers for the watershed algorithm from each of the small blobs.

        // Create the CV_8U version of the distance image
        // It is needed for findContours()
        Mat dist_8u;
        dist.convertTo(dist_8u, CV_8U);
        // Find total markers
        vector<vector<Point> > contours;
        findContours(dist_8u, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);
        // Create the marker image for the watershed algorithm
        Mat markers = Mat::zeros(dist.size(), CV_32SC1);
        // Draw the foreground markers
        for (size_t i = 0; i < contours.size(); i++)
            drawContours(markers, contours, static_cast<int>(i), Scalar::all(static_cast<int>(i)+1), -1);
        // Draw the background marker
        circle(markers, Point(5,5), 3, CV_RGB(255,255,255), -1);
        imshow("Markers", markers*10000);

    8. Finally, run the watershed algorithm and visualize the result.

        // Perform the watershed algorithm
        watershed(src, markers);
        Mat mark = Mat::zeros(markers.size(), CV_8UC1);
        markers.convertTo(mark, CV_8UC1);
        bitwise_not(mark, mark);
    //    imshow("Markers_v2", mark); // uncomment this if you want to see how the mark
                                      // image looks like at that point
        // Generate random colors
        vector<Vec3b> colors;
        for (size_t i = 0; i < contours.size(); i++)
        {
            int b = theRNG().uniform(0, 255);
            int g = theRNG().uniform(0, 255);
            int r = theRNG().uniform(0, 255);
            colors.push_back(Vec3b((uchar)b, (uchar)g, (uchar)r));
        }
        // Create the result image
        Mat dst = Mat::zeros(markers.size(), CV_8UC3);
        // Fill labeled objects with random colors
        for (int i = 0; i < markers.rows; i++)
        {
            for (int j = 0; j < markers.cols; j++)
            {
                int index = markers.at<int>(i,j);
                if (index > 0 && index <= static_cast<int>(contours.size()))
                    dst.at<Vec3b>(i,j) = colors[index-1];
                else
                    dst.at<Vec3b>(i,j) = Vec3b(0,0,0);
            }
        }
        // Visualize the final image
        imshow("Final Result", dst);
    
  • We will learn marker-based image segmentation with the watershed algorithm; we will see cv.watershed(). Theory: any grayscale image can be viewed as a topographic surface, where high intensities are peaks and low intensities are valleys. You start filling each isolated valley with differently colored water (labels)...
  • marke) cv.waitKey(0) markers = cv.watershed(img,markers) # watershed: the first argument is the image, the second the marker map; # the contours passed in markers are used as seeds (the so-called flooding points), # and the other pixels of the image are flooded by the watershed...
  • Goal: in this chapter we will learn marker-based image segmentation with the watershed algorithm; we will see cv.watershed(). Theory: any grayscale image can be viewed as a topographic surface, where high intensities...
  • There are many image-segmentation methods; the watershed algorithm is a region-based one that, because it is easy to implement, has been widely applied in medical imaging, pattern recognition, and other fields. Basic principle of the classical watershed algorithm: the classic formulation is the one proposed by L. Vincent in...
