  • python+opencv lane line detection (simple implementation). Tech stack: python + opencv. Approach: Canny edge detection to extract the edge information in the image; Hough transform to find the straight lines; draw a trapezoidal region of interest to isolate the area in front of the car; obtain and draw the lane lines. Demo: Code implementation: ...
  • Lane line detection based on OpenCV

    2018-04-16 11:17:25
    Lane line detection code based on VC++ 6.0; processes video in real time and annotates the lane lines.
  • opencv lane line detection

    2020-11-23 10:35:01
    Built on Visual Studio 2015 with OpenCV configured through Qt; implements a road-feature-based lane line detection method for video.
  • Lane detection and vehicle recognition code developed with OpenCV, including source code, object code, and a demo video.
  • Describes how to implement lane line detection with OpenCV; the sample code in the article is explained in detail and should be a useful reference for study or work.
  • Lane line detection based on OpenCV
  • Lane line detection based on OpenCV; can be used to recognize lane lines. A useful reference for vision and ADAS development.
  • # Step 1: Edge detection def canyEdgeDetector(image): edged = cv2.Canny(image, 50, 150) return edged # Step 2: Define the ROI (region of interest) def getROI(image): height = image.shape[0] width = image.shape[1] # ...


    import cv2
    import numpy as np
    
    # Step 1: Edge detection
    def canyEdgeDetector(image):
        edged = cv2.Canny(image, 50, 150)
        return edged
    
    
    # Step 2: Define the ROI (region of interest)
    def getROI(image):
        height = image.shape[0]
        width = image.shape[1]
        # Defining Triangular ROI: The values will change as per your camera mounts
        triangle = np.array([[(100, height), (width, height), (width-300, int(height/1.9))]])
        # creating black image same as that of input image
    
    
        black_image = np.zeros_like(image)
        # Put the Triangular shape on top of our Black image to create a mask
        mask = cv2.fillPoly(black_image, triangle, 255)
        # applying mask on original image
        masked_image = cv2.bitwise_and(image, mask)
        return masked_image
    
    
    # Step 3: Get all the line segments in the image
    def getLines(image):
        # lines = cv2.HoughLinesP(image, rho, theta, threshold, dummy 2d array -- unused, minLineLength, maxLineGap)
        # rho -- distance resolution of the accumulator; here 1 pixel
        # theta -- angle resolution; here 1 degree = pi/180 radians
        # threshold -- minimum number of votes a bin needs to be accepted as a line
        # minLineLength -- the minimum length in pixels a segment must have to be accepted
        # maxLineGap -- the maximum gap between two broken segments that still allows them to be connected into one line
        lines = cv2.HoughLinesP(image, 1, np.pi / 180, 100, np.array([]), minLineLength=70, maxLineGap=20)
        #lines = cv2.HoughLinesP(image, 1, np.pi / 180, 100, 10, 100)
        return lines
    
    
    # Display detected lines over an image
    def displayLines(image, lines):
        if lines is not None:
            for line in lines:
                # print(line) --output like [[704 418 927 641]] this is 2d array representing [[x1,y1,x2,y2]] for each line
                x1, y1, x2, y2 = line.reshape(4)  # converting to 1d array []
    
                # draw the line on the image -- (255, 0, 0) is blue in BGR order, 10 is the line thickness
                cv2.line(image, (x1, y1), (x2, y2), (255, 0, 0), 10)
        return image
    
    
    
    def getLineCoordinatesFromParameters(image, line_parameters):
        slope = line_parameters[0]
        intercept = line_parameters[1]
        y1 = image.shape[0]  # since line will always start from bottom of image
        y2 = int(y1 * (3.4 / 5))  # end the line roughly two-thirds of the way up the image
        x1 = int((y1 - intercept) / slope)
        x2 = int((y2 - intercept) / slope)
        return np.array([x1, y1, x2, y2])
    
    
    
    # Averages all the left and right segments found for a lane and returns a single left and right line for the lane
    def getSmoothLines(image, lines):
        left_fit = []  # will hold m,c parameters for left side lines
        right_fit = []  # will hold m,c parameters for right side lines
    
        for line in lines:
            x1, y1, x2, y2 = line.reshape(4)
            # polyfit gives slope(m) and intercept(c) values from input points
            # last parameter 1 is for linear..so it will give linear parameters m,c
            parameters = np.polyfit((x1, x2), (y1, y2), 1)
            slope = parameters[0]
            intercept = parameters[1]
    
            if slope < 0:  # in image coordinates y grows downward, so left-lane segments have a negative slope
                left_fit.append((slope, intercept))
            else:
                right_fit.append((slope, intercept))
    
        # average all the slopes and intercepts separately to get one (slope, intercept) pair per side
        # axis=0 averages down the rows, i.e. over all the segments
        left_fit_average = np.average(left_fit, axis=0)
        right_fit_average = np.average(right_fit, axis=0)
    
        # now we have got m,c parameters for left and right line, we need to know x1,y1 x2,y2 parameters
        left_line = getLineCoordinatesFromParameters(image, left_fit_average)
        right_line = getLineCoordinatesFromParameters(image, right_fit_average)
        return np.array([left_line, right_line])
    
    
    
    def show_image(name, image):
        cv2.imshow(name, image)
        # cv2.waitKey(0)
    
    
    
    image = cv2.imread("lane.jpg") #Load Image
    print(image.shape)
    show_image('image', image)
    
    edged_image = canyEdgeDetector(image)   # Step 1
    print(edged_image.shape)
    show_image('edged_image', edged_image)
    
    roi_image = getROI(edged_image)         # Step 2
    print(roi_image.shape)
    show_image('roi_image', roi_image)
    
    lines = getLines(roi_image)             # Step 3
    print(lines)
    #image_with_lines = displayLines(image, lines)
    
    
    smooth_lines = getSmoothLines(image, lines)    # Step 4: average into one left and one right line
    print(smooth_lines)
    image_with_smooth_lines = displayLines(image, smooth_lines) # Step 5: draw them on the image
    
    cv2.imshow("Output", image_with_smooth_lines)
    cv2.waitKey(0)
    
    
    


    import cv2
    import numpy as np
    
    def canyEdgeDetector(image):
        edged = cv2.Canny(image, 50, 150)
        return edged
    
    
    def getROI(image):
        height = image.shape[0]
        width = image.shape[1]
        print(height, width)  # 720 1280
        # Defining Triangular ROI: The values will change as per your camera mounts
        triangle=np.array([[(200, height), (1100, height), (550, 250)]])
        # creating black image same as that of input image
        black_image = np.zeros_like(image)
        # Put the Triangular shape on top of our Black image to create a mask
        mask = cv2.fillPoly(black_image, triangle, 255)
        # applying mask on original image
        masked_image = cv2.bitwise_and(image, mask)
        return masked_image
    
    
    
    def getLines(image):
        # lines = cv2.HoughLinesP(image, rho, theta, threshold, dummy 2d array -- unused, minLineLength, maxLineGap)
        # rho -- distance resolution of the accumulator; here 2 pixels
        # theta -- angle resolution; here 1 degree = pi/180 radians
        # threshold -- minimum number of votes a bin needs to be accepted as a line
        # minLineLength -- the minimum length in pixels a segment must have to be accepted
        # maxLineGap -- the maximum gap between two broken segments that still allows them to be connected into one line
        lines = cv2.HoughLinesP(image, 2, np.pi / 180, 100, np.array([]), minLineLength=40, maxLineGap=5)
        return lines
    
    
    # Display detected lines over an image
    def displayLines(image, lines):
        if lines is not None:
            for line in lines:
                # print(line) --output like [[704 418 927 641]] this is 2d array representing [[x1,y1,x2,y2]] for each line
                x1, y1, x2, y2 = line.reshape(4)  # converting to 1d array []
    
                # draw the line on the image -- (255, 0, 0) is blue in BGR order, 10 is the line thickness
                cv2.line(image, (x1, y1), (x2, y2), (255, 0, 0), 10)
        return image
    
    
    
    def getLineCoordinatesFromParameters(image, line_parameters):
        slope = line_parameters[0]
        intercept = line_parameters[1]
        y1 = image.shape[0]  # since line will always start from bottom of image
        y2 = int(y1 * (3.4 / 5))  # end the line roughly two-thirds of the way up the image
        x1 = int((y1 - intercept) / slope)
        x2 = int((y2 - intercept) / slope)
        return np.array([x1, y1, x2, y2])
    
    
    
    # Averages all the left and right segments found for a lane and returns a single left and right line for the lane
    def getSmoothLines(image, lines):
        left_fit = []  # will hold m,c parameters for left side lines
        right_fit = []  # will hold m,c parameters for right side lines
    
        for line in lines:
            x1, y1, x2, y2 = line.reshape(4)
            # polyfit gives slope(m) and intercept(c) values from input points
            # last parameter 1 is for linear..so it will give linear parameters m,c
            parameters = np.polyfit((x1, x2), (y1, y2), 1)
            slope = parameters[0]
            intercept = parameters[1]
    
            if slope < 0:  # in image coordinates y grows downward, so left-lane segments have a negative slope
                left_fit.append((slope, intercept))
            else:
                right_fit.append((slope, intercept))
    
        # average all the slopes and intercepts separately to get one (slope, intercept) pair per side
        # axis=0 averages down the rows, i.e. over all the segments
        left_fit_average = np.average(left_fit, axis=0)
        right_fit_average = np.average(right_fit, axis=0)
    
        # now we have got m,c parameters for left and right line, we need to know x1,y1 x2,y2 parameters
        left_line = getLineCoordinatesFromParameters(image, left_fit_average)
        right_line = getLineCoordinatesFromParameters(image, right_fit_average)
        return np.array([left_line, right_line])
    
    
    
    
    
    
    videoFeed = cv2.VideoCapture("test_video.mp4")
    
    while videoFeed.isOpened():
        status, image = videoFeed.read()
        if not status:  # end of the video, or a failed read
            break
    
        edged_image = canyEdgeDetector(image)   # Step 1
        roi_image = getROI(edged_image)         # Step 2
        lines = getLines(roi_image)             # Step 3
        #image_with_lines = displayLines(image, lines)
    
        try:
            smooth_lines = getSmoothLines(image, lines)                   # Step 4
            image_with_smooth_lines = displayLines(image, smooth_lines)   # Step 5
        except Exception:
            # no usable lane segments in this frame; show it unannotated
            image_with_smooth_lines = image
    
        cv2.imshow("Output", image_with_smooth_lines)
        if cv2.waitKey(1) & 0xFF == ord('q'):  # press q to stop early
            break
    
    videoFeed.release()
    cv2.destroyAllWindows()


  • OpenCV lane line detection. Input: a road image captured by a camera, which must contain lane lines, as shown below (you can simply save the image as a .jpg to use it). Output: the line equations of the left and right lane lines in image coordinates and their valid extent. The equations of the left and right lane lines ...

    OpenCV lane line detection
    Input
    A road image captured by a camera that contains lane lines, as shown below (you can simply save the image as a .jpg to use it).
    [figure: sample road image]
    Output
    The line equations of the left and right lane lines in image coordinates and their valid extent. Drawing the left and right lane line equations onto the original image should look like the figure below.
    [figure: lane lines drawn onto the original image]
    Original image
    Before working with images, recall a bit of physics: the three primary colors of light are red, green, and blue. Mixing the three primaries in different proportions produces the different visible colors, as shown in the figure. [figure: mixing of the RGB primaries]
    Every pixel in an image is made up of the three RGB (red, green, blue) color channels. To make it easier to describe RGB col...

  • opencv lane line detection

    2013-12-13 16:47:44
    OpenCV lane line detection code based on an improved Hough transform, with related literature.
  • OpenCv车道线检测.zip

    2021-08-18 16:11:47
    OpenCv车道线检测.zip
  • opencv lane line detection (Part 1)

    2018-07-01 10:40:49
  • Implementing opencv lane line detection

    2018-08-26 20:36:27
    Key OpenCV function: CvSeq* cvHoughLines2( CvArr* image, void* line_storage, int method, double rho, double theta, int threshold, double param1=0, double param2=0 ); image: the input 8-bit, single-channel (binary ...

    Key OpenCV function:

    CvSeq* cvHoughLines2( CvArr* image, void* line_storage, int method, double rho, double theta, int threshold, double param1=0, double param2=0 );

    image

    The input 8-bit, single-channel (binary) image. When detection uses the CV_HOUGH_PROBABILISTIC method, its contents are modified by the function.

    line_storage

    Storage for the detected lines. It can be a memory storage (in which case a sequence of lines is created in the storage and returned by the function), or a single-row/single-column matrix (CvMat*) of a special type (see below) that holds the line parameters. The matrix header is modified by the function so that its cols/rows contain the number of detected lines. If line_storage is a matrix and the actual number of lines exceeds the matrix size, the maximum possible number of lines is returned (with the standard Hough transform the lines are output in descending order of length).

    method

    The Hough transform variant, one of the following:

    • CV_HOUGH_STANDARD - the classical or standard Hough transform. Every line is represented by two floating-point numbers (ρ, θ), where ρ is the distance from the line to the origin (0,0) and θ is the angle between the line and the x-axis. The matrix must therefore be of type CV_32FC2.
    • CV_HOUGH_PROBABILISTIC - the probabilistic Hough transform (more efficient if the image contains a few long linear segments). It returns line segments rather than whole lines; each segment is represented by its start and end points, so the matrix (or created sequence) type is CV_32SC4.
    • CV_HOUGH_MULTI_SCALE - a multi-scale variant of the classical Hough transform. Lines are encoded the same way as with CV_HOUGH_STANDARD.

    rho

    Distance resolution in pixel-related units.

    theta

    Angle resolution measured in radians.

    threshold

    Threshold parameter. The function returns a line only if its accumulator value is greater than threshold.

    param1

    The first method-dependent parameter:

    • For the classical Hough transform, not used (0).
    • For the probabilistic Hough transform, it is the minimum line length.
    • For the multi-scale Hough transform, it is the divisor of the distance resolution rho (the coarse distance resolution is rho and the accurate one is rho / param1).

    param2

    The second method-dependent parameter:

    • For the classical Hough transform, not used (0).
    • For the probabilistic Hough transform, it is the maximum gap between collinear segments that may still be joined into a single line; i.e. two broken segments on the same line are merged when the gap between them is smaller than param2.
    • For the multi-scale Hough transform, it is the divisor of the angle resolution theta (the coarse angle resolution is theta and the accurate one is theta / param2).
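
    For reference (not part of the original article): in the modern Python API the probabilistic variant corresponds to cv2.HoughLinesP, where param1 and param2 are exposed as the named arguments minLineLength and maxLineGap. A minimal sketch, assuming a hypothetical input image "road.jpg":

    import cv2
    import numpy as np
    
    # hypothetical input; Canny produces the 8-bit single-channel (binary) image the transform expects
    edges = cv2.Canny(cv2.imread("road.jpg", cv2.IMREAD_GRAYSCALE), 50, 150)
    # cvHoughLines2(img, storage, CV_HOUGH_PROBABILISTIC, rho, theta, threshold, param1, param2)
    # maps roughly onto:
    lines = cv2.HoughLinesP(edges,
                            rho=1,               # distance resolution in pixels
                            theta=np.pi / 180,   # angle resolution in radians
                            threshold=100,       # accumulator votes needed to accept a line
                            minLineLength=40,    # param1 of the probabilistic method
                            maxLineGap=20)       # param2 of the probabilistic method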

    Code: a complete OpenCV implementation (legacy C API)

    #include<cv.h>
    #include<cxcore.h>
    #include<highgui.h>
    
    #include<cstdio>
    #include<iostream>
    using namespace std;
    
    int main(){
    	// Declare IplImage pointers
    	IplImage* pFrame = NULL;
    	IplImage* pCutFrame = NULL;
    	IplImage* pCutFrImg = NULL;
    	IplImage* pCutBkImg = NULL;
    	// Declare CvMat pointers
    	CvMat* pCutFrameMat = NULL;
    	CvMat* pCutFrMat = NULL;
    	CvMat* pCutBkMat = NULL;
    	// Declare the CvCapture pointer
    	CvCapture* pCapture = NULL;
    	// Declare CvMemStorage and CvSeq pointers
    	CvMemStorage* storage = cvCreateMemStorage();
    	CvSeq* lines = NULL;
    	// Current frame count
    	int nFrmNum = 0;
    	// Height of the sky region cropped off the top
    	int CutHeight = 250;
    	// Create named windows
    	cvNamedWindow("video", 1);
    	cvNamedWindow("background", 1);
    	cvNamedWindow("foreground", 1);
    	// Adjust the initial window position
    	cvMoveWindow("video", 300, 30);
    	//cvMoveWindow("background", 100, 100);
    	//cvMoveWindow("foreground", 300, 370);
    	// Exit if the video file cannot be opened
    	if (!(pCapture = cvCaptureFromFile("lane.mp4"))){
    		fprintf(stderr, "Can not open video file\n");
    		return -2;
    	}
    	// Read the video one frame at a time
    	while (pFrame = cvQueryFrame(pCapture)){
    		// Set the ROI to crop the image
    		cvSetImageROI(pFrame, cvRect(0, CutHeight, pFrame->width, pFrame->height - CutHeight));
    		nFrmNum++;
    		// Allocate memory on the first frame
    		if (nFrmNum == 1){
    			pCutFrame = cvCreateImage(cvSize(pFrame->width, pFrame->height - CutHeight), pFrame->depth, pFrame->nChannels);
    			cvCopy(pFrame, pCutFrame, 0);
    			pCutBkImg = cvCreateImage(cvSize(pCutFrame->width, pCutFrame->height), IPL_DEPTH_8U, 1);
    			pCutFrImg = cvCreateImage(cvSize(pCutFrame->width, pCutFrame->height), IPL_DEPTH_8U, 1);
    
    			pCutBkMat = cvCreateMat(pCutFrame->height, pCutFrame->width, CV_32FC1);
    			pCutFrMat = cvCreateMat(pCutFrame->height, pCutFrame->width, CV_32FC1);
    			pCutFrameMat = cvCreateMat(pCutFrame->height, pCutFrame->width, CV_32FC1);
    			// Convert to single-channel grayscale images before further processing
    			cvCvtColor(pCutFrame, pCutBkImg, CV_BGR2GRAY);
    			cvCvtColor(pCutFrame, pCutFrImg, CV_BGR2GRAY);
    			// Convert the images to matrices
    			cvConvert(pCutFrImg, pCutFrameMat);
    			cvConvert(pCutFrImg, pCutFrMat);
    			cvConvert(pCutFrImg, pCutBkMat);
    		}
    		else{
    			// Get the cropped frame
    			cvCopy(pFrame, pCutFrame, 0);
    			// Convert the current frame to grayscale
    			cvCvtColor(pCutFrame, pCutFrImg, CV_BGR2GRAY);
    			cvConvert(pCutFrImg, pCutFrameMat);
    			// Gaussian filtering first, to smooth the image
    			cvSmooth(pCutFrameMat, pCutFrameMat, CV_GAUSSIAN, 3, 0, 0.0);
    			// Subtract the background from the current frame
    			cvAbsDiff(pCutFrameMat, pCutBkMat, pCutFrMat);
    			// Binarize the foreground image
    			cvThreshold(pCutFrMat, pCutFrImg, 35, 255.0, CV_THRESH_BINARY);
    			// Morphological filtering to remove noise
    			cvErode(pCutFrImg, pCutFrImg, 0, 1);
    			cvDilate(pCutFrImg, pCutFrImg, 0, 1);
    			// Update the background model
    			cvRunningAvg(pCutFrameMat, pCutBkMat, 0.003, 0);
    			//pCutBkMat = cvCloneMat(pCutFrameMat);
    			// Convert the background back to image format for display
    			cvConvert(pCutBkMat, pCutBkImg);
    			cvCvtColor(pCutFrame, pCutBkImg, CV_BGR2GRAY);
    			// Canny edge detection
    			cvCanny(pCutFrImg, pCutFrImg, 50, 100);
    #pragma region Hough detection
    			lines = cvHoughLines2(pCutFrImg, storage, CV_HOUGH_PROBABILISTIC, 1, CV_PI / 180, 100, 40, 20);
    			printf("Lines number: %d\n", lines->total);
    			// Draw the detected line segments
    			for (int i = 0; i < lines->total; i++){
    				CvPoint* line = (CvPoint*)cvGetSeqElem(lines, i);
    				cvLine(pCutFrame, line[0], line[1], CV_RGB(255, 0, 0), 6, CV_AA);
    			}
    #pragma endregion
    			// Show the images
    			cvShowImage("video", pCutFrame);
    			cvShowImage("background", pCutBkImg);
    			cvShowImage("foreground", pCutFrImg);
    			// Key handling: space pauses, any other key breaks the loop
    			int temp = cvWaitKey(2);
    			if (temp == 32){
    				while (cvWaitKey() == -1);
    			}
    			else if (temp >= 0){
    				break;
    			}
    		}
    		// Reset the ROI (optional here)
    		cvResetImageROI(pFrame);
    	}
    	// Destroy the windows
    	cvDestroyWindow("video");
    	cvDestroyWindow("background");
    	cvDestroyWindow("foreground");
    	// Release the images and matrices
    	cvReleaseImage(&pCutFrImg);
    	cvReleaseImage(&pCutBkImg);
    	cvReleaseImage(&pCutFrame);
    	cvReleaseMat(&pCutFrameMat);
    	cvReleaseMat(&pCutFrMat);
    	cvReleaseMat(&pCutBkMat);
    	cvReleaseCapture(&pCapture);
    	return 0;
    }

     

  • Road lane line detection based on OpenCV

    2014-01-10 16:05:18
    Road lane line detection based on OpenCV. Edge detection is applied first to find the edges in the image, then Hough line fitting extracts the straight lines. Because this produces a great many lines, those whose angles are clearly wrong are filtered out first, and the longest group of the remaining lines is kept (a sketch of this filtering step follows below). ...
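
    A minimal sketch (not the article's own code) of the angle-filter-then-keep-longest step described above, assuming cv2.HoughLinesP output and illustrative angle bounds:

    import numpy as np
    
    def filter_lines_by_angle(lines, min_deg=20, max_deg=70):
        # `lines` is the array returned by cv2.HoughLinesP; the angle bounds are
        # illustrative and would need tuning for a real camera setup
        kept = []
        for line in lines:
            x1, y1, x2, y2 = line.reshape(4)
            angle = abs(np.degrees(np.arctan2(y2 - y1, x2 - x1)))
            if min_deg <= angle <= max_deg:  # drop lines whose angle is clearly wrong
                kept.append((x1, y1, x2, y2))
        # keep the longest candidates, as the article describes
        kept.sort(key=lambda l: (l[2] - l[0]) ** 2 + (l[3] - l[1]) ** 2, reverse=True)
        return kept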
  • This program presents a lane line detection algorithm based on OpenCV: the image is first binarized with OTSU thresholding, and the lane lines are then detected with an improved Hough transform, giving fairly good results (a rough sketch of this combination follows below).
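
    The program's own code is not shown here; purely as an illustration, a minimal sketch of OTSU binarization followed by a plain probabilistic Hough transform (not the program's improved variant), with a hypothetical input file:

    import cv2
    import numpy as np
    
    img = cv2.imread("road.jpg")                        # hypothetical input image
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # OTSU picks the threshold automatically; the 0 passed here is ignored
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    edges = cv2.Canny(binary, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, 100, minLineLength=50, maxLineGap=10)
    if lines is not None:
        for x1, y1, x2, y2 in lines.reshape(-1, 4):
            # mark detected segments in red on the original image
            cv2.line(img, (int(x1), int(y1)), (int(x2), int(y2)), (0, 0, 255), 3)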
  • A lane line detection method based on OpenCV

    2019-10-25 18:36:11
    Lane line detection is an image-processing technique applied to autonomous driving, and it has made its way into some production cars; automatic lane keeping on highways is one application. I recently studied two OpenCV lane-detection code bases; links first: A. the Udacity lane line detection code, B. Hough-transform-based ...
  • Lane line detection based on OpenCV (C++)

    2019-06-11 17:37:24
    Lane line detection based on OpenCV. Principle, basic idea of the algorithm: traditional lane line detection is mostly based on Hough line detection, but there is a common pitfall here: Hough line fitting is easily disturbed by all kinds of noise, so applying it directly sometimes works poorly; more often the Hough lines are ...
  • Lane line detection based on OpenCV

    2018-02-21 11:29:42
    Morphological filtering: erode the binarized image to remove noise, then dilate it to make up for the erosion of the lane lines (a sketch of this step follows below). ROI extraction: extract the region of interest. Edge detection: of the Canny, Sobel and Laplacian operators, Canny was chosen for giving the better results; in the code all three ...
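
    A minimal sketch of that erode-then-dilate step (a morphological opening) on a binary image; the file name and 3x3 kernel are assumptions:

    import cv2
    import numpy as np
    
    binary = cv2.imread("binary_mask.png", cv2.IMREAD_GRAYSCALE)  # hypothetical binarized input
    kernel = np.ones((3, 3), np.uint8)                   # assumed 3x3 structuring element
    eroded = cv2.erode(binary, kernel, iterations=1)     # remove small noise specks
    opened = cv2.dilate(eroded, kernel, iterations=1)    # restore the eroded lane-line width
    # equivalent one-liner: cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)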
  • opencv - lane line detection

    2019-11-24 16:24:13
    opencv lane line detection: 0. import data, 1. read the image and video, 2. grayscale, 3. Gaussian blur, 4. edge detection, 5. region-of-interest detection, 6. Hough transform, 7. blend the result image and display the video (a sketch of the blending step follows below). Today's note records a simple lane line detection in the steps listed above. 0. Import data: import matplotlib.pyplot ...
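
    Step 7 above (blending the drawn lines back onto the original frame) is the one step the earlier samples on this page skip; a minimal sketch, assuming a hypothetical frame and a line canvas of the same size:

    import cv2
    import numpy as np
    
    frame = cv2.imread("lane.jpg")                       # hypothetical original frame
    line_canvas = np.zeros_like(frame)                   # lines are drawn on a black image of the same size
    cv2.line(line_canvas, (100, frame.shape[0] - 1), (400, frame.shape[0] // 2), (255, 0, 0), 10)
    # weighted blend keeps the road visible underneath the drawn lane lines
    blended = cv2.addWeighted(frame, 0.8, line_canvas, 1.0, 0.0)
    cv2.imshow("blended", blended)
    cv2.waitKey(0)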
  • Lane line detection with OpenCV

    2020-11-10 16:44:57
    This article describes lane detection with computer vision and walks through identifying the lane region, computing the road's radius of curvature (RoC), and estimating the distance to the lane centre. Camera calibration: the lens used by almost every camera introduces some error when focusing light to capture an image, because these ... (a calibration sketch follows below).
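
    A minimal sketch of the usual chessboard calibration/undistortion workflow the article refers to; the board size, folder and file names are assumptions:

    import glob
    import cv2
    import numpy as np
    
    nx, ny = 9, 6                                        # assumed inner-corner counts of the chessboard
    objp = np.zeros((nx * ny, 3), np.float32)
    objp[:, :2] = np.mgrid[0:nx, 0:ny].T.reshape(-1, 2)  # 3D corner positions on the flat board (z = 0)
    
    objpoints, imgpoints = [], []
    for path in glob.glob("calibration/*.jpg"):          # hypothetical calibration images
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        found, corners = cv2.findChessboardCorners(gray, (nx, ny), None)
        if found:
            objpoints.append(objp)
            imgpoints.append(corners)
    
    # camera matrix and distortion coefficients, then undistort a road image
    ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(objpoints, imgpoints,
                                                       gray.shape[::-1], None, None)
    undistorted = cv2.undistort(cv2.imread("lane.jpg"), mtx, dist, None, mtx)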
  • python opencv lane line detection

    2021-04-20 01:55:53
    Python 3 OpenCV lane line detection: introduction, feature extraction, lane detection, bird's-eye view, finding the lane base, sliding-window upward scan, polynomial fitting, warping the bird's-eye view back to the original image, evaluation, summary, references. Introduction: a record of the lane detection process. This article detects lane lines with traditional methods, split into two main parts: ... (a sketch of the bird's-eye-view step follows below).
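
    As an illustration of the first two of those steps (not the article's own code), a sketch of warping a binary lane mask to a bird's-eye view and locating the lane-line bases from a column histogram; the source/destination points and file name are assumptions:

    import cv2
    import numpy as np
    
    binary = cv2.imread("lane_mask.png", cv2.IMREAD_GRAYSCALE)  # hypothetical binary lane mask
    h, w = binary.shape
    
    # assumed trapezoid in the source image and rectangle in the bird's-eye view
    src = np.float32([[w * 0.45, h * 0.63], [w * 0.55, h * 0.63], [w * 0.90, h], [w * 0.10, h]])
    dst = np.float32([[w * 0.20, 0], [w * 0.80, 0], [w * 0.80, h], [w * 0.20, h]])
    
    M = cv2.getPerspectiveTransform(src, dst)
    birdseye = cv2.warpPerspective(binary, M, (w, h))
    
    # a column histogram of the lower half gives the starting x of each lane line
    histogram = np.sum(birdseye[h // 2:, :], axis=0)
    left_base = int(np.argmax(histogram[: w // 2]))
    right_base = int(np.argmax(histogram[w // 2:])) + w // 2
    print(left_base, right_base)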
  • Lane line detection based on OpenCV, offered for reference; details below. Principle, basic idea of the algorithm: traditional lane line detection is mostly based on Hough line detection, but there is a common pitfall: Hough line fitting is easily disturbed by all kinds of noise, so applying it directly sometimes works ...
  • Beginner-level lane line detection (Python + OpenCV)

    2019-06-19 22:22:45
    Beginner-level lane line detection: references. References: link 1, link 2
