  • python, opencv: binocular stereo ranging code

    2020-05-25 19:32:50

    The foolproof version: grab a stereo camera, calibrate it, get the numbers, fill them in, and tweak the parameters.

    There are two parts: the camera parameter setup, and the ranging itself.

    Shoot the calibration photos and run them through the Stereo Camera Calibrator app in MATLAB.

    Take 30-odd image pairs, weed out the bad ones, and keep 10-20 for the computation; then fill the stereo camera's data into camera_configs.py.

    How to fill in camera_configs.py:

    Run the commands below in the MATLAB console to get the corresponding values, and fill them in one by one:

    1. stereoParams.CameraParameters1.IntrinsicMatrix; remember to transpose the matrix before filling it in

    2. stereoParams.CameraParameters1.RadialDistortion and stereoParams.CameraParameters1.TangentialDistortion

    3. stereoParams.CameraParameters2.IntrinsicMatrix; remember to transpose the matrix before filling it in

    4. stereoParams.CameraParameters2.RadialDistortion and stereoParams.CameraParameters2.TangentialDistortion

    5. stereoParams.RotationOfCamera2

    6. stereoParams.TranslationOfCamera2
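
    For reference, here is a minimal sketch (using the left-camera numbers from camera_configs.py below) of how those MATLAB values land in the config file; the transpose of IntrinsicMatrix and the k1, k2, p1, p2, k3 ordering of the distortion terms are the easiest things to get wrong:

    import numpy as np

    # pasted from stereoParams.CameraParameters1.IntrinsicMatrix (MATLAB layout)
    K1_matlab = [[745.7529, 0, 0],
                 [0.1488, 750.1008, 0],
                 [344.5329, 253.2383, 1]]
    rd1 = [0.2232, -1.2455, -0.2597]   # RadialDistortion: k1, k2, k3
    td1 = [-0.0014, 0.0023]            # TangentialDistortion: p1, p2

    left_camera_matrix = np.array(K1_matlab).T  # OpenCV wants the transpose
    left_distortion = np.array([[rd1[0], rd1[1], td1[0], td1[1], rd1[2]]])  # order: k1, k2, p1, p2, k3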

    camera_configs.py

    import cv2
    import numpy as np
    
    left_camera_matrix = np.array([[  745.7529,    0.1488,  344.5329],
                                    [0,  750.1008,  253.2383],
                                    [0., 0., 1.]])
    left_distortion = np.array([[0.2232,   -1.2455,  -0.0014,    0.0023, -0.2597]])
    
    right_camera_matrix = np.array([[  734.8314,    1.0615,  336.2630],
                                   [        0,  738.2798,  267.4528],
                                   [         0,         0,    1.0000]])
    
    right_distortion = np.array([[0.3381,   -2.4884, 0.0022,    0.0025,4.6913]])
    
    R = np.matrix([
        [ 1.0000,    0.0022,    0.0022],
        [-0.0022,    1.0000,    0.0088],
        [-0.0022,   -0.0088,    1.0000],
    ])
    
    # print(R)
    
    T = np.array([-18.0133, 1.0184, 0.9606]) # translation vector between the two cameras
    
    size = (640, 480) # image size
    
    # stereo rectification
    R1, R2, P1, P2, Q, validPixROI1, validPixROI2 = cv2.stereoRectify(left_camera_matrix, left_distortion,
                                                                      right_camera_matrix, right_distortion, size, R,
                                                                      T)
    # compute the rectification (undistortion) maps
    left_map1, left_map2 = cv2.initUndistortRectifyMap(left_camera_matrix, left_distortion, R1, P1, size, cv2.CV_16SC2)
    right_map1, right_map2 = cv2.initUndistortRectifyMap(right_camera_matrix, right_distortion, R2, P2, size, cv2.CV_16SC2)
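
    A quick sanity check on the numbers (a minimal sketch, assuming the horizontal-rig form of Q from cv2.stereoRectify, where Q[3][2] = -1/Tx): the baseline recovered from Q should roughly match the length of the T vector you filled in.

    import numpy as np
    import camera_configs as cfg

    # |1/Q[3,2]| is the baseline implied by the rectification; compare it
    # with the length of the calibrated translation vector T.
    print(abs(1.0 / cfg.Q[3, 2]), np.linalg.norm(cfg.T))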

    depth.py

    # This script displays a depth map and prints the real distance of any pixel you click
    # Once you can see it working, re-calibrate with your own rig before trusting the numbers
    
    import cv2
    import numpy as np
    import camera_configs  # the camera calibration data
    
    
    cam1 = cv2.VideoCapture(1) # camera IDs may differ between machines
    cam2 = cv2.VideoCapture(0) # camera IDs may differ between machines
    # cam1 = cv2.VideoCapture(1 + cv2.CAP_DSHOW)  # camera IDs may differ between machines
    # cam1.set(cv2.CAP_PROP_FRAME_WIDTH, 1280)  # width of the combined stereo frame
    # cam1.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)  # height of the combined stereo frame
    
    # windows for the depth map and for the tuning trackbars
    cv2.namedWindow("depth")
    cv2.namedWindow("config", cv2.WINDOW_NORMAL)
    cv2.moveWindow("left", 0, 0)
    cv2.moveWindow("right", 600, 0)
    
    cv2.createTrackbar("num", "config", 0, 60, lambda x: None)
    cv2.createTrackbar("blockSize", "config", 30, 255, lambda x: None)
    cv2.createTrackbar("SpeckleWindowSize", "config", 1, 10, lambda x: None)
    cv2.createTrackbar("SpeckleRange", "config", 1, 255, lambda x: None)
    cv2.createTrackbar("UniquenessRatio", "config", 1, 255, lambda x: None)
    cv2.createTrackbar("TextureThreshold", "config", 1, 255, lambda x: None)
    cv2.createTrackbar("UniquenessRatio", "config", 1, 255, lambda x: None)
    cv2.createTrackbar("MinDisparity", "config", 0, 255, lambda x: None)
    cv2.createTrackbar("PreFilterCap", "config", 1, 65, lambda x: None) # 注意调节的时候这个值必须是奇数
    cv2.createTrackbar("MaxDiff", "config", 1, 400, lambda x: None)
    
    # mouse click handler: print the distance at the clicked point
    def callbackFunc(e, x, y, f, p):
        if e == cv2.EVENT_LBUTTONDOWN:
            print(threeD[y][x])
            if abs(threeD[y][x][2]) < 3000:
                print("current distance: " + str(abs(threeD[y][x][2])))
            else:
                print("distance too large; click on a solid patch of the depth map")
    cv2.setMouseCallback("depth", callbackFunc, None)
    
    # FPS bookkeeping; do not use OpenCV's built-in FPS property, which reports the camera's maximum FPS, not the actual one
    frame_rate_calc = 1
    freq = cv2.getTickFrequency()
    font = cv2.FONT_HERSHEY_SIMPLEX
    
    imageCount = 1
    
    while True:
        t1 = cv2.getTickCount()
        ret1, frame1 = cam1.read()
        ret2, frame2 = cam2.read()
    
    
        if not ret1 or not ret2:
            print("camera is not connected!")
            break
    
        # if your device delivers the left and right views joined in one frame, split it here
        # frame1 = frame[0:480, 0:640]
        # frame2 = frame[0:480, 640:1280]
    
        ####### depth measurement starts here #######
        # stereo matching, using the BM algorithm
        
    
        # remap the frames with the calibration data to undistort and rectify them
        img1_rectified = cv2.remap(frame1, camera_configs.left_map1, camera_configs.left_map2, cv2.INTER_LINEAR,
                                   borderMode=cv2.BORDER_CONSTANT)
        img2_rectified = cv2.remap(frame2, camera_configs.right_map1, camera_configs.right_map2, cv2.INTER_LINEAR,
                                   borderMode=cv2.BORDER_CONSTANT)
    
        # on some builds remap() returns the image flipped; flip it back diagonally if needed
        # img1_rectified = cv2.flip(img1_rectified, -1)
        # img2_rectified = cv2.flip(img2_rectified, -1)
    
        # convert to grayscale for StereoBM: BM only works on single-channel images
        # single channel means one value per pixel, e.g. [123]; OpenCV's default order is BGR (not RGB), e.g. [123, 4, 134] for blue, green, red
        imgL = cv2.cvtColor(img1_rectified, cv2.COLOR_BGR2GRAY)
        imgR = cv2.cvtColor(img2_rectified, cv2.COLOR_BGR2GRAY)
    
        out = np.hstack((img1_rectified, img2_rectified))
        for i in range(0, out.shape[0], 30):
            cv2.line(out, (0, i), (out.shape[1], i), (0, 255, 0), 1)
        cv2.imshow("epipolar lines", out)
    
        # read the current values from the trackbars
        # BM is very sensitive to its parameters; tune them patiently for your own camera. The first two matter most, but the rest need tuning too
        num = cv2.getTrackbarPos("num", "config")
        SpeckleWindowSize = cv2.getTrackbarPos("SpeckleWindowSize", "config")
        SpeckleRange = cv2.getTrackbarPos("SpeckleRange", "config")
        blockSize = cv2.getTrackbarPos("blockSize", "config")
        UniquenessRatio = cv2.getTrackbarPos("UniquenessRatio", "config")
        TextureThreshold = cv2.getTrackbarPos("TextureThreshold", "config")
        MinDisparity = cv2.getTrackbarPos("MinDisparity", "config")
        PreFilterCap = cv2.getTrackbarPos("PreFilterCap", "config")
        MaxDiff = cv2.getTrackbarPos("MaxDiff", "config")
        if blockSize % 2 == 0:
            blockSize += 1
        if blockSize < 5:
            blockSize = 5
    
        # build the depth matcher with the BM algorithm; SGBM also works, slower than BM but more accurate
        stereo = cv2.StereoBM_create(
            numDisparities=16 * num,
            blockSize=blockSize,
        )
        stereo.setROI1(camera_configs.validPixROI1)
        stereo.setROI2(camera_configs.validPixROI2)
        stereo.setPreFilterCap(PreFilterCap)
        stereo.setMinDisparity(MinDisparity)
        stereo.setTextureThreshold(TextureThreshold)
        stereo.setUniquenessRatio(UniquenessRatio)
        stereo.setSpeckleWindowSize(SpeckleWindowSize)
        stereo.setSpeckleRange(SpeckleRange)
        stereo.setDisp12MaxDiff(MaxDiff)
    
        # compute the disparity matrix
        disparity = stereo.compute(imgL, imgR)
        # normalize the disparity into an 8-bit depth image for display
        disp = cv2.normalize(disparity, disparity, alpha=0, beta=255, norm_type=cv2.NORM_MINMAX, dtype=cv2.CV_8U)
    
        # reproject the disparity into 3-D space; the z component is the distance
        threeD = cv2.reprojectImageTo3D(disparity.astype(np.float32) / 16., camera_configs.Q)
        # false-color version of the depth map; irrelevant to the measurement, just nicer to look at
        fakeColorDepth = cv2.applyColorMap(disp, cv2.COLORMAP_JET)
    
        cv2.putText(frame1, "FPS: {0:.2f}".format(frame_rate_calc), (30, 50), font, 1, (255, 255, 0), 2, cv2.LINE_AA)
    
        # press S to save the images (the images/ directory must already exist)
        interrupt = cv2.waitKey(10)
        if interrupt & 0xFF == 27:  # press ESC to quit
            break
        if interrupt & 0xFF == ord('s'):
            cv2.imwrite('images/left' + '.jpg', frame1)
            cv2.imwrite('images/right' + '.jpg', frame2)
            cv2.imwrite('images/img1_rectified' + '.jpg', img1_rectified)  # rectified; check whether it is upside down
            cv2.imwrite('images/img2_rectified' + '.jpg', img2_rectified)
            cv2.imwrite('images/depth' + '.jpg', disp)
            cv2.imwrite('images/fakeColor' + '.jpg', fakeColorDepth)
            cv2.imwrite('images/epipolar' + '.jpg', out)
    
    
        ####### task 1: ranging ends here #######
    
        # display
        # cv2.imshow("frame", frame) # raw combined frame, useful for checking left/right
        cv2.imshow("frame1", frame1) # raw left view
        cv2.imshow("frame2", frame2) # raw right view
        cv2.imshow("img1_rectified", img1_rectified) # rectified left view
        cv2.imshow("img2_rectified", img2_rectified) # rectified right view
        cv2.imshow("depth", disp) # depth map plus the tuning trackbars
        cv2.imshow("fakeColor", fakeColorDepth) # false-color depth map, cosmetic only
    
        # optional filtering of the depth map; these lines lower the FPS when enabled
        img_medianBlur = cv2.medianBlur(disp, 25)
        img_medianBlur_fakeColorDepth = cv2.applyColorMap(img_medianBlur, cv2.COLORMAP_JET)
        img_GaussianBlur = cv2.GaussianBlur(disp, (7, 7), 0)
        img_Blur = cv2.blur(disp, (5, 5))
        cv2.imshow("img_GaussianBlur", img_GaussianBlur) # Gaussian-filtered depth map
        cv2.imshow("img_medianBlur_fakeColorDepth", img_medianBlur_fakeColorDepth) # false-color median-filtered depth map
        cv2.imshow("img_Blur", img_Blur) # mean-filtered depth map
        cv2.imshow("img_medianBlur", img_medianBlur) # median-filtered depth map
    
    
    
        t2 = cv2.getTickCount()
        time1 = (t2 - t1) / freq
        frame_rate_calc = 1 / time1
    
    cam1.release()
    cam2.release()
    cv2.destroyAllWindows()
    

    How to check that the data was filled in correctly

    Look at the rectified pair: corresponding points should lie on the same epipolar line.

    Possible problems:

    1. The left and right cameras are labeled the wrong way round.

    2. On some opencv builds remap() returns a flipped image; flip it back diagonally:

        # img1_rectified = cv2.flip(img1_rectified, -1)

        # img2_rectified = cv2.flip(img2_rectified, -1)

    3. Does the camera deliver one combined frame or two separate ones? If the left and right views arrive joined in one frame, split it:

        # frame1 = frame[0:480, 0:640]

        # frame2 = frame[0:480, 640:1280]

    Mine delivers two separate images, so this part stays commented out.

    Tune the parameters by eye: start with the first two; if those two don't give you a distance, tuning the rest won't help.

    Result: click in the depth window to measure the distance.

    I'll leave it like this for now and write up the details when I have time.

  • Binocular stereo ranging with opencv

    2017-12-26 11:47:21

    There is a QQ group, 193369905 (the group owner can also help with related graduation projects). I've been working on binocular stereo ranging for a while. There really is a lot of material online, in MATLAB, Python, and C++, but personally I found none of it detailed enough, and it's hard for a beginner to get started. So here is a foolproof tutorial: calibrate with MATLAB's graphical tools, no code needed at all, then implement the ranging and the depth map in C++. I'll skip the theory; beginners can just follow the steps.
    1. Preparation
    Hardware
    https://item.taobao.com/item.htm?spm=a1z10.1-c-s.w4004-17093912817.2.6af681c0jaZTur&id=562773790704
    One stereo camera (Taobao link above)
    Software
    VS + opencv3.1
    MATLAB + the calibration toolbox
    C++ code
    For configuring VS with opencv see this post: https://www.cnblogs.com/linshuhe/p/5764394.html; it is explained in enough detail. We will use VS + opencv3.1 for real-time ranging.

    2. MATLAB calibration

    MATLAB handles calibration of both monocular and stereo cameras; I have done this before, see this post: http://blog.csdn.net/hyacinthkiss/article/details/41317087
    Once you have the tools above ready, we can start properly.
    MATLAB calibration gives us the following data:

    Alternatively, do the monocular and stereo calibration in C++.

    Monocular calibration

    #include <iostream>
    #include <sstream>
    #include <time.h>
    #include <stdio.h>
    #include <fstream>
    
    #include <opencv2/core/core.hpp>
    #include <opencv2/imgproc/imgproc.hpp>
    #include <opencv2/calib3d/calib3d.hpp>
    #include <opencv2/highgui/highgui.hpp>
    
    using namespace cv;
    using namespace std;
    #define calibration
    
    int main()
    {
    #ifdef calibration
    
        ifstream fin("right_img.txt");             /* paths of the calibration images */
        ofstream fout("caliberation_result_right.txt");  /* file for the calibration results */
    
        // read each image, extract the corners, then refine them to sub-pixel accuracy
        int image_count = 0;  /* number of images */
        Size image_size;      /* image size */
        Size board_size = Size(11,8);             /* number of corners per row / column on the board */
        vector<Point2f> image_points_buf;         /* corners detected in the current image */
        vector<vector<Point2f>> image_points_seq; /* all detected corners */
        string filename;      // image file name
        vector<string> filenames;
    
    	while (getline(fin, filename))
    	{
    		++image_count;
    		Mat imageInput = imread(filename);
    		filenames.push_back(filename);
    
            // take the image size from the first image
    		if (image_count == 1)
    		{
    			image_size.width = imageInput.cols;
    			image_size.height = imageInput.rows;
    		}
    
            /* extract the corners */
            if (0 == findChessboardCorners(imageInput, board_size, image_points_buf))
            {
                // corners not found:
    			cout << "**" << filename << "** can not find chessboard corners!\n";
    			exit(1);
    		}
    		else
    		{
    			Mat view_gray;
                cvtColor(imageInput, view_gray, CV_BGR2GRAY);  // to grayscale (imread loads BGR)
    
                /* sub-pixel refinement */
                // image_points_buf holds the initial corners and receives the refined sub-pixel positions
                // Size(5,5): search window size
                // (-1,-1): no dead zone
                // TermCriteria: stopping rule for the corner iteration, a combination of iteration count and corner accuracy
    			cornerSubPix(view_gray, image_points_buf, Size(5, 5), Size(-1, -1), TermCriteria(CV_TERMCRIT_EPS + CV_TERMCRIT_ITER, 30, 0.1));
    
                image_points_seq.push_back(image_points_buf);  // keep the sub-pixel corners
    
                /* show the corner positions on the image */
                drawChessboardCorners(view_gray, board_size, image_points_buf, false); // mark the corners in the image
    
    			imshow("Camera Calibration", view_gray);       // 显示图片
    
                waitKey(500); // pause 0.5 s
    		}
    	}
        int CornerNum = board_size.width * board_size.height;  // total number of corners per image
    
        //-------------- camera calibration --------------
    
        /* board geometry */
        Size square_size = Size(60, 60);         /* measured size of each square on the board */
        vector<vector<Point3f>> object_points;   /* 3-D coordinates of the board corners */
    
        /* intrinsic and extrinsic parameters */
        Mat cameraMatrix = Mat(3, 3, CV_32FC1, Scalar::all(0));  /* camera intrinsic matrix */
        vector<int> point_counts;   // number of corners in each image
        Mat distCoeffs = Mat(1, 5, CV_32FC1, Scalar::all(0));       /* the camera's 5 distortion coefficients: k1,k2,p1,p2,k3 */
        vector<Mat> rvecsMat;      /* rotation vector of each image */
        vector<Mat> tvecsMat;      /* translation vector of each image */
    
        /* initialize the 3-D coordinates of the board corners */
    	int i, j, t;
    	for (t = 0; t<image_count; t++)
    	{
    		vector<Point3f> tempPointSet;
    		for (i = 0; i<board_size.height; i++)
    		{
    			for (j = 0; j<board_size.width; j++)
    			{
    				Point3f realPoint;
    
                    /* assume the board lies on the z=0 plane of the world frame */
    				realPoint.x = i * square_size.width;
    				realPoint.y = j * square_size.height;
    				realPoint.z = 0;
    				tempPointSet.push_back(realPoint);
    			}
    		}
    		object_points.push_back(tempPointSet);
    	}
    
        /* initialize the corner count per image, assuming the full board is visible in every image */
    	for (i = 0; i<image_count; i++)
    	{
    		point_counts.push_back(board_size.width * board_size.height);
    	}
    
        /* run the calibration */
        // object_points: 3-D corner coordinates in the world frame
        // image_points_seq: image coordinates of each inner corner
        // image_size: image size in pixels
        // cameraMatrix: output intrinsic matrix
        // distCoeffs: output distortion coefficients
        // rvecsMat: output rotation vectors
        // tvecsMat: output translation vectors
        // 0: flags for the calibration algorithm
        calibrateCamera(object_points, image_points_seq, image_size, cameraMatrix, distCoeffs, rvecsMat, tvecsMat, 0);
    
        //------------------------ calibration done ------------------------------------
    
        // ------------------- evaluate the calibration results ------------------------
    
        double total_err = 0.0;         /* sum of the per-image mean errors */
        double err = 0.0;               /* mean error of one image */
        vector<Point2f> image_points2;  /* recomputed (reprojected) points */
        fout << "calibration error of each image:\n";
    
    	for (i = 0; i<image_count; i++)
    	{
    		vector<Point3f> tempPointSet = object_points[i];
    
            /* reproject the 3-D points through the calibrated parameters to get new projections */
    		projectPoints(tempPointSet, rvecsMat[i], tvecsMat[i], cameraMatrix, distCoeffs, image_points2);
    
            /* error between the new projections and the detected points */
    		vector<Point2f> tempImagePoint = image_points_seq[i];
    		Mat tempImagePointMat = Mat(1, tempImagePoint.size(), CV_32FC2);
    		Mat image_points2Mat = Mat(1, image_points2.size(), CV_32FC2);
    
    		for (int j = 0; j < tempImagePoint.size(); j++)
    		{
    			image_points2Mat.at<Vec2f>(0, j) = Vec2f(image_points2[j].x, image_points2[j].y);
    			tempImagePointMat.at<Vec2f>(0, j) = Vec2f(tempImagePoint[j].x, tempImagePoint[j].y);
    		}
    		err = norm(image_points2Mat, tempImagePointMat, NORM_L2);
    		total_err += err /= point_counts[i];
    		fout << "第" << i + 1 << "幅图像的平均误差:" << err << "像素" << endl;
    	}
    	fout << "总体平均误差:" << total_err / image_count << "像素" << endl << endl;
    
        //------------------------- evaluation done ---------------------------------------
    
        //----------------------- save the calibration results -------------------------------
        Mat rotation_matrix = Mat(3, 3, CV_32FC1, Scalar::all(0));  /* rotation matrix of each image */
        fout << "camera intrinsic matrix:" << endl;
        fout << cameraMatrix << endl << endl;
        fout << "distortion coefficients:\n";
        fout << distCoeffs << endl << endl << endl;
    	for (int i = 0; i<image_count; i++)
    	{
    		fout << "第" << i + 1 << "幅图像的旋转向量:" << endl;
    		fout << tvecsMat[i] << endl;
    
    		/* 将旋转向量转换为相对应的旋转矩阵 */
    		Rodrigues(tvecsMat[i], rotation_matrix);
    		fout << "第" << i + 1 << "幅图像的旋转矩阵:" << endl;
    		fout << rotation_matrix << endl;
    		fout << "第" << i + 1 << "幅图像的平移向量:" << endl;
    		fout << rvecsMat[i] << endl << endl;
    	}
    	fout << endl;
    
        //-------------------- saving done -------------------------------
    
        //---------------------- display the undistorted results --------------------------------
    
    	Mat mapx = Mat(image_size, CV_32FC1);
    	Mat mapy = Mat(image_size, CV_32FC1);
    	Mat R = Mat::eye(3, 3, CV_32F);
    	string imageFileName;
    	std::stringstream StrStm;
    	for (int i = 0; i != image_count; i++)
    	{
    		initUndistortRectifyMap(cameraMatrix, distCoeffs, R, cameraMatrix, image_size, CV_32FC1, mapx, mapy);
    		Mat imageSource = imread(filenames[i]);
    		Mat newimage = imageSource.clone();
    		remap(imageSource, newimage, mapx, mapy, INTER_LINEAR);
    		StrStm.clear();
    		imageFileName.clear();
    		StrStm << i + 1;
    		StrStm >> imageFileName;
    		imageFileName += "_d.jpg";
    		imwrite(imageFileName, newimage);
    	}
    
    	fin.close();
    	fout.close();
    
    #else 
            /// read an image without changing its color type
    		Mat src = imread("F:\\lane_line_detection\\left_img\\1.jpg");
    		Mat distortion = src.clone();
    		Mat camera_matrix = Mat(3, 3, CV_32FC1);
    		Mat distortion_coefficients;
    
    
            // load the camera intrinsics and distortion coefficients
    		FileStorage file_storage("F:\\lane_line_detection\\left_img\\Intrinsic.xml", FileStorage::READ);
    		file_storage["CameraMatrix"] >> camera_matrix;
    		file_storage["Dist"] >> distortion_coefficients;
    		file_storage.release();
    
            // undistort
    		cv::undistort(src, distortion, camera_matrix, distortion_coefficients);
    
    		cv::imshow("img", src);
    		cv::imshow("undistort", distortion);
    		cv::imwrite("undistort.jpg", distortion);
    
    		cv::waitKey(0);
    #endif // DEBUG
    	return 0;
    }
    
    
    

    Stereo calibration

    // stereo camera calibration
    #include <opencv2/core/core.hpp>
    #include <opencv2/imgproc/imgproc.hpp>
    #include <opencv2/calib3d/calib3d.hpp>
    #include <opencv2/highgui/highgui.hpp>
    
    #include <vector>
    #include <string>
    #include <algorithm>
    #include <iostream>
    #include <iterator>
    #include <stdio.h>
    #include <stdlib.h>
    #include <ctype.h>
    
    #include <opencv2/opencv.hpp>
    //#include <cv.h>
    //#include <cv.hpp>
    
    using namespace std;
    using namespace cv;
    // camera resolution
    const int imageWidth = 640;
    const int imageHeight = 480;
    // number of corners per row
    const int boardWidth = 11;
    // number of corners per column
    const int boardHeight = 8;
    // total number of corners
    const int boardCorner = boardWidth * boardHeight;
    // number of image pairs used for the calibration
    const int frameNumber = 8;
    // size of a chessboard square, in mm
    const int squareSize = 60;
    // inner-corner layout of the board
    const Size boardSize = Size(boardWidth, boardHeight);
    Size imageSize = Size(imageWidth, imageHeight);
    
    Mat R, T, E, F;
    // R rotation, T translation, E essential matrix, F fundamental matrix
    vector<Mat> rvecs; // rotation vectors
    vector<Mat> tvecs; // translation vectors
    // corner coordinates of all left-camera images
    vector<vector<Point2f>> imagePointL;
    // corner coordinates of all right-camera images
    vector<vector<Point2f>> imagePointR;
    // physical coordinates of the corners, per image
    vector<vector<Point3f>> objRealPoint;
    // corners of one left image
    vector<Point2f> cornerL;
    // corners of one right image
    vector<Point2f> cornerR;
    
    Mat rgbImageL, grayImageL;
    Mat rgbImageR, grayImageR;
    
    Mat intrinsic;
    Mat distortion_coeff;
    // rectification rotations R, projection matrices P, reprojection matrix Q
    Mat Rl, Rr, Pl, Pr, Q;
    // rectification maps
    Mat mapLx, mapLy, mapRx, mapRy;
    Rect validROIL, validROIR;
    // rectification crops the images; validROI is the cropped region
    /* left-camera intrinsic matrix, calibrated beforehand
    fx 0 cx
    0 fy cy
    0  0  1
    */
    Mat cameraMatrixL = (Mat_<double>(3,3) << 271.7792785637638, 0, 313.4559554347688,
        0, 271.9513066781816, 232.7561625477742,
        0, 0, 1);
    // left-camera distortion parameters
    Mat distCoeffL = (Mat_<double>(5,1) << -0.3271838086967946, 0.1326861805365006, -0.0008527407221595511, -0.0003398213328658643, -0.02847446149341753);
    /* right-camera intrinsic matrix, calibrated beforehand
    fx 0 cx
    0 fy cy
    0  0  1
    */
    Mat cameraMatrixR = (Mat_<double>(3,3) << 268.4990780091891, 0, 325.75156647688,
        0, 269.7906504513069, 212.5928387210573,
        0, 0, 1);
    Mat distCoeffR = (Mat_<double>(5,1) << -0.321298212260166, 0.1215100334221875, -0.0007504391036193558, -1.732473939234179e-05, -0.02234659175488724);
    
    /* compute the actual physical coordinates of the board corners */
    void calRealPoint(vector<vector<Point3f>>& obj, int boardWidth, int boardHeight, int imgNumber, int squareSize)
    {
        vector<Point3f> imgpoint;
        for (int rowIndex = 0; rowIndex < boardHeight; rowIndex++)
        {
            for (int colIndex = 0; colIndex < boardWidth; colIndex++)
            {
                imgpoint.push_back(Point3f(rowIndex * squareSize, colIndex * squareSize, 0));
            }
        }
        for (int imgIndex = 0; imgIndex < imgNumber; imgIndex++)
        {
            obj.push_back(imgpoint);
        }
    }
    
    
    
    void outputCameraParam(void)
    {
        /* save and print the parameters */
    	FileStorage fs("intrisics.yml", FileStorage::WRITE);
    	if (fs.isOpened())
    	{
    		fs << "cameraMatrixL" << cameraMatrixL << "cameraDistcoeffL" << distCoeffL << "cameraMatrixR" << cameraMatrixR << "cameraDistcoeffR" << distCoeffR;
    		fs.release();
    		cout << "cameraMatrixL=:" << cameraMatrixL << endl << "cameraDistcoeffL=:" << distCoeffL << endl << "cameraMatrixR=:" << cameraMatrixR << endl << "cameraDistcoeffR=:" << distCoeffR << endl;
    	}
    	else
    	{
    		cout << "Error: can not save the intrinsics!!!!" << endl;
    	}
    
    	fs.open("extrinsics.yml", FileStorage::WRITE);
    	if (fs.isOpened())
    	{
    		fs << "R" << R << "T" << T << "Rl" << Rl << "Rr" << Rr << "Pl" << Pl << "Pr" << Pr << "Q" << Q;
    		cout << "R=" << R << endl << "T=" << T << endl << "Rl=" << Rl << endl << "Rr" << Rr << endl << "Pl" << Pl << endl << "Pr" << Pr << endl << "Q" << Q << endl;
    		fs.release();
    	}
    	else
    	{
    		cout << "Error: can not save the extrinsic parameters\n";
    	}
    
    }
    
    
    int main(int argc, char* argv[])
    {
        Mat img;
        int goodFrameCount = 0;
        while (goodFrameCount < frameNumber)
        {
            char filename[100];
            /* read the left image */
            sprintf(filename, "/home/crj/calibration/left_img/left%d.jpg", goodFrameCount + 1);
    		
            rgbImageL = imread(filename, CV_LOAD_IMAGE_COLOR);
    		imshow("chessboardL", rgbImageL);
            cvtColor(rgbImageL, grayImageL, CV_BGR2GRAY);
            /* read the right image */
            sprintf(filename, "/home/crj/calibration/right_img/right%d.jpg", goodFrameCount + 1);
            rgbImageR = imread(filename, CV_LOAD_IMAGE_COLOR);
            cvtColor(rgbImageR, grayImageR, CV_BGR2GRAY);
    
            bool isFindL, isFindR;
            isFindL = findChessboardCorners(rgbImageL, boardSize, cornerL);
            isFindR = findChessboardCorners(rgbImageR, boardSize, cornerR);
            if (isFindL == true && isFindR == true)
            {
                cornerSubPix(grayImageL, cornerL, Size(5,5), Size(-1,-1), TermCriteria(CV_TERMCRIT_EPS | CV_TERMCRIT_ITER, 20, 0.1));
                drawChessboardCorners(rgbImageL, boardSize, cornerL, isFindL);
                imshow("chessboardL", rgbImageL);
                imagePointL.push_back(cornerL);
    
                cornerSubPix(grayImageR, cornerR, Size(5,5), Size(-1,-1), TermCriteria(CV_TERMCRIT_EPS | CV_TERMCRIT_ITER, 20, 0.1));
                drawChessboardCorners(rgbImageR, boardSize, cornerR, isFindR);
                imshow("chessboardR", rgbImageR);
                imagePointR.push_back(cornerR);
    
                goodFrameCount++;
                cout << "the image" << goodFrameCount << " is good" << endl;
            }
            else
            {
                cout << "the image is bad please try again" << endl;
            }
            if (waitKey(10) == 'q')
            {
                break;
            }
        }
    
        // compute the actual 3-D coordinates of the corners from the measured square size
        calRealPoint(objRealPoint, boardWidth, boardHeight, frameNumber, squareSize);
        cout << "cal real successful" << endl;
    
        // calibrate the stereo pair
        double rms = stereoCalibrate(objRealPoint, imagePointL, imagePointR,
            cameraMatrixL, distCoeffL,
            cameraMatrixR, distCoeffR,
            Size(imageWidth, imageHeight), R, T, E, F, CALIB_USE_INTRINSIC_GUESS,
            TermCriteria(TermCriteria::COUNT + TermCriteria::EPS, 100, 1e-5));
    
        cout << "Stereo Calibration done with RMS error = " << rms << endl;
    
        stereoRectify(cameraMatrixL, distCoeffL, cameraMatrixR, distCoeffR, imageSize, R, T, Rl, 
            Rr, Pl, Pr, Q, CALIB_ZERO_DISPARITY, -1, imageSize, &validROIL,&validROIR);
        
    
        // rectification maps for the two cameras
        initUndistortRectifyMap(cameraMatrixL, distCoeffL, Rl, Pl, imageSize, CV_32FC1, mapLx, mapLy);
        initUndistortRectifyMap(cameraMatrixR, distCoeffR, Rr, Pr, imageSize, CV_32FC1, mapRx, mapRy);
    
        Mat rectifyImageL, rectifyImageR;
        cvtColor(grayImageL, rectifyImageL, CV_GRAY2BGR);
        cvtColor(grayImageR, rectifyImageR, CV_GRAY2BGR);
    
        imshow("Recitify Before", rectifyImageL);
        cout << "按Q1退出..." << endl;
        //经过remap之后,左右相机的图像已经共面并且行对准了
        Mat rectifyImageL2, rectifyImageR2;
        remap(rectifyImageL, rectifyImageL2, mapLx, mapLy, INTER_LINEAR);
        remap(rectifyImageR, rectifyImageR2, mapRx, mapRy, INTER_LINEAR);
        cout << "按Q2退出..." << endl;
    
        imshow("rectifyImageL", rectifyImageL2);
        imshow("rectifyImageR", rectifyImageR2);
    
        outputCameraParam();
    
        // display the rectification result
        Mat canvas;
        double sf;
        int w,h;
        sf = 600. / MAX(imageSize.width, imageSize.height);
        w = cvRound(imageSize.width * sf);
        h = cvRound(imageSize.height * sf);
        canvas.create(h, w*2, CV_8UC3);
    
        // draw the left image on the canvas
        Mat canvasPart = canvas(Rect(0, 0, w, h));
        resize(rectifyImageL2, canvasPart, canvasPart.size(), 0, 0, INTER_AREA);
        Rect vroiL(cvRound(validROIL.x*sf), cvRound(validROIL.y*sf),
            cvRound(validROIL.width*sf), cvRound(validROIL.height*sf));
        rectangle(canvasPart, vroiL, Scalar(0, 0, 255), 3, 8);
    
        cout << "Painted ImageL" << endl;
    
        // draw the right image on the canvas
        canvasPart = canvas(Rect(w, 0, w, h));
        resize(rectifyImageR2, canvasPart, canvasPart.size(), 0, 0, INTER_LINEAR);
        Rect vroiR(cvRound(validROIR.x*sf), cvRound(validROIR.y*sf),
            cvRound(validROIR.width*sf), cvRound(validROIR.height*sf));
        rectangle(canvasPart, vroiR, Scalar(0, 255, 0), 3, 8);
    
        cout << "Painted ImageR" << endl;
    
        // draw matching horizontal lines
        for (int i = 0; i < canvas.rows; i += 16)
            line(canvas, Point(0, i), Point(canvas.cols, i), Scalar(0, 255, 0), 1, 8);
        
        imshow("rectified", canvas);
        
        cout << "wait key" << endl;
        waitKey(0);
        return 0;
    }
    
    

    3. Ranging with C++ and opencv
    Copy your MATLAB numbers into the code below. Be careful not to put anything in the wrong place, or you will get bizarre distances; the comments spell out exactly where each value goes.

    /*
    camera parameters calibrated beforehand
    fx 0 cx
    0 fy cy
    0 0  1
    */
    Mat cameraMatrixL = (Mat_<double>(3, 3) << 682.55880, 0, 384.13666,
        0, 682.24569, 311.19558,
        0, 0, 1);
    // the left-camera intrinsic matrix from matlab
    Mat distCoeffL = (Mat_<double>(5, 1) << -0.51614, 0.36098, 0.00523, -0.00225, 0.00000);
    // the left-camera distortion parameters from Matlab
    
    Mat cameraMatrixR = (Mat_<double>(3, 3) << 685.03817, 0, 397.39092,
        0, 682.54282, 272.04875,
        0, 0, 1);
    // the right-camera intrinsic matrix from matlab
    
    Mat distCoeffR = (Mat_<double>(5, 1) << -0.46640, 0.22148, 0.00947, -0.00242, 0.00000);
    // the right-camera distortion parameters from Matlab
    
    Mat T = (Mat_<double>(3, 1) << -61.34485, 2.89570, -4.76870);// T, the translation vector
                                                        // matlab's T parameter
    Mat rec = (Mat_<double>(3, 1) << -0.00306, -0.03207, 0.00206);// rec, the rotation vector; matlab's om parameter
    Mat R;// R, the rotation matrix
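
    matlab's om parameter is a Rodrigues rotation vector, and the main program below turns it into the 3x3 matrix R with Rodrigues(rec, R). If you want to check your numbers first, here is a minimal Python sketch of the same conversion, using the rec values above:

    import cv2
    import numpy as np

    rec = np.array([-0.00306, -0.03207, 0.00206])  # matlab's om parameter
    R, _ = cv2.Rodrigues(rec)                      # 3x3 rotation matrix, as in the C++ Rodrigues(rec, R)
    print(R)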
    
    

    4. Complete code (here I range from two still images taken with the camera above; it can just as well be adapted to real-time video. Be sure to copy the images into your project directory.)

    
    /******************************/
    /*    stereo matching and ranging    */
    /******************************/
    
    #include <opencv2/opencv.hpp>  
    #include <iostream>  
    
    using namespace std;
    using namespace cv;
    
    const int imageWidth = 800;                             // camera resolution  
    const int imageHeight = 600;
    Size imageSize = Size(imageWidth, imageHeight);
    
    Mat rgbImageL, grayImageL;
    Mat rgbImageR, grayImageR;
    Mat rectifyImageL, rectifyImageR;
    
    Rect validROIL;// rectification crops the image; validROI is the cropped region  
    Rect validROIR;
    
    Mat mapLx, mapLy, mapRx, mapRy;     // rectification maps  
    Mat Rl, Rr, Pl, Pr, Q;              // rectification rotations R, projection matrices P, reprojection matrix Q
    Mat xyz;              // 3-D coordinates
    
    Point origin;         // point where the mouse was pressed
    Rect selection;      // rectangular selection
    bool selectObject = false;    // whether an object is being selected
    
    int blockSize = 0, uniquenessRatio = 0, numDisparities = 0;
    Ptr<StereoBM> bm = StereoBM::create(16, 9);
    
    /*
    camera parameters calibrated beforehand
    fx 0 cx
    0 fy cy
    0 0  1
    */
    Mat cameraMatrixL = (Mat_<double>(3, 3) << 682.55880, 0, 384.13666,
        0, 682.24569, 311.19558,
        0, 0, 1);
    // the left-camera intrinsic matrix from matlab
    Mat distCoeffL = (Mat_<double>(5, 1) << -0.51614, 0.36098, 0.00523, -0.00225, 0.00000);
    // the left-camera distortion parameters from Matlab
    
    Mat cameraMatrixR = (Mat_<double>(3, 3) << 685.03817, 0, 397.39092,
        0, 682.54282, 272.04875,
        0, 0, 1);
    // the right-camera intrinsic matrix from matlab
    
    Mat distCoeffR = (Mat_<double>(5, 1) << -0.46640, 0.22148, 0.00947, -0.00242, 0.00000);
    // the right-camera distortion parameters from Matlab
    
    Mat T = (Mat_<double>(3, 1) << -61.34485, 2.89570, -4.76870);// T, the translation vector
                                                        // matlab's T parameter
    Mat rec = (Mat_<double>(3, 1) << -0.00306, -0.03207, 0.00206);// rec, the rotation vector; matlab's om parameter
    Mat R;// R, the rotation matrix
    
    
          /***** stereo matching *****/
    void stereo_match(int, void*)
    {
        bm->setBlockSize(2 * blockSize + 5);     // SAD window size; 5 to 21 works well
        bm->setROI1(validROIL);
        bm->setROI2(validROIR);
        bm->setPreFilterCap(31);
        bm->setMinDisparity(0);  // minimum disparity; default 0, may be negative, int
        bm->setNumDisparities(numDisparities * 16 + 16);// disparity range, i.e. max minus min disparity; must be a multiple of 16, int
        bm->setTextureThreshold(10);
        bm->setUniquenessRatio(uniquenessRatio);// uniquenessRatio mainly suppresses false matches
        bm->setSpeckleWindowSize(100);
        bm->setSpeckleRange(32);
        bm->setDisp12MaxDiff(-1);
        Mat disp, disp8;
        bm->compute(rectifyImageL, rectifyImageR, disp);// the input images must be grayscale
        disp.convertTo(disp8, CV_8U, 255 / ((numDisparities * 16 + 16)*16.));// the computed disparity is CV_16S
        reprojectImageTo3D(disp, xyz, Q, true); // multiply the X/W, Y/W, Z/W from reprojectImageTo3D by 16 (i.e. divide W by 16) to get correct 3-D coordinates
        xyz = xyz * 16;
        imshow("disparity", disp8);
    }
    
    /***** mouse callback *****/
    static void onMouse(int event, int x, int y, int, void*)
    {
    	if (selectObject)
    	{
    		selection.x = MIN(x, origin.x);
    		selection.y = MIN(y, origin.y);
    		selection.width = std::abs(x - origin.x);
    		selection.height = std::abs(y - origin.y);
    	}
    
    	switch (event)
    	{
        case EVENT_LBUTTONDOWN:   // left mouse button pressed
    		origin = Point(x, y);
    		selection = Rect(x, y, 0, 0);
    		selectObject = true;
    		cout << origin << "in world coordinate is: " << xyz.at<Vec3f>(origin) << endl;
    		break;
        case EVENT_LBUTTONUP:    // left mouse button released
    		selectObject = false;
    		if (selection.width > 0 && selection.height > 0)
    			break;
    	}
    }
    
    
    /***** main *****/
    int main()
    {
        /*
        stereo rectification
        */
        Rodrigues(rec, R); // Rodrigues transform
    	stereoRectify(cameraMatrixL, distCoeffL, cameraMatrixR, distCoeffR, imageSize, R, T, Rl, Rr, Pl, Pr, Q, CALIB_ZERO_DISPARITY,
    		0, imageSize, &validROIL, &validROIR);
        initUndistortRectifyMap(cameraMatrixL, distCoeffL, Rl, Pl, imageSize, CV_32FC1, mapLx, mapLy);  // Pl for the left camera
    	initUndistortRectifyMap(cameraMatrixR, distCoeffR, Rr, Pr, imageSize, CV_32FC1, mapRx, mapRy);
    
        /*
        read the images
        */
    	rgbImageL = imread("left.bmp", CV_LOAD_IMAGE_COLOR);
    	cvtColor(rgbImageL, grayImageL, CV_BGR2GRAY);
    	rgbImageR = imread("right.bmp", CV_LOAD_IMAGE_COLOR);
    	cvtColor(rgbImageR, grayImageR, CV_BGR2GRAY);
    
    	imshow("ImageL Before Rectify", grayImageL);
    	imshow("ImageR Before Rectify", grayImageR);
    
        /*
        after remap, the images of the two cameras are coplanar and row-aligned
        */
    	remap(grayImageL, rectifyImageL, mapLx, mapLy, INTER_LINEAR);
    	remap(grayImageR, rectifyImageR, mapRx, mapRy, INTER_LINEAR);
    
        /*
        display the rectification result
        */
        Mat rgbRectifyImageL, rgbRectifyImageR;
        cvtColor(rectifyImageL, rgbRectifyImageL, CV_GRAY2BGR);  // back to 3 channels so we can draw in color
    	cvtColor(rectifyImageR, rgbRectifyImageR, CV_GRAY2BGR);
    
        // show them individually
    	//rectangle(rgbRectifyImageL, validROIL, Scalar(0, 0, 255), 3, 8);
    	//rectangle(rgbRectifyImageR, validROIR, Scalar(0, 0, 255), 3, 8);
    	imshow("ImageL After Rectify", rgbRectifyImageL);
    	imshow("ImageR After Rectify", rgbRectifyImageR);
    
        // show both on one canvas
    	Mat canvas;
    	double sf;
    	int w, h;
    	sf = 600. / MAX(imageSize.width, imageSize.height);
    	w = cvRound(imageSize.width * sf);
    	h = cvRound(imageSize.height * sf);
        canvas.create(h, w * 2, CV_8UC3);   // note: 3 channels
    
        // draw the left image on the canvas
        Mat canvasPart = canvas(Rect(0, 0, w, h));                                    // take part of the canvas  
        resize(rgbRectifyImageL, canvasPart, canvasPart.size(), 0, 0, INTER_AREA);     // scale the image to the size of canvasPart  
        Rect vroiL(cvRound(validROIL.x*sf), cvRound(validROIL.y*sf),                // the region that was kept    
            cvRound(validROIL.width*sf), cvRound(validROIL.height*sf));
        //rectangle(canvasPart, vroiL, Scalar(0, 0, 255), 3, 8);                      // draw a rectangle  
    	cout << "Painted ImageL" << endl;
    
        // draw the right image on the canvas
        canvasPart = canvas(Rect(w, 0, w, h));                                      // the other part of the canvas  
    	resize(rgbRectifyImageR, canvasPart, canvasPart.size(), 0, 0, INTER_LINEAR);
    	Rect vroiR(cvRound(validROIR.x * sf), cvRound(validROIR.y*sf),
    		cvRound(validROIR.width * sf), cvRound(validROIR.height * sf));
    	//rectangle(canvasPart, vroiR, Scalar(0, 0, 255), 3, 8);
    	cout << "Painted ImageR" << endl;
    
        // draw matching horizontal lines
    	for (int i = 0; i < canvas.rows; i += 16)
    		line(canvas, Point(0, i), Point(canvas.cols, i), Scalar(0, 255, 0), 1, 8);
    	imshow("rectified", canvas);
    
        /*
        stereo matching
        */
    	namedWindow("disparity", CV_WINDOW_AUTOSIZE);
        // trackbar for the SAD window size
    	createTrackbar("BlockSize:\n", "disparity", &blockSize, 8, stereo_match);
        // trackbar for the uniqueness ratio
    	createTrackbar("UniquenessRatio:\n", "disparity", &uniquenessRatio, 50, stereo_match);
        // trackbar for the number of disparities
    	createTrackbar("NumDisparities:\n", "disparity", &numDisparities, 16, stereo_match);
        // setMouseCallback(window name, callback, user data; usually 0)
    	setMouseCallback("disparity", onMouse, 0);
    	stereo_match(0, 0);
    
    	waitKey(0);
    	return 0;
    }
    

    If you have questions about this code or the calibration process, or run into trouble, email 1039463596@qq.com or join QQ group 193369905 to discuss. Thanks everyone; original write-ups take effort, and tips are welcome.

  • 通过双目视觉测距.zip (ranging via binocular stereo vision)

    2020-07-09 16:03:52
    Python code for ranging via binocular stereo vision; it runs.
  • (C++ code) Binocular stereo ranging implemented with the SIFT algorithm; best run under an opencv2 version.
  • Stereo ranging with vision

    2017-12-13 17:24:15
    There is plenty of material online about stereo measurement, but much of it, as you know, is vague and incomplete. This code computes the depth information completely, provided your calibration is accurate.
  • Stereo rectification, matching, and ranging based on opencv

    After finishing the stereo calibration, the code in this article takes MATLAB's stereo-calibration results and performs stereo matching and ranging.

    Six parameters are handed to opencv:
    camera1 intrinsics, stereoParams.CameraParameters1.IntrinsicMatrix; transpose it before giving it to opencv;
    camera1 distortion;
    camera2 intrinsics, stereoParams.CameraParameters2.IntrinsicMatrix; transpose it before giving it to opencv;
    camera2 distortion;
    the rotation matrix R of camera2 relative to camera1;
    the translation vector T.
    RadialDistortion holds the radial terms K1, K2, K3 and TangentialDistortion the tangential terms P1, P2; when calling opencv, write them in the order K1, K2, P1, P2, K3.
    stereoParams.RotationOfCamera2 also needs to be transposed before opencv can use it, as sketched below.
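
    That transpose is easy to miss in practice; a minimal numpy sketch (R_matlab stands in for the values pasted from stereoParams.RotationOfCamera2):

    import numpy as np

    R_matlab = np.array([[0.9226, -0.0007, 0.3858],
                         [0.0018, 1.0000, -0.0026],
                         [-0.3858, 0.0031, 0.9226]])  # stereoParams.RotationOfCamera2, as pasted
    R = R_matlab.T  # transpose before handing it to opencv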

    IntrinsicMatrix is the camera intrinsic matrix; use it transposed (the same goes for the second camera).
    The stereo rectification and ranging program

    const int imageWidth = 1280;                             // camera resolution  
    const int imageHeight = 960;
    Size imageSize = Size(imageWidth, imageHeight);
    
    Mat rgbImageL, grayImageL;
    Mat rgbImageR, grayImageR;
    Mat rectifyImageL, rectifyImageR;
    
    Rect validROIL;                    // rectification crops the image; validROI is the cropped region  
    Rect validROIR;
    
    Mat mapLx, mapLy, mapRx, mapRy;     // rectification maps  
    Mat Rl, Rr, Pl, Pr, Q;              // rectification rotations R, projection matrices P, reprojection matrix Q
    Mat xyz;                            // 3-D coordinates
    
    Point origin;                       // point where the mouse was pressed
    Rect selection;                     // rectangular selection
    bool selectObject = false;          // whether an object is being selected
    
    int blockSize = 0, uniquenessRatio = 0, numDisparities = 0;
    Ptr<StereoBM> bm = StereoBM::create(16, 9);
    
    Mat cameraMatrixL = (Mat_<double>(3, 3) << 2144.06346549771, 0.100643435575966, 646.043075091062,
        0, 2143.99089450596, 464.484668545036,
        0, 0, 1);
    // the left-camera intrinsic matrix from matlab
    Mat distCoeffL = (Mat_<double>(5, 1) << -0.104172724212748, 0.631426634642516, -0.000140339755577950, -2.84532653431087e-05, -2.98168604169012);
    // the left-camera distortion parameters from Matlab
    
    Mat cameraMatrixR = (Mat_<double>(3, 3) << 2138.90347672270, 0.0977396849787843, 616.264883906150,
        0, 2138.45223485518, 487.443540617963,
        0, 0, 1);
    // the right-camera intrinsic matrix from matlab
    
    Mat distCoeffR = (Mat_<double>(5, 1) << -0.107664131896022, 0.931919012650515, 0.000771517217186294, -0.00222846348625879, -6.73584559954540);
    // the right-camera distortion parameters from Matlab
    
    Mat T = (Mat_<double>(3, 1) << 217.566795926254, -1.47347980033928, 40.6236428586625);// T, the translation vector
                                                                                          // matlab's T parameter
    //Mat rec = (Mat_<double>(3, 1) << -0.00306, -0.03207, 0.00206);                      // rec rotation vector, matlab's om parameter
    Mat R = (Mat_<double>(3, 3) << 0.922572822248496, 0.00184459869643134, -0.385818590925962,
        -0.000707735126960054, 0.999994979714303, 0.00308863354582006,
        0.385822351295821, -0.00257643199782606, 0.922569496156644);                      // R, the rotation matrix
    
    
          /***** stereo matching *****/
    void stereo_match(int, void*)
    {
        bm->setBlockSize(2 * blockSize + 5);     // SAD window size; 5 to 21 works well
        bm->setROI1(validROIL);
        bm->setROI2(validROIR);
        bm->setPreFilterCap(31);
        bm->setMinDisparity(0);  // minimum disparity; default 0, may be negative, int
        bm->setNumDisparities(numDisparities * 16 + 16);// disparity range, i.e. max minus min disparity; must be a multiple of 16, int
        bm->setTextureThreshold(10);
        bm->setUniquenessRatio(uniquenessRatio);// uniquenessRatio mainly suppresses false matches
        bm->setSpeckleWindowSize(100);
        bm->setSpeckleRange(32);
        bm->setDisp12MaxDiff(-1);
        Mat disp, disp8;
        bm->compute(rectifyImageL, rectifyImageR, disp);// the input images must be grayscale
        disp.convertTo(disp8, CV_8U, 255 / ((numDisparities * 16 + 16)*16.));// the computed disparity is CV_16S
        reprojectImageTo3D(disp, xyz, Q, true); // multiply the X/W, Y/W, Z/W from reprojectImageTo3D by 16 (i.e. divide W by 16) to get correct 3-D coordinates
        xyz = xyz * 16;
        imshow("disparity", disp8);
    }
    
    /***** mouse callback *****/
    static void onMouse(int event, int x, int y, int, void*)
    {
    	if (selectObject)
    	{
    		selection.x = MIN(x, origin.x);
    		selection.y = MIN(y, origin.y);
    		selection.width = std::abs(x - origin.x);
    		selection.height = std::abs(y - origin.y);
    	}
    
    	switch (event)
    	{
        case EVENT_LBUTTONDOWN:   // left mouse button pressed
    		origin = Point(x, y);
    		selection = Rect(x, y, 0, 0);
    		selectObject = true;
    		cout << origin << "in world coordinate is: " << xyz.at<Vec3f>(origin) << endl;
    		break;
        case EVENT_LBUTTONUP:    // left mouse button released
    		selectObject = false;
    		if (selection.width > 0 && selection.height > 0)
    			break;
    	}
    }
    
    
    /***** main *****/
    int main()
    {
        /*
        stereo rectification
        */
        //Rodrigues(rec, R); // Rodrigues transform (R is already given above)
    	stereoRectify(cameraMatrixL, distCoeffL, cameraMatrixR, distCoeffR, imageSize, R, T, Rl, Rr, Pl, Pr, Q, CALIB_ZERO_DISPARITY,
    		0, imageSize, &validROIL, &validROIR);
        initUndistortRectifyMap(cameraMatrixL, distCoeffL, Rl, Pl, imageSize, CV_32FC1, mapLx, mapLy);  // Pl for the left camera
    	initUndistortRectifyMap(cameraMatrixR, distCoeffR, Rr, Pr, imageSize, CV_32FC1, mapRx, mapRy);
    
        /*
        read the images
        */
    	rgbImageL = imread("l1.bmp", CV_LOAD_IMAGE_COLOR);
    	cvtColor(rgbImageL, grayImageL, CV_BGR2GRAY);
    	rgbImageR = imread("r1.bmp", CV_LOAD_IMAGE_COLOR);
    	cvtColor(rgbImageR, grayImageR, CV_BGR2GRAY);
    
    	imshow("ImageL Before Rectify", grayImageL);
    	imshow("ImageR Before Rectify", grayImageR);
    
        /*
        after remap, the images of the two cameras are coplanar and row-aligned
        */
    	remap(grayImageL, rectifyImageL, mapLx, mapLy, INTER_LINEAR);
    	remap(grayImageR, rectifyImageR, mapRx, mapRy, INTER_LINEAR);
    
        /*
        display the rectification result
        */
        Mat rgbRectifyImageL, rgbRectifyImageR;
        cvtColor(rectifyImageL, rgbRectifyImageL, CV_GRAY2BGR);  // back to 3 channels so we can draw in color
    	cvtColor(rectifyImageR, rgbRectifyImageR, CV_GRAY2BGR);
    
        // show them individually
    	//rectangle(rgbRectifyImageL, validROIL, Scalar(0, 0, 255), 3, 8);
    	//rectangle(rgbRectifyImageR, validROIR, Scalar(0, 0, 255), 3, 8);
    	imshow("ImageL After Rectify", rgbRectifyImageL);
    	imshow("ImageR After Rectify", rgbRectifyImageR);
    
        // show both on one canvas
    	Mat canvas;
    	double sf;
    	int w, h;
    	sf = 600. / MAX(imageSize.width, imageSize.height);
    	w = cvRound(imageSize.width * sf);
    	h = cvRound(imageSize.height * sf);
        canvas.create(h, w * 2, CV_8UC3);   // note: 3 channels
    
        // draw the left image on the canvas
        Mat canvasPart = canvas(Rect(0, 0, w, h));                                     // take part of the canvas  
        resize(rgbRectifyImageL, canvasPart, canvasPart.size(), 0, 0, INTER_AREA);     // scale the image to the size of canvasPart  
        Rect vroiL(cvRound(validROIL.x*sf), cvRound(validROIL.y*sf),                   // the region that was kept    
            cvRound(validROIL.width*sf), cvRound(validROIL.height*sf));
        //rectangle(canvasPart, vroiL, Scalar(0, 0, 255), 3, 8);                       // draw a rectangle  
    	cout << "Painted ImageL" << endl;
    
        // draw the right image on the canvas
        canvasPart = canvas(Rect(w, 0, w, h));                                         // the other part of the canvas  
    	resize(rgbRectifyImageR, canvasPart, canvasPart.size(), 0, 0, INTER_LINEAR);
    	Rect vroiR(cvRound(validROIR.x * sf), cvRound(validROIR.y*sf),
    		cvRound(validROIR.width * sf), cvRound(validROIR.height * sf));
    	//rectangle(canvasPart, vroiR, Scalar(0, 0, 255), 3, 8);
    	cout << "Painted ImageR" << endl;
    
        // draw matching horizontal lines
    	for (int i = 0; i < canvas.rows; i += 16)
    		line(canvas, Point(0, i), Point(canvas.cols, i), Scalar(0, 255, 0), 1, 8);
    	imshow("rectified", canvas);
    
        /*
        stereo matching
        */
    	namedWindow("disparity", CV_WINDOW_AUTOSIZE);
        // trackbar for the SAD window size
    	createTrackbar("BlockSize:\n", "disparity", &blockSize, 8, stereo_match);
        // trackbar for the uniqueness ratio
    	createTrackbar("UniquenessRatio:\n", "disparity", &uniquenessRatio, 50, stereo_match);
        // trackbar for the number of disparities
    	createTrackbar("NumDisparities:\n", "disparity", &numDisparities, 16, stereo_match);
        // setMouseCallback(window name, callback, user data; usually 0)
    	setMouseCallback("disparity", onMouse, 0);
    	stereo_match(0, 0);
    
    	waitKey(0);
    	return 0;
    }
    
    
  • Article: 3-D Point Cloud ... The article (in English) explains the basics of stereo vision in detail: how to build a stereo rig from two ordinary webcams and how to compute object depth from the two cameras. The code is the article's companion code, for reference and study only.
  • Lately I've been quite interested in applications of stereo vision. Having already finished a stereo calibration, I can reuse that data for further experiments. Much of the code in these experiments is borrowed from the experts; the links are shared here:

    https://blog.csdn.net/weixin_39449570/article/details/79033314

    https://blog.csdn.net/Loser__Wang/article/details/52836042

    The opencv version used here is 2.4.13, whose API differs from 3.0 and later; for 2.4.13 see

    https://blog.csdn.net/chentravelling/article/details/53672578

    In 3.0 and later, StereoBM and friends are defined as abstract classes and cannot be instantiated directly; see

    https://blog.csdn.net/chentravelling/article/details/70254682

    Here is a brief sketch of the disparity principle:

    Let Tx be the distance between the two optical centers (the baseline), let P(Xc, Yc, Zc) be a point in the left camera's coordinate frame, let xl be the X coordinate of P on the left image plane and xr its X coordinate on the right image plane (assuming the cameras sit at the same height and share the same focal length f). The imaging model gives

        xl = f * Xc / Zc

    Expressing the right camera in the left camera's frame, P seen from camera 2 and converted back to camera 1 becomes P(Xc - Tx, Yc, Zc), so

        xr = f * (Xc - Tx) / Zc

    The disparity is

        d = xl - xr = f * Tx / Zc

    So in this simple model, knowing the disparity d is all it takes to recover the distance Zc = f * Tx / d.
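
    A toy numeric check of that formula (made-up but plausible numbers; f is the focal length in pixels, taken from the intrinsic matrix):

    # Zc = f * Tx / d
    f = 745.0    # focal length, pixels
    Tx = 60.0    # baseline, mm
    d = 15.0     # disparity, pixels
    print(f * Tx / d)  # 2980.0 mm, so the point is about 3 m away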

    Now for the code. Before using it you need to run the stereo calibration and obtain the cameras' intrinsic and extrinsic parameters:

    #include <stdio.h>  
    #include <iostream>
    #include "opencv2/calib3d/calib3d.hpp"  
    #include "opencv2/imgproc/imgproc.hpp"  
    #include "opencv2/highgui/highgui.hpp"  
    #include "opencv2/contrib/contrib.hpp" 
    
    using namespace std;
    using namespace cv;
    
    enum { STEREO_BM = 0, 
    	STEREO_SGBM = 1, 
    	STEREO_HH = 2, 
    	STEREO_VAR = 3 };
    
    Size imageSize = Size(640, 480);
    Point origin;       // point where the mouse was pressed
    Rect selection;     // rectangular selection
    bool selectObject = false;
    Mat xyz;              // 3-D coordinates
    
    static void saveXYZ(const char* filename, const Mat& mat)
    {
    	const double max_z = 1.0e4;
    	FILE* fp = fopen(filename, "wt");
    	for (int y = 0; y < mat.rows; y++)
    	{
    		for (int x = 0; x < mat.cols; x++)
    		{
    			Vec3f point = mat.at<Vec3f>(y, x);
    			if (fabs(point[2] - max_z) < FLT_EPSILON || fabs(point[2]) > max_z) continue;
    			fprintf(fp, "%f %f %f\n", point[0], point[1], point[2]);
    		}
    	}
    	fclose(fp);
    }
    
    /* colorize the depth map */
    void GenerateFalseMap(cv::Mat &src, cv::Mat &disp)
    {
    	// color map  
    	float max_val = 255.0f;
    	float map[8][4] = { { 0, 0, 0, 114 }, { 0, 0, 1, 185 }, { 1, 0, 0, 114 }, { 1, 0, 1, 174 },
    	{ 0, 1, 0, 114 }, { 0, 1, 1, 185 }, { 1, 1, 0, 114 }, { 1, 1, 1, 0 } };
    	float sum = 0;
    	for (int i = 0; i < 8; i++)
    		sum += map[i][3];
    
    	float weights[8]; // relative   weights  
    	float cumsum[8];  // cumulative weights  
    	cumsum[0] = 0;
    	for (int i = 0; i < 7; i++) {
    		weights[i] = sum / map[i][3];
    		cumsum[i + 1] = cumsum[i] + map[i][3] / sum;
    	}
    
    	int height_ = src.rows;
    	int width_ = src.cols;
    	// for all pixels do  
    	for (int v = 0; v < height_; v++) {
    		for (int u = 0; u < width_; u++) {
    
    			// get normalized value  
    			float val = std::min(std::max(src.data[v*width_ + u] / max_val, 0.0f), 1.0f);
    
    			// find bin  
    			int i;
    			for (i = 0; i < 7; i++)
    				if (val < cumsum[i + 1])
    					break;
    
    			// compute red/green/blue values  
    			float   w = 1.0 - (val - cumsum[i])*weights[i];
    			uchar r = (uchar)((w*map[i][0] + (1.0 - w)*map[i + 1][0]) * 255.0);
    			uchar g = (uchar)((w*map[i][1] + (1.0 - w)*map[i + 1][1]) * 255.0);
    			uchar b = (uchar)((w*map[i][2] + (1.0 - w)*map[i + 1][2]) * 255.0);
                // r, g, b stored contiguously in memory  
    			disp.data[v*width_ * 3 + 3 * u + 0] = b;
    			disp.data[v*width_ * 3 + 3 * u + 1] = g;
    			disp.data[v*width_ * 3 + 3 * u + 2] = r;
    		}
    	}
    }
    
    /***** mouse callback *****/
    static void onMouse(int event, int x, int y, int, void*)
    {
    	if (selectObject)
    	{
    		selection.x = MIN(x, origin.x);
    		selection.y = MIN(y, origin.y);
    		selection.width = std::abs(x - origin.x);
    		selection.height = std::abs(y - origin.y);
    	}
    
    	switch (event)
    	{
        case EVENT_LBUTTONDOWN:   // left mouse button pressed
    		origin = Point(x, y);
    		selection = Rect(x, y, 0, 0);
    		selectObject = true;
    		cout << origin << "in world coordinate is: " << xyz.at<Vec3f>(origin) << endl;
    		break;
        case EVENT_LBUTTONUP:    // left mouse button released
    		selectObject = false;
    		if (selection.width > 0 && selection.height > 0)
    			break;
    	}
    }
    
    void readCamParam(string& filename, Mat& camL_Matrix, Mat& camL_distcoeff, 
    				Mat& camR_Matrix, Mat& camR_distcoeff,Mat& R,Mat& T)
    {
    	FileStorage fs(filename, FileStorage::READ);
    	if (!fs.isOpened())
    	{
    		cout << "there is not the param file!" << endl;
    	}
    	if (fs.isOpened())
    	{
    		camL_Matrix = Mat(3, 3, CV_64F);
    		fs["cam1_Matrix"] >> camL_Matrix;
    		camL_distcoeff = Mat(3, 1, CV_64F);
    		fs["cam1_distcoeff"] >> camL_distcoeff;
    
    		camR_Matrix = Mat(3, 3, CV_64F);
    		fs["cam2_Matrix"] >> camR_Matrix;
    		camR_distcoeff = Mat(3, 1, CV_64F);
    		fs["cam2_distcoeff"] >> camR_distcoeff;
    
    		R = Mat(3, 3, CV_64F);
    		fs["R"] >> R;
    		T = Mat(3, 1, CV_64F);
    		fs["T"] >> T;
    	}
    }
    
    void images2one(Size& imageSize, Mat& rectifyImageL, Rect& validROIL,
    			Mat& rectifyImageR, Rect& validROIR)
    {
        // show both on one canvas
        Mat canvas;
        double sf;
        int w, h;
        sf = 700. / MAX(imageSize.width, imageSize.height);
        w = cvRound(imageSize.width * sf);         // cvRound rounds a double to an int
        h = cvRound(imageSize.height * sf);
        canvas.create(h, w * 2, CV_8UC3);   // note: 3 channels
    
        // draw the left image on the canvas
        Mat canvasPart = canvas(Rect(0, 0, w, h));                                  // take part of the canvas  
        resize(rectifyImageL, canvasPart, canvasPart.size(), 0, 0, INTER_AREA);     // scale the image to the size of canvasPart  
        Rect vroiL(cvRound(validROIL.x*sf), cvRound(validROIL.y*sf),                // the region that was kept    
            cvRound(validROIL.width*sf), cvRound(validROIL.height*sf));
        rectangle(canvasPart, vroiL, Scalar(0, 0, 255), 3, 8);                      // draw a rectangle  
        cout << "Painted ImageL" << endl;
    
        // draw the right image on the canvas
        canvasPart = canvas(Rect(w, 0, w, h));                                      // the other part of the canvas  
        resize(rectifyImageR, canvasPart, canvasPart.size(), 0, 0, INTER_LINEAR);
        Rect vroiR(cvRound(validROIR.x * sf), cvRound(validROIR.y*sf),
            cvRound(validROIR.width * sf), cvRound(validROIR.height * sf));
        rectangle(canvasPart, vroiR, Scalar(0, 0, 255), 3, 8);
        cout << "Painted ImageR" << endl;
    
        // draw matching horizontal lines
        for (int i = 0; i < canvas.rows; i += 16)
            line(canvas, Point(0, i), Point(canvas.cols, i), Scalar(0, 255, 0), 1, 8);
        imshow("rectified", canvas);
    }
    
    void stereoSGBM_match(int alg, Mat& imgL, Mat& imgR, Mat& disp8, Mat& dispf)
    {
    	int SADWindowSize = 0, numberOfDisparities = 0;
    	float scale = 1.f;
    
    	StereoBM bm;
    	StereoSGBM sgbm;
    	StereoVar var;
    
    
    	Size img_size = imgL.size();
    
    	Rect roi1, roi2;
    
    	numberOfDisparities = numberOfDisparities > 0 ? numberOfDisparities : ((img_size.width / 8) + 15) & -16;
    
    	bm.state->roi1 = roi1;
    	bm.state->roi2 = roi2;
    	bm.state->preFilterCap = 31;
    	bm.state->SADWindowSize = SADWindowSize > 0 ? SADWindowSize : 9;
    	bm.state->minDisparity = 0;
    	bm.state->numberOfDisparities = numberOfDisparities;
    	bm.state->textureThreshold = 10;
    	bm.state->uniquenessRatio = 15;
    	bm.state->speckleWindowSize = 100;
    	bm.state->speckleRange = 32;
    	bm.state->disp12MaxDiff = 1;
    
    	sgbm.preFilterCap = 63;
    	sgbm.SADWindowSize = SADWindowSize > 0 ? SADWindowSize : 3;
    
    	int cn = imgL.channels();
    
    	sgbm.P1 = 8 * cn*sgbm.SADWindowSize*sgbm.SADWindowSize;
    	sgbm.P2 = 32 * cn*sgbm.SADWindowSize*sgbm.SADWindowSize;
    	sgbm.minDisparity = 0;
    	sgbm.numberOfDisparities = numberOfDisparities;
    	sgbm.uniquenessRatio = 10;
    	sgbm.speckleWindowSize = bm.state->speckleWindowSize;
    	sgbm.speckleRange = bm.state->speckleRange;
    	sgbm.disp12MaxDiff = 1;
    	sgbm.fullDP = alg == STEREO_HH;
    
    	var.levels = 3;                                 // ignored with USE_AUTO_PARAMS  
    	var.pyrScale = 0.5;                             // ignored with USE_AUTO_PARAMS  
    	var.nIt = 25;
    	var.minDisp = -numberOfDisparities;
    	var.maxDisp = 0;
    	var.poly_n = 3;
    	var.poly_sigma = 0.0;
    	var.fi = 15.0f;
    	var.lambda = 0.03f;
    	var.penalization = var.PENALIZATION_TICHONOV;   // ignored with USE_AUTO_PARAMS  
    	var.cycle = var.CYCLE_V;                        // ignored with USE_AUTO_PARAMS  
    	var.flags = var.USE_SMART_ID | var.USE_AUTO_PARAMS | var.USE_INITIAL_DISPARITY | var.USE_MEDIAN_FILTERING;
    
    	Mat disp ;
        // pad the sides so the disparity map has no black band
    	Mat img1p, img2p;
    	copyMakeBorder(imgL, img1p, 0, 0, numberOfDisparities, 0, IPL_BORDER_REPLICATE);
    	copyMakeBorder(imgR, img2p, 0, 0, numberOfDisparities, 0, IPL_BORDER_REPLICATE);
    	imshow("img1p", img1p);
    	imshow("img2p", img2p);
    
    	int64 t = getTickCount();
    	if (alg == STEREO_BM)
    		bm(imgL, imgR, disp);
    	else if (alg == STEREO_VAR) {
    		var(imgL, imgR, disp);
    	}
    	else if (alg == STEREO_SGBM || alg == STEREO_HH)
    		sgbm(imgL, imgR, disp);//------  
    
    	t = getTickCount() - t;
    	printf("Time elapsed: %fms\n", t * 1000 / getTickFrequency());
    
    	dispf = disp.colRange(numberOfDisparities, img2p.cols - numberOfDisparities);
    
    	if (alg != STEREO_VAR)
    		dispf.convertTo(disp8, CV_8U, 255 / (numberOfDisparities*16.));
    	else
    		dispf.convertTo(disp8, CV_8U);
    }
    
    int main()
    {
    	int alg = STEREO_SGBM;
    	int color_mode = alg == STEREO_BM ? 0 : -1;
    
    	string filename = "calibrateResult.xml";
    	Mat camL_Matrix, camR_Matrix, camL_distcoeff, camR_distcoeff, R, T;
        Mat Rl, Rr, Pl,Pr,Q;       // rectification rotations R, projection matrices P, reprojection matrix Q
    	Rect validROIL, validROIR;
    	readCamParam(filename, camL_Matrix, camL_distcoeff, camR_Matrix, camR_distcoeff, R, T);
    	stereoRectify(camL_Matrix, camL_distcoeff, camR_Matrix, camR_distcoeff, imageSize, R, T, Rl, Rr, Pl, Pr, Q, CALIB_ZERO_DISPARITY,
    		0, imageSize, &validROIL, &validROIR);
        Mat mapLx, mapLy, mapRx, mapRy;     // rectification maps
    	initUndistortRectifyMap(camL_Matrix, camL_distcoeff, Rl, Pl, imageSize, CV_16SC2, mapLx, mapLy);
    	initUndistortRectifyMap(camR_Matrix, camR_distcoeff, Rr, Pr, imageSize, CV_16SC2, mapRx, mapRy);
    
    	Mat imgL = imread("L.bmp", color_mode);
    	Mat imgR = imread("R.bmp", color_mode);
    
    	Mat rectifyImageL, rectifyImageR;
    	remap(imgL, rectifyImageL, mapLx, mapLy, INTER_LINEAR);
    	remap(imgR, rectifyImageR, mapRx, mapRy, INTER_LINEAR);
    	imshow("rectifyImageL", rectifyImageL);
    	imshow("rectifyImageR", rectifyImageR);
    
        // show both on one canvas
    	images2one(imageSize, rectifyImageL, validROIL, rectifyImageR, validROIR);
    
    	namedWindow("color_disparity", CV_WINDOW_NORMAL);
    	setMouseCallback("color_disparity", onMouse, 0);
    
    
    	bool no_display = false;
    	Mat disp8,dispf;
    	stereoSGBM_match(alg, imgL, imgR, disp8, dispf);
    	
    
        reprojectImageTo3D(dispf, xyz, Q, true);    // multiply the X/W, Y/W, Z/W from reprojectImageTo3D by 16 (i.e. divide W by 16) to get correct 3-D coordinates
    	xyz = xyz * 16;
    	if (!no_display)
    	{
    		namedWindow("left", 1);
    		imshow("left", imgL);
    
    		namedWindow("right", 1);
    		imshow("right", imgR);
    
    		namedWindow("disparity", 0);
    		imshow("disparity", disp8);
    
    		Mat color(dispf.size(), CV_8UC3);
            GenerateFalseMap(disp8, color);// convert to false color
    		imshow("color_disparity", color);
    
    		waitKey(500);
    		printf("press any key to continue...");
    		fflush(stdout);
    		waitKey();
    		printf("\n");
    	}	
    	saveXYZ("xyz.xls", xyz);
    	return 0;
    }
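
    readCamParam above expects a calibrateResult.xml written by an earlier calibration run, with the keys cam1_Matrix, cam1_distcoeff, cam2_Matrix, cam2_distcoeff, R and T. A minimal Python sketch (dummy values standing in for real calibration results) of writing a file in that layout with OpenCV's FileStorage:

    import cv2
    import numpy as np

    # dummy calibration results; replace with your real ones
    K1 = np.eye(3); d1 = np.zeros((5, 1))
    K2 = np.eye(3); d2 = np.zeros((5, 1))
    R = np.eye(3);  T = np.array([[60.0], [0.0], [0.0]])

    fs = cv2.FileStorage("calibrateResult.xml", cv2.FILE_STORAGE_WRITE)
    fs.write("cam1_Matrix", K1)
    fs.write("cam1_distcoeff", d1)
    fs.write("cam2_Matrix", K2)
    fs.write("cam2_distcoeff", d2)
    fs.write("R", R)
    fs.write("T", T)
    fs.release()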

    Link to the code:

    https://download.csdn.net/download/logan_lin/10345775

  • This is my course-project report; its main content is the theory and code of binocular stereo ranging, along with result screenshots.
  • SIFT stereo ranging

    2018-08-27 15:33:56
    (C++ code) Binocular stereo ranging implemented with the SIFT algorithm; best run under an opencv2 version.
  • Python code for image matching and ranging based on binocular stereo vision; it runs.
  • Reposted from Joe_quan ... Contents: notes; the principle of stereo ranging; how opencv implements it; notes on the ranging code; the code and its implementation; what comes next. 1 Notes: summarizing what I have been working on lately, stereo...
  • Stereo vision and point-cloud principles

    2019-05-07 20:57:05
    Stereo vision and point-cloud principles: http://www.elecfans.com/d/863829.html Monocular vision ranging code: https://www.cnblogs.com/fpzs/p/9513932.html
  • Evision stereo vision: some notes on stereo vision, covering the camera model, calibration, disparity algorithms (stereo matching), and reprojection (measurement, 3-D reconstruction, reprojection constraints), with an example reconstruction program. The program is "OpenCV-based stereo ranging"; its main code comes from 邹宇华's OpenCV examples, and I only...
  • Current progress: stereo ranging runs on a PC, with acceptable accuracy at close range; next the code needs to be ported to a Raspberry Pi 4 board, where the Pi...
  • Original: ... a comparison of three matching algorithms. BM algorithm, its code: CvStereoBMState *BMState = cvCreateStereoBMState(); int SADWindowSize=15;
  • Most of this framework was finished about a week ago; now I have made some more changes for better experiments, and they are probably not done yet. At the moment I have a problem: I stare blankly at the code and cannot get into it, so I am writing this while... Stereo vision is generally used for ranging, and what you would use is a...
  • Contents: notes; the stereo ranging code from my earlier article; a better-performing stereo vision code; implementing it (1. calibration, 2. ranging); some problems and their fixes; closing remarks. 1 Notes: I wrote an article before, "完全基于opencv的双目景深..."
  • Notes on the stereo ranging code; the code and its implementation; what comes next. 1 Notes: so I do not forget later, here is a summary of what I have been working on lately: stereo vision. The theory is all over the web; I will only record my own understanding briefly. The actual implementation mainly...
