  • ORBSLAM2_Dynamic For robust localization in dynamic environments. 1. References ORB-SLAM2 DS-SLAM darknet-YOLOv4 2. Prerequisites The same as ORB-SLAM2. The dynamic library libdarknet.so is ...
  • OrbSlam2-reconstruction: OrbSlam2 with pmvs and reconstruction added
  • After building, running ./Examples/Monocular/mono_kitti ./Vocabulary/ORBvoc.txt ./Examples/Monocular/KITTI00-02.yaml dataset/2011_09_26_drive_0029_sync/image_03/data/ gives no response
  • orbslam2 with LK optical flow This project is modified from orbslam2. All dependencies are consistent with orbslam2, but it only supports RGB-D cameras for now. This project: 1. improves the speed of orbslam2 2...
  • ORB SLAM

    2017-07-03 19:34:05

    ORB-SLAM (Part 1): Introduction

    ORB-SLAM is a 3D localization and mapping (SLAM) algorithm based on ORB features [1]. It was published by Raul Mur-Artal, J. M. M. Montiel, and Juan D. Tardos in the IEEE Transactions on Robotics in 2015. ORB-SLAM builds on the PTAM architecture, adds map initialization and loop-closure detection, and improves keyframe selection and map construction, achieving good results in processing speed, tracking quality, and map accuracy. Note that the map ORB-SLAM builds is sparse.

    ORB-SLAM originally targeted a monocular camera and was later extended to stereo and RGB-D sensors. The authors also appear to be working on a semi-dense mapping extension. Their open-source code is on GitHub [2].

    A distinctive feature of ORB-SLAM is that it uses ORB image features uniformly in every stage. ORB is a very fast feature extraction method that is rotation invariant and gains scale invariance from an image pyramid. Using the same ORB features throughout gives the SLAM pipeline an inherent consistency across feature extraction and tracking, keyframe selection, 3D reconstruction, and loop-closure detection.
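
    For intuition, here is a minimal sketch of extracting ORB features with OpenCV (a generic OpenCV 3.x example, not ORB-SLAM's own ORBextractor, which additionally spreads keypoints over the image with a quadtree on every pyramid level):

    #include <iostream>
    #include <vector>
    #include <opencv2/core/core.hpp>
    #include <opencv2/features2d/features2d.hpp>
    #include <opencv2/highgui/highgui.hpp>

    int main(int argc, char **argv)
    {
        cv::Mat im = cv::imread(argv[1], cv::IMREAD_GRAYSCALE);
        if (im.empty()) { std::cerr << "cannot read " << argv[1] << std::endl; return 1; }

        // 1000 features, pyramid scale factor 1.2, 8 levels: rotation invariance comes
        // from the oriented FAST keypoints, scale invariance from the pyramid.
        cv::Ptr<cv::ORB> orb = cv::ORB::create(1000, 1.2f, 8);

        std::vector<cv::KeyPoint> kps;
        cv::Mat desc;                    // one 32-byte binary descriptor per keypoint
        orb->detectAndCompute(im, cv::noArray(), kps, desc);

        std::cout << kps.size() << " ORB keypoints" << std::endl;
        return 0;
    }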

    The ORB-SLAM architecture is as follows:

    ORB-SLAM runs three threads: tracking, local mapping, and loop closing. (A minimal sketch of how these are driven through the ORB_SLAM2 API follows the three lists below.)

    I. Tracking

    1. ORB feature extraction
    2. Initial pose estimation (from a motion/velocity model)
    3. Pose refinement (track local map: use neighboring map points to find more feature matches and optimize the pose)
    4. Keyframe selection

    II. Local mapping

    1. Keyframe insertion (update the covisibility graph and related structures)
    2. Recent map-point culling (remove outliers)
    3. New map-point creation (triangulation)
    4. Local bundle adjustment (over the new keyframe and its neighbors; remove outliers)
    5. Local keyframe culling (remove redundant keyframes)

    III. Loop closing

    1. Loop-candidate selection (bag of words)
    2. Loop verification (compute a similarity transformation, 3D<->3D; because of scale drift it is a similarity rather than a rigid transform, and RANSAC counts the inliers)
    3. Fuse map points and update the covisibility graph
    4. Pose-graph optimization (propagate the corrected transforms), then update all map points
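
    As referenced above, here is a minimal sketch of how these three threads are driven through the open-source ORB_SLAM2 API (a rough sketch, not code from the paper; the vocabulary, settings paths, and camera source are placeholders). Constructing ORB_SLAM2::System starts the local mapping, loop closing, and viewer threads, while each TrackMonocular call runs one tracking iteration in the caller's thread:

    #include <opencv2/core/core.hpp>
    #include <opencv2/highgui/highgui.hpp>
    #include <System.h>

    int main()
    {
        // Loads the ORB vocabulary and spawns the LocalMapping, LoopClosing and Viewer threads.
        ORB_SLAM2::System SLAM("Vocabulary/ORBvoc.txt",
                               "Examples/Monocular/TUM1.yaml",
                               ORB_SLAM2::System::MONOCULAR, true);

        cv::VideoCapture cap(0);          // placeholder monocular image source
        cv::Mat im;
        for (double t = 0.0; cap.read(im); t += 1.0 / 30.0)
            SLAM.TrackMonocular(im, t);   // tracking: pose estimation and keyframe decisions

        SLAM.Shutdown();                  // stop local mapping, loop closing and the viewer
        SLAM.SaveKeyFrameTrajectoryTUM("KeyFrameTrajectory.txt");
        return 0;
    }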

    The authors report timing statistics for ORB-SLAM on the New College dataset [3], summarized below.

    1. Tracking takes about 30 ms per frame on average, essentially reaching 30 fps. Feature extraction is very fast, around 11 ms on average, which suits real-time SLAM well. Pose estimation is somewhat more expensive at about 20 ms on average, with the pose optimization alone taking about 16 ms.

    2. Local mapping takes about 385 ms per keyframe on average, of which creating new map points takes about 70 ms and local BA about 300 ms, which is relatively costly. Whether these two parts can be optimized further is an open question.

    [1] ORB-SLAM: A Versatile and Accurate Monocular SLAM System

    [2] https://github.com/raulmur/ORB_SLAM2

    [3] http://www.robots.ox.ac.uk/NewCollegeData/

  • ORBSLAM24Windows: ORBSLAM2 Project 4(for) Windows Platform

    2019-04-24 23:20:14

    ORBSLAM24Windows
    ORBSLAM2 Project 4(for) Windows Platform

    orbslam2 built easily with Visual Studio on Windows, in both debug and release mode

    Project Page: ORBSLAM24Windows

    Thanks

    Prerequisite

    1. OpenCV
    • The exact version is not critical, but it should not be too old. This tutorial uses 2.4.13.
    • Add YOUR_OWN_PATH\opencv\build; YOUR_OWN_PATH\opencv\build\x64\vc12\bin; to your environment variable "PATH"; you can also add YOUR_OWN_PATH\opencv\build\x86\vc12\bin; if you want to build an x86 application.
    2. CMake
    • The version should be at least 2.8.
    3. Visual Studio
    • This tutorial uses VS2013 (corresponding to OpenCV's vc12).

    So we'll generate a Visual Studio 2013 project of ORB_SLAM2 with CMake and then build an x64 app.

    Steps

    First, we'll compile the projects in the Thirdparty folder.

    DBoW2

    1. Open cmake-gui, select the DBow2 folder as the source path and the DBow2/build folder as the binaries path.
    2. Click Configure, select Visual Studio 12 2013 Win64 (or your own) as the generator, and click Finish.
    3. After the configuration is done, click Generate.
    4. Go to the DBow2/build folder and double-click DBoW2.sln to open the project.
    5. Build ALL_BUILD in either debug or release mode, whichever you want.
    6. After a successful build, the libraries will be in the lib folder of the DBow2 project source folder.

    eigen

    eigen does not need to be built

    g2o

    1. Open cmake-gui, select the g2o folder as the source path and the g2o/build folder as the binaries path.
    2. Click Configure, select Visual Studio 12 2013 Win64 (or your own) as the generator, and click Finish.
    3. After the configuration is done, click Generate.
    4. Go to the g2o/build folder and double-click g2o.sln to open the project.
    5. Right-click the g2o project -> Properties -> C/C++ -> Preprocessor Definitions, add WINDOWS on the last row, then click Apply and OK.
    6. Build ALL_BUILD in either debug or release mode, whichever you want. (Remember to repeat step 5; the mode should be the same as for DBoW2.)
    7. After a successful build, the libraries will be in the lib folder of the g2o project source folder.

    Pangolin

    1. Open cmake-gui, select the Pangolin folder as the source path and the Pangolin/build folder as the binaries path.
    2. Click Configure, select Visual Studio 12 2013 Win64 (or your own) as the generator, and click Finish.
    3. After the configuration is done, click Generate.
    4. Go to the Pangolin/build folder and double-click Pangolin.sln to open the project.
    5. Build ALL_BUILD in either debug or release mode, whichever you want. (The mode should be the same as for DBoW2 and g2o.)
    6. You'll get an error "cannot open input file 'pthread.lib'"; just ignore it.
    7. After a successful build, the libraries will be in the lib folder of the Pangolin project source folder.

    ORBSLAM24Windows

    1. Open cmake-gui, select the ORBSLAM24Windows folder as the source path and the ORBSLAM24Windows/build folder as the binaries path.
    2. Click Configure, select Visual Studio 12 2013 Win64 (or your own) as the generator, and click Finish.
    3. After the configuration is done, click Generate.
    4. Go to the ORBSLAM24Windows/build folder and double-click ORB_SLAM2.sln to open the project.
    5. Choose either debug or release mode, whichever you want. (The mode should be the same as for DBoW2, g2o, and Pangolin.)
    6. Right-click the ORB_SLAM2 project and build it.
    7. After a successful build, the libraries will be in the lib folder of the ORB_SLAM2 project source folder.

    Applications

    If you want to make apps, you can also build the provided mono/stereo/RGB-D example projects.

    Take the mono_tum app as an example; you can follow the steps below.

    1. Go to the ORBSLAM24Windows/build folder and double-click ORB_SLAM2.sln to open the project.
    2. Choose either debug or release mode, whichever you want. (The build mode should be the same as for DBoW2, g2o, Pangolin, and ORB_SLAM2.)
    3. Right-click the mono_tum project and build it.
    4. Download a TUM dataset sequence, for example freiburg2_desk.
    5. Right-click the mono_tum project, open Properties -> Configuration Properties -> Debugging, and enter the three command arguments (usage: ./mono_tum path_to_vocabulary path_to_settings path_to_sequence; the program name itself is not entered on Windows). An example argument line is shown after this list.
    • path_to_vocabulary: in the ORBSLAM24Windows/Vocabulary folder; unpack the tar to get a .txt file
    • path_to_settings: in the ORBSLAM24Windows/Examples/Monocular folder; rgbd_dataset_freiburg2_desk corresponds to TUM2.yaml
    • path_to_sequence: the path to the rgbd_dataset_freiburg2_desk folder
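
    For example (hypothetical paths; adjust them to wherever you unpacked the vocabulary and the dataset), the Debugging command arguments could look like:

    D:\ORBSLAM24Windows\Vocabulary\ORBvoc.txt D:\ORBSLAM24Windows\Examples\Monocular\TUM2.yaml D:\datasets\rgbd_dataset_freiburg2_desk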

    Run the app; it will take a few minutes to load the vocabulary, and then you'll get the result.

    If you are not satisfied with the vocabulary loading speed, you can refer to the vocabulary convert issue and convert the txt vocabulary to a binary one, which speeds it up a lot.

    The picture below shows the result on the mono TUM dataset (screenshot: ORBSLAM2_monoTUM).

  • I have used Dr. Gao Xiang's ORBSLAM2_with_pointcloud_map and it works well, and I now want to do 3D reconstruction with orb_slam3. I would like to build an ORBSLAM3_with_pointcloud_map, and am looking for a tutorial on how to turn ORBSLAM2 into ORBSLAM2_with_pointcloud_map.
  • Contents: 1. Environment; 2. Installing the dependencies; 3. Building ORBSLAM; 4. Running an example. 1. Environment: OS Ubuntu 18.04, ROS Melodic, a Dell laptop, opencv, python 2.7. 2. Installing the dependencies: (1) install Eigen: sudo apt-...

    1. Environment

    OS: Ubuntu 18.04
    Main library versions:
    opencv: 3.2.0
    python: 2.7
    eigen3: 3.3.4

    To check a library's version, taking Python as an example:

    pkg-config --modversion python
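    # Similarly (assuming the corresponding pkg-config .pc files are installed):
    pkg-config --modversion opencv
    pkg-config --modversion eigen3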
    

    2. Installing the dependencies

    (1) Install the basic tools: cmake, git, gcc, and g++.

    sudo apt-get install cmake
    sudo apt-get install git
    sudo apt-get install gcc g++  
    

    (2) Install Pangolin.
    Before installing Pangolin, install its dependencies.

    sudo apt-get install libglew-dev
    sudo apt-get install libpython2.7-dev
    

    Clone Pangolin:

    git clone https://github.com/stevenlovegrove/Pangolin.git
    

    After downloading, enter the Pangolin folder and configure the build:

    cd Pangolin
    mkdir build
    cd build
    cmake ..
    

    Build and install:

    sudo make
    sudo make install
    

    (3) Install Eigen and OpenCV (on Ubuntu 18.04 the OpenCV development package is libopencv-dev, which provides OpenCV 3.2.0):

    sudo apt-get install libeigen3-dev
    sudo apt-get install libopencv-dev
    

    3. Building ORBSLAM

    cd ~/catkin_ws/src/
    git clone https://github.com/raulmur/ORB_SLAM2.git ORB_SLAM2
    cd ORB_SLAM2
    chmod +x build.sh
    

    Finally, build by running the command below. Note that if the build hangs, change make -j in build.sh to make (a weak machine, like mine, can't handle the parallel build, haha):

      ./build.sh
    

    (2) An error about usleep(5000);
    Solution:
    Add the following header at the top of the corresponding System.cc file:

    #include<unistd.h>
    

    The other files that also need unistd.h added are:

    Examples/Monocular/mono_euroc.cc
    Examples/Monocular/mono_kitti.cc
    Examples/Monocular/mono_tum.cc

    Examples/RGB-D/rgbd_tum.cc
    Examples/Stereo/stereo_euroc.cc
    Examples/Stereo/stereo_kitti.cc

    src/LocalMapping.cc
    src/LoopClosing.cc
    src/System.cc
    src/Tracking.cc
    src/Viewer.cc

    (3) If there are still errors, delete the build directories created by the previous compilation and rerun ./build.sh; that made my build succeed.
    There are three build directories to delete: under ORB_SLAM2, under g2o, and under DBoW2, i.e. the build directories created by build.sh; remember to delete them before recompiling. You can open build.sh to see what it does.

    Finally the build reaches 100% (build output screenshot omitted).

    4. Running an example

    (1) Taking the monocular case as an example:

    ./Examples/Monocular/mono_tum Vocabulary/ORBvoc.txt Examples/Monocular/TUMX.yaml PATH_TO_SEQUENCE_FOLDER
    

    (2) Adjust the command according to the dataset you downloaded and your own paths. TUMX.yaml corresponds to freiburgX; change X to 1, 2, or 3 depending on your dataset. PATH_TO_SEQUENCE_FOLDER is the path to your dataset; you can open the folder in a terminal and check the path with pwd. For example:

    ./Examples/Monocular/mono_tum Vocabulary/ORBvoc.txt Examples/Monocular/TUM1.yaml /home/xue/Downloads/datasets/rgbd_dataset_freiburg1_xyz
    

    Screenshot while running (image omitted).
    There are bound to be oversights; if you run into other problems, you are welcome to discuss them.
    Other reference material is collected below; thanks to the authors of the linked articles.

    Official site
    ORBSLAM paper translation
    ORBSLAM2 paper translation
    ORBSLAM2 detailed code annotations
    Ubuntu 18.04 ORBSLAM2 installation reference
    Source code walkthrough


  • Stereo camera, orbslam2, orbslam, orb_slam2, wide-angle stereo camera, wide-angle stereo camera parameters, running orbslam with a camera, stereo camera slam, orbslam configuration, orbslam parameters, orb_slam2 parameters. Why write this article? Even a pure SLAM beginner can run ...

    Keywords

    Stereo camera, orbslam2, orbslam, orb_slam2, wide-angle stereo camera, wide-angle stereo camera parameters, running orbslam with a camera, binocular camera slam, stereo camera slam,
    orbslam configuration, orbslam parameters, orb_slam2 parameters

    Why write this article?
    Even a pure SLAM beginner can easily get it running with their own stereo camera! Most blog posts either target high-end hardware such as the MYNT EYE camera and are still not very clear, or only ever run datasets, or simply cannot be made to work at all and mislead newcomers.
    Shilitaoyuan (十里桃园) wrote this post with the goal that anyone who can operate a computer can run orbslam2.

    What does this article cover?
    Following Shilitaoyuan's previous post, "Stereo camera calibration and orbslam2 stereo parameters explained", it shows how a beginner can get orbslam2 stereo running even with a very poor camera.

    Environment?
    Non-ROS, plain Linux ORBSLAM2; yes, the one you download from GitHub and build with ./build.sh, at https://github.com/raulmur/ORB_SLAM2

    Code and usage
    The code is at the end of the article. Replace the contents of the stereo_euroc.cc file in the source tree ( https://github.com/raulmur/ORB_SLAM2 ) with the code given here, save it, and rebuild with ./build.sh. (For stereo cameras that expose only a single device ID, see the sketch after the listing.)

    Do you need a MYNT EYE camera?
    No; two cameras you glued together yourself will run just fine. (photo omitted)

    How to run it?
    After replacing the file, build orbslam2 following the official steps. Once it is built, run the command below. Shilitaoyuan uses Linux; Windows was not tested but should presumably be similar. ROS is not used:
    ./Examples/Stereo/stereo_euroc Vocabulary/ORBvoc.txt Examples/Stereo/newbot.yaml

    What is newbot.yaml?
    It is the configuration file you prepared following Shilitaoyuan's previous post, "Stereo camera calibration and orbslam2 stereo parameters explained"; name it newbot.yaml and put it in the Examples/Stereo/ folder.

    Running result
    After it starts there is Shilitaoyuan's own frame counter; if it annoys you, go into the code and remove the shilitaoyuan line. If the frame count keeps scrolling, it is running normally. (screenshot omitted)

    The classic orbslam result view, which of course has to be shown: (screenshot omitted)
    How well does it run?
    With correct parameters it is quite stable and runs along smoothly (on a laptop, set the power options to maximum performance, otherwise it will lag when not plugged in and orbslam will compute very slowly).
    Going around a table and closing the loop, Shilitaoyuan jogged through the turns and tracking was never lost. Nice: (screenshot omitted)

    So after all this, where is the code?
    Right below. Replace the contents of the stereo_euroc.cc file in the source tree, save, and rebuild. Remember to adapt the camera-reading part of the code to your own cameras; I would have liked to tune it all for you, but every camera is a little different:

    /**
    * This file is part of ORB-SLAM2.
    *
    * Copyright (C) 2014-2016 Raúl Mur-Artal <raulmur at unizar dot es> (University of Zaragoza)
    * For more information see <https://github.com/raulmur/ORB_SLAM2>
    *
    * ORB-SLAM2 is free software: you can redistribute it and/or modify
    * it under the terms of the GNU General Public License as published by
    * the Free Software Foundation, either version 3 of the License, or
    * (at your option) any later version.
    *
    * ORB-SLAM2 is distributed in the hope that it will be useful,
    * but WITHOUT ANY WARRANTY; without even the implied warranty of
    * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
    * GNU General Public License for more details.
    *
    * You should have received a copy of the GNU General Public License
    * along with ORB-SLAM2. If not, see <http://www.gnu.org/licenses/>.
    */
    
    
    #include<iostream>
    #include<algorithm>
    #include<fstream>
    #include<iomanip>
    #include<chrono>
    
    #include<opencv2/core/core.hpp>
    
    #include<System.h>
    using namespace std::chrono;
    using namespace std;
    
    
    
    int main(int argc, char **argv)
    {
    
    
        // Retrieve paths to images
        vector<string> vstrImageLeft;
        vector<string> vstrImageRight;
        vector<double> vTimeStamp;
        //LoadImages(string(argv[3]), string(argv[4]), string(argv[5]), vstrImageLeft, vstrImageRight, vTimeStamp);
    
        //if(vstrImageLeft.empty() || vstrImageRight.empty())
       // {
          //  cerr << "ERROR: No images in provided path." << endl;
           // return 1;
        //}
    
       // if(vstrImageLeft.size()!=vstrImageRight.size())
       // {
         //   cerr << "ERROR: Different number of left and right images." << endl;
       //     return 1;
       // }
    
        // Read rectification parameters
        cv::FileStorage fsSettings(argv[2], cv::FileStorage::READ);
        if(!fsSettings.isOpened())
        {
            cerr << "ERROR: Wrong path to settings" << endl;
            return -1;
        }
    
        cv::Mat K_l, K_r, P_l, P_r, R_l, R_r, D_l, D_r;
        fsSettings["LEFT.K"] >> K_l;
        fsSettings["RIGHT.K"] >> K_r;
    
        fsSettings["LEFT.P"] >> P_l;
        fsSettings["RIGHT.P"] >> P_r;
    
        fsSettings["LEFT.R"] >> R_l;
        fsSettings["RIGHT.R"] >> R_r;
    
        fsSettings["LEFT.D"] >> D_l;
        fsSettings["RIGHT.D"] >> D_r;
    
        int rows_l = fsSettings["LEFT.height"];
        int cols_l = fsSettings["LEFT.width"];
        int rows_r = fsSettings["RIGHT.height"];
        int cols_r = fsSettings["RIGHT.width"];
    
        if(K_l.empty() || K_r.empty() || P_l.empty() || P_r.empty() || R_l.empty() || R_r.empty() || D_l.empty() || D_r.empty() ||
                rows_l==0 || rows_r==0 || cols_l==0 || cols_r==0)
        {
            cerr << "ERROR: Calibration parameters to rectify stereo are missing!" << endl;
            return -1;
        }
    
        cv::Mat M1l,M2l,M1r,M2r;
        cv::initUndistortRectifyMap(K_l,D_l,R_l,P_l.rowRange(0,3).colRange(0,3),cv::Size(cols_l,rows_l),CV_32F,M1l,M2l);
        cv::initUndistortRectifyMap(K_r,D_r,R_r,P_r.rowRange(0,3).colRange(0,3),cv::Size(cols_r,rows_r),CV_32F,M1r,M2r);
    
    
       // const int nImages = vstrImageLeft.size();
    
        // Create SLAM system. It initializes all system threads and gets ready to process frames.
        ORB_SLAM2::System SLAM(argv[1],argv[2],ORB_SLAM2::System::STEREO,true);
    
        // Vector for tracking time statistics
        vector<float> vTimesTrack;
        cout << endl << "-------" << endl;
        cout << "Start processing camera ..." << endl;
    
      
        cv::Mat imLeft, imRight, imLeftRect, imRightRect;
    
    
    
    //***********************************************************************8
           cv::VideoCapture cap1(0);
        cap1.set(CV_CAP_PROP_FRAME_WIDTH,640);
    
        cap1.set(CV_CAP_PROP_FRAME_HEIGHT,480);
        cap1.set(CV_CAP_PROP_FPS, 30);
        cv::VideoCapture cap2(1);
        cap2.set(CV_CAP_PROP_FRAME_WIDTH,640);
    
        cap2.set(CV_CAP_PROP_FRAME_HEIGHT,480);
        cap2.set(CV_CAP_PROP_FPS, 30);
       // cv::VideoCapture cap1(0);
    
    //Adjust the block above to your own cameras. Stereo cameras may expose two device IDs or a single one; just make sure left and right are not swapped.
    //***********************************************************************8
    	long int nImages = 0;
            int ni=0;
    // Main loop
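    // Note (added comment): ni is never modified, so this loop runs until the process
    // is killed or a camera read fails and returns; the shutdown and timing statistics
    // code below is therefore not reached in normal use.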
        while(ni>-1)
        {
    
    //***********************************************************************8        
             cap2 >> imRight;
             cap1 >> imLeft;
     
    //***********************************************************************8
            if(imLeft.empty())
            {
                cerr << endl << "Check Left Camera!! "<< endl;
                return 1;
            }
    
            if(imRight.empty())
            {
                cerr << endl << "Check Right Camera!! "<< endl;
                return 1;
            }
    
            cv::remap(imLeft,imLeftRect,M1l,M2l,cv::INTER_LINEAR);
            cv::remap(imRight,imRightRect,M1r,M2r,cv::INTER_LINEAR);
    
            time_point<system_clock> now = system_clock::now();
            
            double tframe = now.time_since_epoch().count();
            vTimeStamp.push_back(tframe);
    
    #ifdef COMPILEDWITHC11
            std::chrono::steady_clock::time_point t1 = std::chrono::steady_clock::now();
    #else
            std::chrono::monotonic_clock::time_point t1 = std::chrono::monotonic_clock::now();
    #endif
    
            // Pass the images to the SLAM system
            SLAM.TrackStereo(imLeftRect,imRightRect,tframe);
    
    #ifdef COMPILEDWITHC11
            std::chrono::steady_clock::time_point t2 = std::chrono::steady_clock::now();
    #else
            std::chrono::monotonic_clock::time_point t2 = std::chrono::monotonic_clock::now();
    #endif
    
            double ttrack= std::chrono::duration_cast<std::chrono::duration<double> >(t2 - t1).count();
           
            vTimesTrack.push_back(ttrack);
    
            // Wait to load the next frame
    /*        
    	double T=0;
            if(ni<nImages-1)
                T = vTimeStamp[ni+1]-tframe;
            else if(ni>0)
                T = tframe-vTimeStamp[ni-1];
    
           if(ttrack<T)
                usleep((T-ttrack)*1e6);
    */
    	nImages++;
    	std::cout << "shilitaoyuan_frames: "<<nImages<< std::endl; 
        }
    
        // Stop all threads
        SLAM.Shutdown();
    
        // Tracking time statistics
        sort(vTimesTrack.begin(),vTimesTrack.end());
        float totaltime = 0;
        for(int ni=0; ni<nImages; ni++)
        {
            totaltime+=vTimesTrack[ni];
        }
        cout << "-------" << endl << endl;
        cout << "median tracking time: " << vTimesTrack[nImages/2] << endl;
        cout << "mean tracking time: " << totaltime/nImages << endl;
    
        // Save camera trajectory
        SLAM.SaveTrajectoryTUM("CameraTrajectory.txt");
    
        return 0;
    }
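
    For stereo cameras that expose only a single device ID and deliver the left and right views side by side in one frame (the one-ID case mentioned in the comment above), the two VideoCapture objects can be replaced by one capture plus a split. This is a hedged sketch, not part of the original post; it assumes the left view occupies the left half of the combined image:

    #include <opencv2/core/core.hpp>
    #include <opencv2/highgui/highgui.hpp>

    // Grab one combined frame and split it into left/right halves.
    // Returns false if the camera read fails.
    bool GrabSideBySide(cv::VideoCapture &cap, cv::Mat &imLeft, cv::Mat &imRight)
    {
        cv::Mat imCombined;
        cap >> imCombined;
        if (imCombined.empty())
            return false;

        const int w = imCombined.cols / 2;
        // The ROIs share memory with imCombined, so clone them before reuse.
        imLeft  = imCombined(cv::Rect(0, 0, w, imCombined.rows)).clone();
        imRight = imCombined(cv::Rect(w, 0, w, imCombined.rows)).clone();
        return true;
    }

    // In the main loop above, "cap2 >> imRight; cap1 >> imLeft;" would then become:
    //     if (!GrabSideBySide(cap1, imLeft, imRight)) { cerr << "Check camera!!" << endl; return 1; }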
    
    
    
  • After finishing my graduation project there were still two months until graduating from my bachelor's, and rather than just wandering around bored every day I decided to do something, so I took a ready-made open-source SLAM codebase from the web to see how others turn SLAM into a complete engineering library, and that is why I wrote this "Learning ORBSLAM2 together" ...
  • Learning ORBSLAM2 together (8): ORBSLAM's loop closing

    2018-06-05 22:04:02
    Loop-closure detection and relocalization in ORBSLAM work in similar ways, so the two parts are explained together in one article. First, look at relocalization in the tracking thread. I. Tracking relocalization: relocalization mode is a recovery mechanism that runs when the system is LOST (the tracking thread has failed) ...
  • Learning ORBSLAM2 together (11): ORBSLAM's local mapping

    2018-06-05 22:35:05
    Please credit the original address when reposting: https://blog.csdn.net/qq_30356613/article/category/6897125 . ORBSLAM's local mapping thread actually maintains the global map and manages the keyframes: it culls and fuses the keyframes produced by tracking, and handles the map points in those keyframes ...
  • orbslam2总结.pdf

    2019-12-19 10:24:45
    A summary from reading the ORBSLAM2 source code, containing analysis and formula summaries for its three parts: the front end, the back end, and loop-closure detection.
  • Learning ORBSLAM2 together (6): Feature matching in ORBSLAM

    2018-06-05 21:53:00
    ORBSLAM matches feature points in different ways in different situations, namely: 1. matching by projection; 2. matching via bag-of-words vector nodes; 3. matching for map initialization; 4. matching for monocular triangulation; 5. matching based on a similarity transform; 6. matching by ...
  • ORBSLAM-Atlas.pdf

    2021-02-19 10:25:06
    ORBSLAM-Atlas.pdf
  • ORBSLAM is an optimization-based SLAM method, quite different from the earlier filter-based methods; the project brings in the third-party library g2o, an optimization library based on graph optimization. First, understand what graph optimization is: it expresses an ordinary optimization problem as a graph (variables are represented by nodes, ...
  • ORBSLAM Map points, KeyFrames and their Selection, Covisibility Graph and Essential Graph, Bags of Words Place Recognition
  • The PnP problem: given known points in space ... ORBSLAM uses the EPnP scheme to solve the PnP problem; EPnP obtains the matched points' coordinates in the current camera frame, and the camera pose Tcw is then solved with an ICP-style step. The overall framework keeps iterating with RANSAC, so ORBS...
  • Dr. Gao Xiang modified ORBSLAM2 to add a point-cloud module, https://github.com/gaoxiang12/ORBSLAM2_with_pointcloud_map . During compilation I ran into many ... 1. Download the GitHub source, find orbslam2_modified.zip, and unpack it to get g2o_with_orbslam2 and ORB_SLA...
  • ORBSLAM2: https://github.com/raulmur/ORB_SLAM2 ; LearnVIORB: https://github.com/jingpang/LearnVIORB . 1. Download and build: cd catkin_ws/src/ ; git clone https://github.com/raulmur/ORB_SLAM2 ; cd ORB_SLAM2/ ; ./build...
  • orb slam pose correction

    2020-12-08 20:06:45
    So I was thinking of using orb slam to correct the pose. What is the best practice for that? I was thinking of correcting my odometry message from the wheel encoders every time loop closure is being ...
  • Preface: this article is written for ORBSlam2's monocular mode; all the algorithms involved are monocular-vision algorithms.
  • Visual SLAM summary: key knowledge points in ORB SLAM2

    2019-06-04 21:25:49
    Key knowledge points in ORB SLAM2: 1. What is the overall framework of ORB SLAM2? 2. How does ORB SLAM2 complete initialization? 3. How does ORB SLAM2 perform tracking? 4. How does ORB SLAM2 select keyframes? 5. What ...
  • ORBSLAM3_docker (source code)

    2021-04-16 16:05:01
    ORBSLAM3_docker: I ran into an error while building ORBSLAM3. Before installing ROS I was able to fix the error and somehow sort out the dependencies. From the original code. Prerequisites before installing: (1) Docker (2) NVIDIA Docker. Install ORBSLAM3 using the dockerfile. Run build_...
  • orbslam2_ros (source code)

    2021-03-20 04:58:14
    orbslam2_ros uses a fork of raulmur/ORB_SLAM2 and runs make install in ORB_SLAM2/build/, ORB_SLAM2/ThirdParty/DBoW2/build/, and ORB_SLAM2/ThirdParty/g2o/build. In addition, set the environment variables by adding lines to .bashrc. ...
  • Visual odometry and SLAM video links, plus an ORBSLAM explanation
  • Running ORBSLAM3 on Ubuntu 16.04

    2020-07-24 22:54:16
    Running ORBSLAM3 on Ubuntu 16.04. Code address: https://github.com/UZ-SLAMLab/ORB_SLAM3 . Article link: ORBSLAM3. On top of the earlier monocular, stereo, and RGB-D support, ORBSLAM3 integrates Visual-Inertial and Multi-Map SLAM, and supports both pin-hole and ...
