  • Using an RGBD dataset to draw a point cloud - attached resource
  • The SUN RGB-D dataset


    1 How can the NYUv2 point-class (per-pixel) annotations be converted into bounding-box form, and how can the SUN RGB-D data be read into VOC format?

    The labels are annotated per pixel, and I do not know how to convert them into the VOC format that YOLOv3, Faster R-CNN and similar detectors can be trained on.

     

    I have the SUN RGB-D dataset and would like to convert it to VOC format.
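
    For the VOC conversion, one option is to write each image's 2D boxes into a Pascal VOC style XML annotation. The sketch below only illustrates the XML layout and is not part of the SUN RGB-D toolbox; the file name, image size and class name are placeholder values.

    import xml.etree.ElementTree as ET

    def write_voc_xml(filename, width, height, objects, out_path):
        """objects: list of (class_name, xmin, ymin, xmax, ymax) in pixels."""
        ann = ET.Element('annotation')
        ET.SubElement(ann, 'filename').text = filename
        size = ET.SubElement(ann, 'size')
        ET.SubElement(size, 'width').text = str(width)
        ET.SubElement(size, 'height').text = str(height)
        ET.SubElement(size, 'depth').text = '3'
        for name, xmin, ymin, xmax, ymax in objects:
            obj = ET.SubElement(ann, 'object')
            ET.SubElement(obj, 'name').text = name
            ET.SubElement(obj, 'difficult').text = '0'
            box = ET.SubElement(obj, 'bndbox')
            ET.SubElement(box, 'xmin').text = str(xmin)
            ET.SubElement(box, 'ymin').text = str(ymin)
            ET.SubElement(box, 'xmax').text = str(xmax)
            ET.SubElement(box, 'ymax').text = str(ymax)
        ET.ElementTree(ann).write(out_path)

    # Placeholder example: one chair box in a 730x530 image
    write_voc_xml('000001.jpg', 730, 530, [('chair', 100, 150, 300, 400)], '000001.xml')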

     

    SUN RGB-D: A RGB-D Scene Understanding Benchmark Suite

    Abstract

    Although RGB-D sensors have enabled major breakthroughs for several vision tasks, such as 3D reconstruction, we have not achieved a similar performance jump for high-level scene understanding. Perhaps one of the main reasons for this is the lack of a benchmark of reasonable size with 3D annotations for training and 3D metrics for evaluation. In this paper, we present an RGB-D benchmark suite with the goal of advancing the state-of-the-art in all major scene understanding tasks. Our dataset is captured by four different sensors and contains 10,000 RGB-D images, at a similar scale as PASCAL VOC. The whole dataset is densely annotated and includes 146,617 2D polygons and 58,657 3D bounding boxes with accurate object orientations, as well as a 3D room layout and scene category. This dataset enables us to train data-hungry algorithms for scene-understanding tasks, evaluate them using direct and meaningful 3D metrics, avoid overfitting to a small testing set, and study cross-sensor bias.

    News

    SUN RGB-D 3D Object Detection Challenge (2017): the data and development toolkit are now available here

     

    From: http://rgbd.cs.princeton.edu/?from=singlemessage&isappinstalled=0

    I am doing RGB-D object detection and ran into problems reading the dataset. How can the NYUv2 point-class annotations be converted into bounding-box form...
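
    One way to get 2D boxes from the per-pixel labels is to take, for every labeled instance, the minimum and maximum row/column of its pixels. Below is a minimal numpy sketch, assuming a hypothetical instance map instance.png in which each object has a distinct integer id and 0 is background (an illustration, not the official NYUv2 toolbox):

    import numpy as np
    from PIL import Image

    # Hypothetical per-pixel instance map: each object instance is a distinct
    # integer id, 0 is background.
    inst = np.array(Image.open('instance.png'))

    boxes = {}
    for obj_id in np.unique(inst):
        if obj_id == 0:        # skip background
            continue
        ys, xs = np.nonzero(inst == obj_id)
        # (xmin, ymin, xmax, ymax) in pixel coordinates
        boxes[int(obj_id)] = (int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max()))

    print(boxes)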

     

    #1 Generate trainval.txt and test.txt
    
    % Load the official SUN RGB-D split and write one local image path per line
    % into trainval.txt.
    load allsplit.mat

    fid = fopen('trainval.txt','wt');
    [row,col] = size(alltrain);
    c = alltrain;
    % Replace the server-side prefix stored in allsplit.mat with the local path.
    str1 = '/n/fs/sun3d/data/';
    str2 = '/home/zhaohuaqing/Downloads/';
    f = '/image/';
    f1 = 'image/';
    % Markers that identify which sensor / sub-dataset a path belongs to.
    kv2ok = 'rgbf';
    nyu = 'NYU';
    b3dodata = 'b3dodata';
    sun3ddata = 'sun3ddata';
    realsense = 'realsense';
    xtion_align_data = 'xtion_align_data';
    align_kv2 = 'align_kv2';
    for i = 1:row
        for j = 1:col
            a = c{i,j};
            % Append 'image/' with or without a leading slash, depending on
            % whether the stored path already ends in '/'.
            if a(end) == '/'
                aa = strcat(a, f1);
            else
                aa = strcat(a, f);
            end
            d = strrep(aa, str1, str2);
            % Locate the sub-dataset markers (strfind replaces the deprecated findstr).
            w  = strfind(d, kv2ok);
            w1 = strfind(d, nyu);
            w2 = strfind(d, b3dodata);
            w3 = strfind(d, sun3ddata);
            w4 = strfind(d, xtion_align_data);
            w5 = strfind(d, realsense);
            w6 = strfind(d, align_kv2);
            % For some sub-datasets the image file name can be recovered from the
            % path itself; for others the image directory has to be listed.
            if w
                d = strcat(d, '0');
                v = d(w+4:w+9);
                d = strcat(d, v);
            end
            if w1
                v1 = d(w1(2):w1(2)+6);
                d = strcat(d, v1);
            end
            if w2
                v2 = d(w2+9:w2+16);
                d = strcat(d, v2);
            end
            if w3
                v2 = d(end-26:end-7);
                d = strcat(d, v2);
            end
            if w4
                % Take the base name of the last *.jpg/*.png file in the directory.
                list = dir(d);
                k1 = length(list);
                for n = 1:k1
                    if list(n).name(end) == 'g'
                        na = list(n).name(1:end-4);
                    end
                end
                d = strcat(d, na);
            end
            if w5
                list = dir(d);
                k1 = length(list);
                for n = 1:k1
                    if list(n).name(end) == 'g'
                        na1 = list(n).name(1:end-4);
                    end
                end
                d = strcat(d, na1);
            end
            if w6
                list = dir(d);
                k1 = length(list);
                for n = 1:k1
                    if list(n).name(end) == 'g'
                        na2 = list(n).name(1:end-4);
                    end
                end
                d = strcat(d, na2);
            end

            fprintf(fid,'%s\n',d);
        end
    end

    fclose(fid);
    
    #2 Generate val.txt and train.txt
    
    % Simple version: write only the validation-split image directories to val.txt,
    % without appending the image file names.
    load allsplit.mat
    c = trainvalsplit;
    d = c.val;
    fid = fopen('val.txt','wt');
    [row,col] = size(d);
    str1 = '/n/fs/sun3d/data/';
    str2 = '/home/zhaohuaqing/Downloads/';
    f = 'image/';
    for i = 1:row
        for j = 1:col
            a = d{i,j};
            aa = strcat(a, f);
            e = strrep(aa, str1, str2);
            fprintf(fid,'%s\n',e);
        end
    end
    fclose(fid);
    
    
    % Full version: same processing as the trainval script above, applied to the
    % validation split (trainvalsplit.val) and written to val.txt.
    load allsplit.mat
    o1 = trainvalsplit;
    o2 = o1.val;
    fid = fopen('val.txt','wt');
    [row,col] = size(o2);
    str1 = '/n/fs/sun3d/data/';
    str2 = '/home/zhaohuaqing/Downloads/';
    f = '/image/';
    f1 = 'image/';
    kv2ok = 'rgbf';
    nyu = 'NYU';
    b3dodata = 'b3dodata';
    sun3ddata = 'sun3ddata';
    realsense = 'realsense';
    xtion_align_data = 'xtion_align_data';
    align_kv2 = 'align_kv2';
    for i = 1:row
        for j = 1:col
            a = o2{i,j};
            if a(end) == '/'
                aa = strcat(a, f1);
            else
                aa = strcat(a, f);
            end
            d = strrep(aa, str1, str2);
            w  = strfind(d, kv2ok);
            w1 = strfind(d, nyu);
            w2 = strfind(d, b3dodata);
            w3 = strfind(d, sun3ddata);
            w4 = strfind(d, xtion_align_data);
            w5 = strfind(d, realsense);
            w6 = strfind(d, align_kv2);
            if w
                d = strcat(d, '0');
                v = d(w+4:w+9);
                d = strcat(d, v);
            end
            if w1
                v1 = d(w1(2):w1(2)+6);
                d = strcat(d, v1);
            end
            if w2
                v2 = d(w2+9:w2+16);
                d = strcat(d, v2);
            end
            if w3
                v2 = d(end-26:end-7);
                d = strcat(d, v2);
            end
            if w4
                list = dir(d);
                k1 = length(list);
                for n = 1:k1
                    if list(n).name(end) == 'g'
                        na = list(n).name(1:end-4);
                    end
                end
                d = strcat(d, na);
            end
            if w5
                list = dir(d);
                k1 = length(list);
                for n = 1:k1
                    if list(n).name(end) == 'g'
                        na1 = list(n).name(1:end-4);
                    end
                end
                d = strcat(d, na1);
            end
            if w6
                list = dir(d);
                k1 = length(list);
                for n = 1:k1
                    if list(n).name(end) == 'g'
                        na2 = list(n).name(1:end-4);
                    end
                end
                d = strcat(d, na2);
            end

            fprintf(fid,'%s\n',d);
        end
    end

    fclose(fid);

    From: https://blog.csdn.net/weixin_38378417/article/details/79449894
     

     

  • The associate problem of the TUM RGB-D dataset

    The associate problem of the TUM RGB-D dataset

    Tags (space-separated): Xu SLAM


    Reposted
    TUM dataset download link
    A TUM RGB-D dataset is structured as follows:

    However, the depth sensor and the RGB camera do not capture their data at the same time stamps, so the two streams need to be aligned. Under ROS the depth image can be registered directly to obtain matched pairs; for the dataset files, a Python script is needed to do the association.

    Create a Python script associate.py. It can be downloaded directly from the TUM website, but since the network connection is sometimes poor, the script is pasted here so it can easily be reused later.

    #!/usr/bin/python
    # Software License Agreement (BSD License)
    #
    # Copyright (c) 2013, Juergen Sturm, TUM
    # All rights reserved.
    #
    # Redistribution and use in source and binary forms, with or without
    # modification, are permitted provided that the following conditions
    # are met:
    #
    #  * Redistributions of source code must retain the above copyright
    #    notice, this list of conditions and the following disclaimer.
    #  * Redistributions in binary form must reproduce the above
    #    copyright notice, this list of conditions and the following
    #    disclaimer in the documentation and/or other materials provided
    #    with the distribution.
    #  * Neither the name of TUM nor the names of its
    #    contributors may be used to endorse or promote products derived
    #    from this software without specific prior written permission.
    #
    # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
    # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
    # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
    # FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
    # COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
    # INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
    # BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
    # LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
    # CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
    # LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
    # ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
    # POSSIBILITY OF SUCH DAMAGE.
    #
    # Requirements: 
    # sudo apt-get install python-argparse
    
    """
    The Kinect provides the color and depth images in an un-synchronized way. This means that the set of time stamps from the color images do not intersect with those of the depth images. Therefore, we need some way of associating color images to depth images.
    
    For this purpose, you can use the ''associate.py'' script. It reads the time stamps from the rgb.txt file and the depth.txt file, and joins them by finding the best matches.
    """
    
    import argparse
    import sys
    import os
    import numpy
    
    
    def read_file_list(filename):
        """
        Reads a trajectory from a text file. 
    
        File format:
        The file format is "stamp d1 d2 d3 ...", where stamp denotes the time stamp (to be matched)
        and "d1 d2 d3.." is arbitary data (e.g., a 3D position and 3D orientation) associated to this timestamp. 
    
        Input:
        filename -- File name
    
        Output:
        dict -- dictionary of (stamp,data) tuples
    
        """
        file = open(filename)
        data = file.read()
        lines = data.replace(","," ").replace("\t"," ").split("\n") 
        list = [[v.strip() for v in line.split(" ") if v.strip()!=""] for line in lines if len(line)>0 and line[0]!="#"]
        list = [(float(l[0]),l[1:]) for l in list if len(l)>1]
        return dict(list)
    
    def associate(first_list, second_list,offset,max_difference):
        """
        Associate two dictionaries of (stamp,data). As the time stamps never match exactly, we aim 
        to find the closest match for every input tuple.
    
        Input:
        first_list -- first dictionary of (stamp,data) tuples
        second_list -- second dictionary of (stamp,data) tuples
        offset -- time offset between both dictionaries (e.g., to model the delay between the sensors)
        max_difference -- search radius for candidate generation
    
        Output:
        matches -- list of matched tuples ((stamp1,data1),(stamp2,data2))
    
        """
        first_keys = first_list.keys()
        second_keys = second_list.keys()
        potential_matches = [(abs(a - (b + offset)), a, b) 
                             for a in first_keys 
                             for b in second_keys 
                             if abs(a - (b + offset)) < max_difference]
        potential_matches.sort()
        matches = []
        for diff, a, b in potential_matches:
            if a in first_keys and b in second_keys:
                first_keys.remove(a)
                second_keys.remove(b)
                matches.append((a, b))
    
        matches.sort()
        return matches
    
    if __name__ == '__main__':
    
        # parse command line
        parser = argparse.ArgumentParser(description='''
        This script takes two data files with timestamps and associates them   
        ''')
        parser.add_argument('first_file', help='first text file (format: timestamp data)')
        parser.add_argument('second_file', help='second text file (format: timestamp data)')
        parser.add_argument('--first_only', help='only output associated lines from first file', action='store_true')
        parser.add_argument('--offset', help='time offset added to the timestamps of the second file (default: 0.0)',default=0.0)
        parser.add_argument('--max_difference', help='maximally allowed time difference for matching entries (default: 0.02)',default=0.02)
        args = parser.parse_args()
    
        first_list = read_file_list(args.first_file)
        second_list = read_file_list(args.second_file)
    
        matches = associate(first_list, second_list,float(args.offset),float(args.max_difference))    
    
        if args.first_only:
            for a,b in matches:
                print("%f %s"%(a," ".join(first_list[a])))
        else:
            for a,b in matches:
                print("%f %s %f %s"%(a," ".join(first_list[a]),b-float(args.offset)," ".join(second_list[b])))
    

    Usage:

    python  associate.py ./rgbd_dataset_freiburg1_room/rgb.txt ./rgbd_dataset_freiburg1_room/depth.txt > associate.txt
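
    Each line of associate.txt then pairs an RGB entry with a depth entry (a timestamp and a relative file path for each). A minimal sketch for iterating over the matched pairs with OpenCV; the dataset directory below is an assumption for illustration:

    import os
    import cv2

    dataset_dir = './rgbd_dataset_freiburg1_room'   # assumed dataset location

    with open('associate.txt') as f:
        for line in f:
            parts = line.strip().split()
            if len(parts) < 4:
                continue
            rgb_stamp, rgb_rel, depth_stamp, depth_rel = parts[:4]
            rgb = cv2.imread(os.path.join(dataset_dir, rgb_rel), cv2.IMREAD_COLOR)
            depth = cv2.imread(os.path.join(dataset_dir, depth_rel), cv2.IMREAD_UNCHANGED)
            # TUM depth PNGs are 16-bit with a scale factor of 5000 per metre
            depth_m = depth.astype('float32') / 5000.0
            print(rgb_stamp, rgb.shape, depth_stamp, float(depth_m.max()))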
    
  • On the associate problem of the TUM RGB-D dataset


    Anyone who has downloaded the TUM RGB-D dataset before will know that each sequence has the directory structure shown below.

    [figure: directory structure of a TUM RGB-D sequence]

    However, the depth sensor and the RGB camera do not capture their data at the same time stamps, so the two streams need to be aligned. Under ROS the depth image can be registered directly to obtain matched pairs; for the dataset files, a Python script is needed to do the association.

    Create a Python script associate.py. It can be downloaded directly from the TUM website; the script is identical to the one pasted in the post above.

    
    

    Usage

    python  associate.py ./rgbd_dataset_freiburg1_room/rgb.txt ./rgbd_dataset_freiburg1_room/depth.txt > fr1_room.txt
    

    Just a quick note... this is much simpler than aligning IMU timestamps ^_^

  • 1. The TUM RGB-D dataset. Dataset link: https://vision.in.tum.de/data/datasets/rgbd-dataset/download Introduction on Zhihu: https://zhuanlan.zhihu.com/p/47124028 groundtruth format: timestamp, x, y, z, qx, qy, qz, qw ...

    1. The TUM RGB-D dataset

    2. Aligning the RGB images with the depth images

    • Tool: the associate.py script from the official website
    • Link: see the official website
    • associate.py code: identical to the script listed in the first post above
    • Alignment method:
      (the order in which rgb.txt and depth.txt appear in the command is the column order in the output associate.txt)

      python associate.py rgb.txt depth.txt > associate.txt
      
    • Note: the associate.py downloaded from the official website may fail the first time it is run (under Python 3), e.g. with AttributeError: 'dict_keys' object has no attribute 'remove'; the code needs a small modification.
      Specifically, lines 86 and 87 of associate.py,

      first_keys = first_list.keys()
      second_keys = second_list.keys()
      must be changed to

      first_keys = list(first_list.keys())
      second_keys = list(second_list.keys())
      Reference link: AttributeError: 'dict_keys' object has no attribute 'remove'

    3. Aligning the RGB images with the groundtruth

    • Alignment:

      python associate.py rgb.txt groundtruth.txt > associate1.txt
      

    The resulting file then has, on each line, an RGB image entry together with its corresponding groundtruth value.

    • Extract the corresponding groundtruth entries from associate.txt into a separate gt.txt
    import codecs
    
    # Open the association file produced by associate.py (rgb.txt vs groundtruth.txt),
    # reading it as UTF-8.
    f = codecs.open('E:/test/rgbd_dataset_freiburg2_360_kidnap/associate.txt', mode='r', encoding='utf-8')
    
    def extract_groundtruth():
        # Each line looks like: rgb_timestamp rgb_path gt_timestamp tx ty tz qx qy qz qw.
        # The slice a[44:] keeps everything after the rgb entry, i.e. the groundtruth
        # part (the offset 44 matches this particular sequence's path length).
        with open('gt.txt', mode='a') as out:
            for line in f:
                a = line.strip()
                b = a[44:]
                print('groundtruth:', b)
                out.write(b + '\n')
    
    extract_groundtruth()
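
    The fixed slice a[44:] depends on the exact length of the rgb path, so it breaks when the directory name changes. A more defensive variant (assuming each line has the form rgb_timestamp rgb_path gt_timestamp tx ty tz qx qy qz qw) splits on whitespace instead:

    # Hypothetical alternative: keep the groundtruth fields by position
    # rather than by character offset.
    with open('associate.txt') as src, open('gt.txt', 'w') as dst:
        for line in src:
            parts = line.split()
            # expected: rgb_stamp, rgb_path, gt_stamp, tx, ty, tz, qx, qy, qz, qw
            if len(parts) >= 10:
                dst.write(' '.join(parts[2:]) + '\n')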
  • An overview of mainstream RGBD datasets, 2019.12.15

    An overview of RGBD datasets, 2019.12.15: NYU Depth Dataset V2 (3D segmentation task) ...
  • Baidu Cloud download link for the TUM (RGBD) dataset. Downloading from the official site is far too slow, so I downloaded a dataset and uploaded it to Baidu Cloud for everyone, earning a few points to download other documents.
  • The TUM dataset from the Technical University of Munich. Because it is hard to download from the original site, I downloaded it and share it on my blog, hoping it helps.
  • Using an RGBD dataset to draw a point cloud

    This article follows the previous one, drawing a point cloud from the depth images of Chapter 8 of the 14 Lectures on Visual SLAM ... Dataset download: https://vision.in.tum.de/data/datasets/rgbd-dataset/download#freiburg1_xyz Program and other downloads: link: https://pan...
  • The naive way: download directly at a few KB/s and wait more than a hundred days. The smart way: the Edge browser plus the Thunder (Xunlei) browser extension downloads it overnight.
  • Download: Project page Kinect data from the real world RGBD Scenes dataset Introduced: ICRA 2011 Device: Kinect v1 Description: Real indoor scenes, featuring objects from the RGBD object dataset '...
  • Address: http://rgbd.cs.princeton.edu/ Introduction: although RGB-D sensors have enabled major breakthroughs in several vision tasks, such as 3D reconstruction, we have not yet... Our dataset is captured by four different sensors and contains 10,000 RGB-D images, at a scale similar to PASCAL VOC. The whole...
  • Datasets 4.1 NTU RGBD 4.1.1 How to download 4.1.2 Benchmark 5. Related papers 5.1 Skeleton-based Action Recognition with Convolution Neural Networks (2017.8, Hikvision) 5.1.1 Highlights of the paper 5.2 Co-ocurrence Feature Learning from ...
  • With everything prepared, running any TUM RGBD sequence showed that the window greys out and freezes when the video sequence ends. Debugging revealed that a function in the Pangolin library hangs, apparently a bug in a newer version of that library; simply commenting out that line of code is enough. Specifically...
  • Running ORB_SLAM2 on TUM: the monocular and RGBD datasets

    April 18, 2019: yesterday I ran the ORB-SLAM2 monocular example, and this morning the RGB-D one, not under ROS (that still needs more exploration). Environment and data preparation: Ubuntu 16.04 ... 2. TUM dataset preparation + monocular run. Download the TUM dataset from this site...
  • Generate a txt file with groundtruth labels: load SUNRGBDMeta2DBB_v2.mat Q=SUNRGBDMeta2DBB length1=length(Q) fid=fopen('output2.txt','wt') str1='/n/fs/sun3d/data/' str2='/home/zhaohuaqing/Downloads/' for n1=1:length1 ...
  • 1. Kinect2 + Ubuntu. First, clone the open-source code under catkin_ws/src/iai_kinect2/: git ... ... Note that the folder dataset_make was moved to the top level and its original top-level folder was deleted, as shown: ... Modify the source code to fit your needs. This is... building your own RGB-D dataset

Keywords: rgbd dataset