  • Raw data

    2020-12-27 15:00:46
    For my master studies I have to work with raw minION data. However, in the fast5 files I can not find the raw data due to the fact that this data is already basecalled. Since you work with raw data...
  • rawdata-converter-bom: Bill of materials for the Rawdata Converter modules.
  • Would you consider an option to store raw data in a 'rawdata' subfolder instead of the root folder? (Source: open-source project bids-standard/bids-specification...)
  • Rawdata Converter application building blocks. See the overview documentation.
  • Raw data required

    2021-01-07 03:17:48
    My client wants raw data, so how can I get the raw data in my mysql database? (Source: open-source project traccar/traccar)
  • Rawdata Converter application - Kostra: converts Kostra data. Common tasks can be run with make: build (build all and create docker image), build-mvn (build project and install to your local maven repo), build-docker...
  • Getting raw data

    2020-12-08 20:22:26
    It would be nice to have an option to be able to get not only the visible data but the underlying raw data as well. Like I have a numeric field: the visible content is 0.1118, however the raw, ...
  • Reading rawData images

    2017-11-08 23:13:17
    Reads rawData images, including conversion of raw data images to BMP. Mainly uses the OpenCV library.
  • Send Raw Data

    2020-12-09 03:48:52
    ve felt was missing from the PICTOR experience was the ability to have the raw data. It's nice to quickly see the results of an observation, but it's hard to do precise analysis of the data. ...
  • Downloading raw data

    2020-12-27 05:42:01
    Is there a way to download raw ALMA data alongside the products? If I use the download_and_extract_files method with include_asdm = True and delete = False, there is no raw data being ...
  • Rawdata Converter project: provides a convenient development environment for working with multiple independent Rawdata Converter repositories in a shared context. Also provides a local runtime environment that ties the different services together into a complete working example sandbox. Getting started: retrieve all relevant ...
  • Getting Raw data

    2020-12-07 08:03:20
    I am trying to get raw data (phase images) from kinectv2 using libfreenect2. I tried to create a DumpPacketPipeline and attached it to the device. I have also set Ir and Depth frame listeners. When I ...
  • MOSEI raw data

    2021-01-01 05:02:03
    I have downloaded the raw data of CMU-MOSEI, but there is no audio segment data and label for each segment. What is the reason? (Source: open-source project A2Zadeh/CMU-MultimodalSDK)
  • Raw data editor

    2020-12-08 19:00:22
    How about adding a Raw Data Editor (later abbreviated as RDE)? The RDE will have a layout similar to an ordinary hex editor, with the ability to add/remove bytes at the very end (+/- buttons). Why the ...
  • Output raw data

    2020-12-09 04:45:10
    Could there be an option to output the raw data used to generate these plots? This way, users can create/combine multiple plots using other statistical tools and explore the data a little more. I would ...
  • Opening ".adc" raw audio data files with Python

    Question:

    I have these files with the extension ".adc". They are simply raw data files. I can open them with Audacity using File->Import->Raw data with encoding "Signed 16 bit" and sample rate "16000 Hz".

    I would like to do the same with python. I think that audioop module is what I need, but I can't seem to find examples on how to use it for something that simple.

    The main goal is to open the file and play a certain location in the file, for example from second 10 to second 20. Is there something out there for my task?

    Thanx in advance.

    Answer 1:

    For opening the file, you just need open(). For finding a location, you don't need audioop: you just need to convert seconds to bytes and get the required bytes of the file. For instance, if your file is 16 kHz 16-bit mono, each second is 32,000 bytes of data. So the 10th second is 320 kB into the file. Just seek to the appropriate place in the file and then read the appropriate number of bytes.
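    That conversion can be sketched in Python; a minimal illustration, assuming a 16 kHz, 16-bit mono file (the helper name and defaults are hypothetical):

```python
def read_seconds(path, start_sec, end_sec,
                 rate=16000, sample_width=2, channels=1):
    # bytes per second of audio = sample rate * bytes per sample * channels
    bytes_per_sec = rate * sample_width * channels
    with open(path, 'rb') as f:
        f.seek(int(start_sec * bytes_per_sec))                     # jump to the start
        return f.read(int((end_sec - start_sec) * bytes_per_sec))  # grab the slice

# for a 16 kHz 16-bit mono file this is 32,000 bytes per second,
# so seconds 10..20 are the 320,000 bytes starting at offset 320,000
```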

    And audioop can't help you with the hardest part: namely, playing the audio. The correct way to do this very much depends on your OS.

    EDIT: Sorry, I just noticed your username is "thelinuxer". Consider pyAO for playing audio from Python on Linux. You will probably need to change the sample format to play the audio---audioop will help you with this (see ratecv, tomono/tostereo, lin2lin, and bias)

    Answer 2:

    Thanks a lot! I was able to do the following:

        def play_data(filename, first_sec, second_sec):
            import ao
            from ao import AudioDevice
            dev = AudioDevice(2, bits=16, rate=16000, channels=1)
            f = open(filename, 'rb')
            data_len = int((second_sec - first_sec) * 32000)
            f.seek(int(32000 * first_sec))
            data = f.read(data_len)
            dev.play(data)
            f.close()

        play_data('AR001_3.adc', 2.5, 5)

    Answer 3:

    You can use PySoundFile to open the file as a NumPy array and play it with python-sounddevice.

        import soundfile as sf
        import sounddevice as sd

        sig, fs = sf.read('myfile.adc', channels=2, samplerate=16000,
                          format='RAW', subtype='PCM_16')
        sd.play(sig, fs)

    You can use indexing on the NumPy array to select a certain part of the audio data.

  • Retrieve image raw data

    2020-11-24 13:58:03
    m currently using the MJPG format, but I also need the raw data in order to compute the luminance; is there a method or something to get the raw data from the image? (Source: open-source project ...)
  • Different raw data size

    2020-12-25 22:55:25
    But when I downloaded rawdata using fastq-dump, it is 718Mb. [screenshot] It is...
  • Raw data as JSON

    2020-11-21 15:48:52
    It would be nice if the raw data could be formatted as a correct and complete JSON document instead of a raw partial Javascript array definition. This would make the data easier to parse from ...
  • Synthetic Raw Data Generator

    2020-11-29 22:36:14
    From my understanding I believe there might be some legal issues if the mlfinlab team were to share the actual raw data. In that case, would the team be keen on using synthetic raw data? To ...
  • KITTI RAW DATA

    2020-10-13 10:36:13

    ###########################################################################

    THE KITTI VISION BENCHMARK SUITE: RAW DATA RECORDINGS

    Andreas Geiger Philip Lenz Raquel Urtasun

    Karlsruhe Institute of Technology

    Toyota Technological Institute at Chicago

    www.cvlibs.net

    ###########################################################################

    This file gives more information about the KITTI raw data recordings.

    General information about streams and timestamps

    Each sensor stream is stored in a single folder. The main folder contains
    meta information and a timestamp file, listing the timestamp of each frame
    of the sequence to nanosecond precision. Numbers in one data stream
    correspond to numbers in the other data streams and to line numbers in the
    timestamp file (0-based index), as all data has been synchronized. All
    cameras have been triggered directly by the Velodyne laser scanner, while
    from the GPS/IMU system (recording at 100 Hz) we have taken the information
    closest to the respective reference frame. For all sequences,
    ‘image_00’ has been used as the synchronization reference stream.

    Rectified color + grayscale stereo sequences

    Our vehicle has been equipped with four cameras: 1 color camera stereo pair
    and 1 grayscale camera stereo pair. The color and grayscale cameras are
    mounted close to each other (~6 cm), the baseline of both stereo rigs is
    approximately 54 cm. We have chosen this setup such that for the left and
    right camera we can provide both color and grayscale information. While the
    color cameras (obviously) come with color information, the grayscale camera
    images have higher contrast and a little bit less noise.

    All cameras are synchronized at about 10 Hz with respect to the Velodyne
    laser scanner. The trigger is mounted such that camera images coincide
    roughly with the Velodyne lasers facing forward (in driving direction).

    All camera images are provided as lossless compressed and rectified png
    sequences. The native image resolution is 1382x512 pixels and a little bit
    less after rectification, for details see the calibration section below.
    The opening angle of the cameras (left-right) is approximately 90 degrees.

    The camera images are stored in the following directories:

    • ‘image_00’: left rectified grayscale image sequence
    • ‘image_01’: right rectified grayscale image sequence
    • ‘image_02’: left rectified color image sequence
    • ‘image_03’: right rectified color image sequence

    Velodyne 3D laser scan data

    The velodyne point clouds are stored in the folder ‘velodyne_points’. To
    save space, all scans have been stored as Nx4 float matrix into a binary
    file using the following code:

    stream = fopen(dst_file.c_str(), "wb");
    fwrite(data, sizeof(float), 4*num, stream);
    fclose(stream);

    Here, data contains 4*num values, where the first 3 values correspond to
    x,y and z, and the last value is the reflectance information. All scans
    are stored row-aligned, meaning that the first 4 values correspond to the
    first measurement. Since each scan might potentially have a different
    number of points, this must be determined from the file size when reading
    the file, where 1e6 is a good enough upper bound on the number of values:

    // allocate 4 MB buffer (only ~130*4*4 KB are needed)
    int32_t num = 1000000;
    float *data = (float*)malloc(num*sizeof(float));

    // pointers
    float *px = data+0;
    float *py = data+1;
    float *pz = data+2;
    float *pr = data+3;

    // load point cloud
    FILE *stream;
    stream = fopen(currFilenameBinary.c_str(), "rb");
    num = fread(data, sizeof(float), num, stream)/4;
    for (int32_t i=0; i<num; i++) {
      point_cloud.points.push_back(tPoint(*px,*py,*pz,*pr));
      px+=4; py+=4; pz+=4; pr+=4;
    }
    fclose(stream);

    x, y and z are stored in metric (m) Velodyne coordinates.
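    For reference, the same layout can also be read outside of C++; a minimal Python sketch using only the standard library (the file path is up to the caller):

```python
import struct

def read_velodyne_bin(path):
    # the file is a flat sequence of little-endian float32 values,
    # 4 per point: x, y, z, reflectance
    with open(path, 'rb') as f:
        raw = f.read()
    n = len(raw) // 4                      # number of float values, from the file size
    vals = struct.unpack('<%df' % n, raw)
    # group into (x, y, z, r) tuples, one per point
    return [vals[i:i+4] for i in range(0, n, 4)]
```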

    IMPORTANT NOTE: Note that the velodyne scanner takes depth measurements
    continuously while rotating around its vertical axis (in contrast to the
    cameras, which are triggered at a certain point in time). This means that
    when computing point clouds you have to ‘untwist’ the points linearly with
    respect to the velodyne scanner location at the beginning and the end of
    the 360° sweep. The timestamps for the beginning and the end of the sweeps
    can be found in the timestamps file. The velodyne rotates in
    counter-clockwise direction.

    Of course this ‘untwisting’ only works for non-dynamic environments.

    The relationship between the camera triggers and the velodyne is the following:
    We trigger the cameras when the velodyne is looking exactly forward (into the
    direction of the cameras).

    GPS/IMU 3D localization unit

    The GPS/IMU information is given in a single small text file which is
    written for each synchronized frame. Each text file contains 30 values
    which are:

    • lat: latitude of the oxts-unit (deg)
    • lon: longitude of the oxts-unit (deg)
    • alt: altitude of the oxts-unit (m)
    • roll: roll angle (rad), 0 = level, positive = left side up (-pi…pi)
    • pitch: pitch angle (rad), 0 = level, positive = front down (-pi/2…pi/2)
    • yaw: heading (rad), 0 = east, positive = counter clockwise (-pi…pi)
    • vn: velocity towards north (m/s)
    • ve: velocity towards east (m/s)
    • vf: forward velocity, i.e. parallel to earth-surface (m/s)
    • vl: leftward velocity, i.e. parallel to earth-surface (m/s)
    • vu: upward velocity, i.e. perpendicular to earth-surface (m/s)
    • ax: acceleration in x, i.e. in direction of vehicle front (m/s^2)
    • ay: acceleration in y, i.e. in direction of vehicle left (m/s^2)
    • az: acceleration in z, i.e. in direction of vehicle top (m/s^2)
    • af: forward acceleration (m/s^2)
    • al: leftward acceleration (m/s^2)
    • au: upward acceleration (m/s^2)
    • wx: angular rate around x (rad/s)
    • wy: angular rate around y (rad/s)
    • wz: angular rate around z (rad/s)
    • wf: angular rate around forward axis (rad/s)
    • wl: angular rate around leftward axis (rad/s)
    • wu: angular rate around upward axis (rad/s)
    • posacc: position accuracy (north/east in m)
    • velacc: velocity accuracy (north/east in m/s)
    • navstat: navigation status
    • numsats: number of satellites tracked by primary GPS receiver
    • posmode: position mode of primary GPS receiver
    • velmode: velocity mode of primary GPS receiver
    • orimode: orientation mode of primary GPS receiver

    To read and interpret the text files properly, an example is given in
    the matlab folder: first, use oxts = loadOxtsliteData(‘2011_xx_xx_drive_xxxx’)
    to read in the GPS/IMU data. Next, use pose = convertOxtsToPose(oxts) to
    transform the oxts data into local euclidean poses, specified by 4x4 rigid
    transformation matrices. For more details see the comments in those files.

    Coordinate Systems

    The coordinate systems are defined the following way, where directions
    are informally given from the drivers view, when looking forward onto
    the road:

    • Camera: x: right, y: down, z: forward
    • Velodyne: x: forward, y: left, z: up
    • GPS/IMU: x: forward, y: left, z: up

    All coordinate systems are right-handed.

    Sensor Calibration

    The sensor calibration zip archive contains files, storing matrices in
    row-aligned order, meaning that the first values correspond to the first
    row:

    calib_cam_to_cam.txt: Camera-to-camera calibration

    • S_xx: 1x2 size of image xx before rectification
    • K_xx: 3x3 calibration matrix of camera xx before rectification
    • D_xx: 1x5 distortion vector of camera xx before rectification
    • R_xx: 3x3 rotation matrix of camera xx (extrinsic)
    • T_xx: 3x1 translation vector of camera xx (extrinsic)
    • S_rect_xx: 1x2 size of image xx after rectification
    • R_rect_xx: 3x3 rectifying rotation to make image planes co-planar
    • P_rect_xx: 3x4 projection matrix after rectification

    Note: When using this dataset you will most likely need to access only
    P_rect_xx, as this matrix is valid for the rectified image sequences.

    calib_velo_to_cam.txt: Velodyne-to-camera registration

    • R: 3x3 rotation matrix
    • T: 3x1 translation vector
    • delta_f: deprecated
    • delta_c: deprecated

    R|T takes a point in Velodyne coordinates and transforms it into the
    coordinate system of the left video camera. Likewise it serves as a
    representation of the Velodyne coordinate frame in camera coordinates.

    calib_imu_to_velo.txt: GPS/IMU-to-Velodyne registration

    • R: 3x3 rotation matrix
    • T: 3x1 translation vector

    R|T takes a point in GPS/IMU coordinates and transforms it into the
    coordinate system of the Velodyne scanner. Likewise it serves as a
    representation of the GPS/IMU coordinate frame in Velodyne coordinates.

    example transformations

    As the transformations sometimes confuse people, here we give a short
    example how points in the velodyne coordinate system can be transformed
    into the camera left coordinate system.

    In order to transform a homogeneous point X = [x y z 1]’ from the velodyne
    coordinate system to a homogeneous point Y = [u v 1]’ on image plane of
    camera xx, the following transformation has to be applied:

    Y = P_rect_xx * R_rect_00 * (R|T)_velo_to_cam * X

    To transform a point X from GPS/IMU coordinates to the image plane:

    Y = P_rect_xx * R_rect_00 * (R|T)_velo_to_cam * (R|T)_imu_to_velo * X

    The matrices are:

    • P_rect_xx (3x4): rectified cam 0 coordinates -> image plane
    • R_rect_00 (4x4): cam 0 coordinates -> rectified cam 0 coord.
    • (R|T)_velo_to_cam (4x4): velodyne coordinates -> cam 0 coordinates
    • (R|T)_imu_to_velo (4x4): imu coordinates -> velodyne coordinates

    Note that the (4x4) matrices above are padded with zeros and:
    R_rect_00(4,4) = (R|T)_velo_to_cam(4,4) = (R|T)_imu_to_velo(4,4) = 1.
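    As an illustration of the chain above, here is a minimal Python sketch with hypothetical calibration values (identity rectification, a simple axis-remapping velodyne-to-camera matrix, and a unit-focal-length projection; real calibration files supply the real values):

```python
def matmul(A, B):
    # multiply two matrices given as nested lists
    return [[sum(a*b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

# hypothetical calibration values:
R_rect_00      = [[1,0,0,0],[0,1,0,0],[0,0,1,0],[0,0,0,1]]    # identity rectification
Tr_velo_to_cam = [[0,-1,0,0],[0,0,-1,0],[1,0,0,0],[0,0,0,1]]  # cam x=-velo y, cam y=-velo z, cam z=velo x
P_rect_00      = [[1,0,0,0],[0,1,0,0],[0,0,1,0]]              # 3x4, unit focal length

X = [[5.0], [2.0], [0.0], [1.0]]   # homogeneous point in velodyne coordinates
Y = matmul(P_rect_00, matmul(R_rect_00, matmul(Tr_velo_to_cam, X)))
u, v = Y[0][0] / Y[2][0], Y[1][0] / Y[2][0]   # normalize by depth to get pixel coords
```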

    Tracklet Labels

    Tracklet labels are stored in XML and can be read / written using the
    C++/MATLAB source code provided with this development kit. For compiling
    the code you will need to have a recent version of the boost libraries
    installed.

    Each tracklet is stored as a 3D bounding box of given height, width and
    length, spanning multiple frames. For each frame we have labeled 3D location
    and rotation in bird’s eye view. Additionally, occlusion / truncation
    information is provided in the form of averaged Mechanical Turk label
    outputs. All tracklets are represented in Velodyne coordinates.

    Object categories are classified as following:

    • ‘Car’
    • ‘Van’
    • ‘Truck’
    • ‘Pedestrian’
    • ‘Person (sitting)’
    • ‘Cyclist’
    • ‘Tram’
    • ‘Misc’

    Here, ‘Misc’ denotes all other categories, e.g., ‘Trailers’ or ‘Segways’.

    Reading the Tracklet Label XML Files

    This toolkit provides the header ‘cpp/tracklets.h’, which can be used to
    parse a tracklet XML file into the corresponding data structures. Its usage
    is quite simple, you can directly include the header file into your code
    as follows:

    #include "tracklets.h"

    Tracklets *tracklets = new Tracklets();
    if (!tracklets->loadFromFile(filename.xml)) {
      // loading failed; handle the error here
    }
    ...
    delete tracklets;

    In order to compile this code you will need to have a recent version of the
    boost libraries installed and you need to link against
    ‘libboost_serialization’.

    ‘matlab/readTrackletsMex.cpp’ is a MATLAB wrapper for ‘cpp/tracklets.h’.
    It can be built using make.m. Again, you need to link against
    ‘libboost_serialization’, which might be problematic on newer MATLAB
    versions due to MATLAB’s internal definitions of libstdc, etc. The latest
    MATLAB version we know of that works on Linux is 2008b. This is because
    MATLAB has changed its pointer representation.

    Of course you can also directly parse the XML file using your preferred
    XML parser. If you create another useful wrapper for the header file
    (e.g., for Python), we would be more than happy if you could share it
    with us.

    Demo Utility for projecting Tracklets into Images

    In ‘matlab/run_demoTracklets.m’ you find a demonstration script that reads
    tracklets and projects them as 2D/3D bounding boxes into the images. You
    will need to compile the MATLAB wrapper above in order to read the tracklets.
    For further instructions, please have a look at the comments in the
    respective MATLAB scripts and functions.

  • Raw data JSON API

    2021-01-12 11:53:08
    Hello, first of all, thanks for... I just wanted to ask you if there have been plans to have an API for the raw data as JSON. Regards. (Source: open-source project mikecao/umami)
  • lazy loading of raw data

    2020-12-09 01:54:12
    I imagine reading just the metadata first (potentially using the tdms_index if present) and on calling channel_data only loading the raw data of this channel. Any thoughts? (Source: open-source project ...)
  • Xmp allow raw data

    2020-12-28 10:16:03
    I am not beholden to using byte[] for the raw data. I would be happy to change it to XDocument if it would make more sense. This seemed the easiest and most flexible way ...
  • When importing mzTab there is a checkbox to also import the raw data. In my case, on Windows, when the .raw and .mzTab are in the same directory, MZmine seems to not find the appropriate .raw ...
  • Reading rawdata with VTK

    2019-12-13 08:53:53


    A recent project required displaying data collected by a device in 3D; after some searching I found that VTK's volume rendering meets this need.
    First, a description of the collected data: the device captures 320 lines per frame, each line consists of 1024 points, and each point is stored as an unsigned char; each acquisition captures 320 frames. That is, each acquisition produces 320 x 1024 x 320 x sizeof(unsigned char) bytes of rawdata.
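    Given those dimensions, a quick sanity check on such a file is to compare its size on disk with the expected byte count (a minimal Python sketch; the path is hypothetical):

```python
import os

lines_per_frame, points_per_line, frames = 320, 1024, 320
bytes_per_point = 1   # unsigned char
expected_size = lines_per_frame * points_per_line * frames * bytes_per_point

def check_raw_size(path):
    # the raw file should contain exactly one byte per sampled point
    return os.path.getsize(path) == expected_size
```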

    Reading rawdata for volume rendering

    VTK provides many image reader classes for the corresponding image formats (such as vtkPNGReader). After some searching, I found that vtkImageReader can read this project's data; the following code shows how (C#):

             vtkImageReader reader = vtkImageReader.New();
             reader.SetFileName("rawdata_path.bin");
             reader.SetNumberOfScalarComponents(1);
             reader.SetDataScalarTypeToUnsignedChar(); // the data is unsigned char
             reader.SetFileDimensionality(3);          // dimensionality of the image
             reader.SetDataExtent(0, 319, 0, 1023, 0, 319);
             // each image is 320 x 1024 pixels, and the last two arguments say
             // there are 320 images, so the x index runs 0..319, y 0..1023, z 0..319
             reader.SetDataSpacing(1, 0.5, 1);
             // pixel spacing in x, y, z; y is compressed so the model is not too long
             reader.Update();

    After that, reader.GetOutputPort() can be added to the downstream processing pipeline. However, after trying this approach I found that even after calling reader.Dispose(), the program still keeps rawdata_path.bin open, so later attempts elsewhere in the program to read that file fail. I do not know why.

    Passing rawData in via IntPtr

    To solve the file-locking problem above, I decided not to call VTK's vtkImageReader directly; instead I first read the data into memory and then pass it to VTK. The corresponding C# code follows (rawDataIntPtr is the handle to rawdata_path.bin after it has been read into memory; the reading itself is omitted):

             vtkImageImport imageImport = vtkImageImport.New();
             imageImport.SetImportVoidPointer(rawDataIntPtr);
             imageImport.SetDataScalarTypeToUnsignedChar();
             imageImport.SetWholeExtent(0, 319, 0, 1023, 0, 319);
             imageImport.SetDataSpacing(1, 0.5, 1); 
             imageImport.SetDataExtentToWholeExtent();         
             imageImport.Update();

    After that, imageImport.GetOutputPort() can be added to the downstream processing pipeline.

  • 17.1.1.6 Creating a Data Snapshot Using Raw Data Files

    If the database is large, copying the raw data files can be more efficient
    than using mysqldump and importing the dump file on each slave, since it
    skips the overhead of updating indexes as the data is reimported.

    With this method, tables in storage engines that have complex caching or
    logging algorithms require extra steps to produce a perfect point-in-time
    snapshot: the initial copy command might leave out cache information and
    logged updates, even if you have acquired a global read lock. How the
    storage engine responds to this depends on its crash recovery abilities.

    This method also does not work reliably if the master and slave have
    different values for ft_stopword_file, ft_min_word_len, or ft_max_word_len.

    If you use InnoDB tables, you can use the mysqlbackup command to produce a
    consistent backup. This command records the log name and offset to be used
    on the slave later. Otherwise, use the cold backup technique to obtain a
    reliable binary snapshot: copy all data files after doing a slow shutdown.

    To create a raw data snapshot of MyISAM tables, you can use standard copy
    tools such as cp or copy, a remote copy tool such as scp or rsync, an
    archiving tool such as zip or tar, or a file system snapshot tool such as
    dump, provided the MySQL data files exist on a single file system. If you
    are replicating only certain databases, copy only the files relating to
    those tables. (For InnoDB, all tables in all databases are stored in the
    system tablespace files, unless you have the innodb_file_per_table option
    enabled.)

    You might want to specifically exclude the following files from your
    archive:

    1. Files relating to the mysql database
    2. The master info repository file
    3. The master's binary log files
    4. Any relay log files

    To get the most consistent results with a raw data snapshot, shut down the
    master server during the process, as follows:

    1. Acquire a read lock and get the master's status
    2. In a separate session, shut down the master server

  • s to save the image to disk it is better for us to return raw data instead of base64 data. This will save some encoding time and file space. (Source: open-source project owencm/javascript-jpeg-...)
  • Efficient TOPS Mode Raw Data Simulator of Extended Scenes
