  • snapshot

    2018-10-22 22:36:00

      ES, HBase, RocksDB, and others all use snapshot techniques for data backup. Below are some materials I found about snapshots, together with my own understanding.

      

      There are several ways to implement a snapshot; here we will only talk about "COW". Not the animal, but "Copy-On-Write".

      When a snapshot is created, only the metadata of the original volume is copied; there is no physical copy of the data, so snapshot creation is extremely fast. Once writes arrive at the original volume, the snapshot starts recording which blocks of the original volume have changed: before new data overwrites old data on the original volume, the old data is first copied into the snapshot's reserved space. This is what makes the snapshot a backup, guaranteeing that all of its data stays consistent with the state at the moment the snapshot was created.

      As for reads from the snapshot: if the requested block has not been modified, the read is redirected to the original volume; if the block has been modified, the copy preserved in the snapshot is read instead. That is why, even after the original volume is damaged, the data backed up in the snapshot can still be used for recovery.
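      The copy-on-write read/write behavior described above can be sketched in a few lines. This is a toy model, not any real storage system's implementation: blocks are dict entries, and the Volume/Snapshot names are invented for illustration.

```python
# Minimal copy-on-write snapshot model (illustrative only).
# Blocks live in a dict; a snapshot stores a block only when
# the origin volume is about to overwrite it.

class Volume:
    def __init__(self, blocks):
        self.blocks = dict(blocks)      # block_id -> data
        self.snapshots = []

    def create_snapshot(self):
        snap = Snapshot(self)           # no data copied: creation is instant
        self.snapshots.append(snap)
        return snap

    def write(self, block_id, data):
        # Before overwriting, preserve the old content in every snapshot
        # that has not yet saved this block (copy-on-write).
        old = self.blocks.get(block_id)
        for snap in self.snapshots:
            if block_id not in snap.saved:
                snap.saved[block_id] = old
        self.blocks[block_id] = data

class Snapshot:
    def __init__(self, origin):
        self.origin = origin
        self.saved = {}                 # only blocks changed after creation

    def read(self, block_id):
        # Modified blocks come from the snapshot's reserved space;
        # untouched blocks are redirected to the origin volume.
        if block_id in self.saved:
            return self.saved[block_id]
        return self.origin.blocks[block_id]

vol = Volume({0: "a", 1: "b"})
snap = vol.create_snapshot()
vol.write(0, "A")                       # old "a" is copied into the snapshot first
print(vol.blocks[0], snap.read(0), snap.read(1))   # A a b
```

      Note how the snapshot never copies block 1: it stays shared with the origin until someone writes to it.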

      

      A mirror split is done so that the mirror volume keeps exactly the state it had at the instant of the split, with no further writes reaching it. After the split, every write I/O to the primary volume is recorded in a bitmap. A bitmap is a bit-per-block file: each bit represents one block on the volume (a sector, or a logical block made up of several sectors). If a block is written after the split, the program flips the corresponding bit in the bitmap file from 0 to 1. Once the backup is complete, the mirror relationship can be re-established; at that point the data on the primary and the mirror are out of sync and must be resynchronized. The program scans the bitmap for all bits set to 1, maps them back to blocks on the volume, and copies only those blocks to the mirror volume, restoring the real-time mirror relationship.
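      The bitmap bookkeeping for the split mirror can be sketched like this. It is a minimal illustration assuming one bit per fixed-size block; the DirtyBitmap name and block layout are made up for the example.

```python
# Toy dirty-block bitmap for split-mirror resync (illustrative only).
# After the mirror is split, each write to the primary flips the block's
# bit; resync copies only the blocks whose bit is 1.

class DirtyBitmap:
    def __init__(self, num_blocks):
        self.bits = bytearray((num_blocks + 7) // 8)

    def mark(self, block):
        # A block on the primary was written after the split.
        self.bits[block // 8] |= 1 << (block % 8)

    def is_dirty(self, block):
        return bool(self.bits[block // 8] & (1 << (block % 8)))

    def dirty_blocks(self, num_blocks):
        return [b for b in range(num_blocks) if self.is_dirty(b)]

bm = DirtyBitmap(16)
bm.mark(3); bm.mark(7)
# Resync: copy only blocks 3 and 7 back to the mirror, then clear the bitmap.
print(bm.dirty_blocks(16))              # [3, 7]
```

      With 4 KiB blocks, one megabyte of bitmap can track roughly 32 GiB of volume, which is why keeping it in memory or in a small file is cheap.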

      After a snapshot is created, the source and the snapshot share a single physical copy of the data until a write occurs; at that point the old data on the source (or newly added data) is written to new storage space. To record and track block changes and copy progress, a bitmap is needed: it determines where the actual copy of a block lives, and whether a read should be served from the source or from the target.

     

      When an instant copy is executed, no data is copied at that moment. Instead, a bitmap is created to record the copy status of each block, and the real physical copy proceeds in the background.

      With a copy-on-write snapshot, no physical data copy happens after the snapshot point; only the metadata describing the physical location of the original data is copied, so snapshot creation is very fast and effectively instantaneous. The snapshot then tracks changes to the original volume (that is, writes to it): whenever a block of the original volume is about to be written, the old block is first read out and written to the snapshot volume, and only then is the original block overwritten with the new data. Data accessed through the snapshot volume therefore always reflects the pre-write state, which guarantees the consistency of our backup.

     

      With a COW implementation, the snapshot does not need to be as large as the original volume. So how large should it be? First, size it according to how much of the original volume's data is expected to change; second, according to how frequently that data is updated. Once the snapshot's space is full of recorded block changes, the snapshot becomes unusable. Of course, if your snapshot is as large as the original volume, or even larger, the snapshot backup can never overflow.

      See http://www.cnetnews.com.cn/2010/0625/1788019.shtml for some other snapshot techniques.

     

      Snapshot creation is somewhat like creating a hard link: there is a brief file-system lock and a pause of the application while the bitmap is set up, but this process is very fast.

     

       When taking multiple snapshots, every snapshot after the first records only the changed parts instead of exporting everything. This is, in effect, incremental backup.
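       The incremental idea can be illustrated with plain dicts standing in for block maps: the first snapshot is exported in full, and each later snapshot exports only the blocks that differ from the previous one. The export_incremental name is made up for this sketch.

```python
# Sketch of incremental snapshot export (illustrative, not any tool's API).
# Only blocks that changed since the previous snapshot leave the machine.

def export_incremental(prev_state, curr_state):
    """Return only the blocks that differ from the previous snapshot."""
    return {b: data for b, data in curr_state.items()
            if prev_state.get(b) != data}

full = {0: "a", 1: "b", 2: "c"}                 # first snapshot: full export
after = {0: "a", 1: "B", 2: "c", 3: "d"}        # state at the second snapshot
delta = export_incremental(full, after)
print(delta)                                     # {1: 'B', 3: 'd'}

# Restoring the second snapshot = replaying the full export plus each delta.
restored = {**full, **delta}
assert restored == after
```

       Restoring a later snapshot means replaying the full export and then each delta in order, which is exactly the full-plus-incremental schedule mentioned below.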

       With backup data, a user can quickly roll back to a chosen snapshot point when something goes wrong. It is advisable to take at least one snapshot per day and to clean up expired snapshots regularly, so that if the business hits a serious error requiring a rollback, a recent snapshot point is available.

       Schedule periodic full backups plus daily incremental backups.

    Reposted from: https://www.cnblogs.com/lnlvinso/p/9833611.html

  • Snapshot issue

    2020-12-30 12:56:31
    E0125 00:01:16.711125 1 snapshot_controller.go:310] createSnapshot [create-default/rbd-pvc-snapshot[56206229-2034-11e9-816a-1062eb3456cc]]: error occurred in createSnapshotOperation: failed to take ...
  • (snapshot1 is created during recording while snapshot2 is created using snapshot1 + non-det.log) I am guessing that during replay, certain information about network devices are not initialized ...
  • vagrant snapshot

    2020-11-22 16:30:22
    vagrant snapshot take [vm-name] [NAME] # take snapshot, labeled by NAME vagrant snapshot list [vm-name] # list snapshots vagrant snapshot back [vm-name] # restore last taken snapshot vagrant ...
  • Snapshot import

    2021-01-09 21:44:35
    s warp snapshot The way to try it: 1. Download snapshot using this branch https://github.com/ethereum/cpp-ethereum/pull/4227 or using Parity (it puts it some dir like Parity/Ethereum/chains/...
  • Snapshot support

    2020-12-09 05:20:11
    ).snapshot('snapshot').exists? client.repository('repo').snapshot('snapshot').get client.repository('repo').snapshot('snapshot').status client.repository('...
  • Currently, the code hardcodes the snapshot description to empty string which I believe is inflexible as users would like the ability to enter a snapshot description to describe the snapshot they have...
  • Snapshot ui

    2020-11-30 07:13:48
    - [x] Add error notification for 2015-04-23 00:53:53 [kloud] ERROR [koding][55382e90affc39b3462467d2] createSnapshot error: total snapshot limit has been reached. Plan limit: 0
  • Snapshot Issues

    2020-12-26 23:48:28
    After the doorbell (Ring Doorbell 2) has a motion alert it is unable to retrieve a snapshot. Eventually, get these errors: [6/30/2020, 9:08:07 AM] [Ring] Snapshot failed to refresh after 70 ...
  • Aptly is smart enough to rename a snapshot in every place it is mentioned. Unfortunately there is at least one place where an old name is kept. It is the source of merged snapshot. Detailed ...
  • The snapshot feature is a great enhancement; however, the ability to merge the snapshot with the layer below it would, I think, be a great enhancement, in addition to the delete function. ...
  • The snapshot and snapshot metadata are inconsistent. Step 1: Synchronize and obtain snapshot metadata, at this time metadata=META-T1, state machine=FSM-T1; Step 2: Synchronize and ...
  • I 2020-03-19T13:48:44.880030631Z 2020-03-19 13:48:44.879 [] [Broker-1-StateReplication-3] DEBUG io.zeebe.logstreams.snapshot - Consume snapshot chunk 000183.sst of snapshot 1211899-2-1584625722347 ...
  • Please review code changes implemented for vSphere snapshot management resources. Resources implemented are: a. Snapshot management (create and delete snapshot) b. Revert snapshot ...
  • SNAPSHOT command line

    2020-05-15 12:06:28

    The list/snapshot command line
    List type: in SequoiaDB, a list is a lightweight command for obtaining the current state of the system, divided into the following types:
    db = new Sdb()
    db.list(SDB_LIST_)
    Snapshot type: a snapshot is a command for obtaining the current state of the system, divided into the following types:
    db = new Sdb()
    db.snapshot(SDB_SNAP_)
    Snapshots can also be accessed with SQL:
    sdb 'db.exec("select * from $SNAPSHOT_SESSION where NodeName in("*")")'

  • Hit an MR OOM while testing HBase data migration; recording it here..../hbase org.apache.hadoop.hbase.snapshot.ExportSnapshot -snapshot click-count-snp -copy-to hdfs://172.16.201.42:8020/hbase -mappers 10 -bandwidth 20 201...

    Hit an MR OOM while testing an HBase data migration. Recording the solution here so I don't forget.

    The exception:
    ./hbase org.apache.hadoop.hbase.snapshot.ExportSnapshot -snapshot click-count-snp -copy-to hdfs://172.16.201.42:8020/hbase -mappers 10 -bandwidth 20
    
    2019-07-09 16:01:29,407 INFO  [main] snapshot.ExportSnapshot: Copy Snapshot Manifest
    2019-07-09 16:01:29,722 WARN  [main] mapreduce.TableMapReduceUtil: The hbase-prefix-tree module jar containing PrefixTreeCodec is not present.  Continuing without it.
    2019-07-09 16:01:30,793 INFO  [main] snapshot.ExportSnapshot: Loading Snapshot 'click-count-snp' hfile list
    2019-07-09 16:01:31,066 INFO  [main] mapreduce.JobSubmitter: number of splits:1
    2019-07-09 16:01:31,173 INFO  [main] mapreduce.JobSubmitter: Submitting tokens for job: job_1562659148784_0003
    2019-07-09 16:01:31,588 INFO  [main] impl.YarnClientImpl: Submitted application application_1562659148784_0003
    2019-07-09 16:01:31,615 INFO  [main] mapreduce.Job: The url to track the job: http://nn1:8088/proxy/application_1562659148784_0003/
    2019-07-09 16:01:31,616 INFO  [main] mapreduce.Job: Running job: job_1562659148784_0003
    2019-07-09 16:01:37,758 INFO  [main] mapreduce.Job: Job job_1562659148784_0003 running in uber mode : false
    2019-07-09 16:01:37,759 INFO  [main] mapreduce.Job:  map 0% reduce 0%
    2019-07-09 16:01:41,811 INFO  [main] mapreduce.Job: Task Id : attempt_1562659148784_0003_m_000000_0, Status : FAILED
    Error: Java heap space
    2019-07-09 16:01:45,851 INFO  [main] mapreduce.Job: Task Id : attempt_1562659148784_0003_m_000000_1, Status : FAILED
    Error: Java heap space
    2019-07-09 16:01:49,872 INFO  [main] mapreduce.Job: Task Id : attempt_1562659148784_0003_m_000000_2, Status : FAILED
    Error: Java heap space
    2019-07-09 16:01:54,895 INFO  [main] mapreduce.Job:  map 100% reduce 0%
    2019-07-09 16:01:54,903 INFO  [main] mapreduce.Job: Job job_1562659148784_0003 failed with state FAILED due to: Task failed task_1562659148784_0003_m_000000
    Job failed as tasks failed. failedMaps:1 failedReduces:0
    
    2019-07-09 16:01:55,004 INFO  [main] mapreduce.Job: Counters: 11
    	Job Counters
    		Failed map tasks=4
    		Launched map tasks=4
    		Other local map tasks=4
    		Total time spent by all maps in occupied slots (ms)=10358
    		Total time spent by all reduces in occupied slots (ms)=0
    		Total time spent by all map tasks (ms)=10358
    		Total vcore-seconds taken by all map tasks=10358
    		Total megabyte-seconds taken by all map tasks=10606592
    	Map-Reduce Framework
    		CPU time spent (ms)=0
    		Physical memory (bytes) snapshot=0
    		Virtual memory (bytes) snapshot=0
    2019-07-09 16:01:55,007 ERROR [main] snapshot.ExportSnapshot: Snapshot export failed
    org.apache.hadoop.hbase.snapshot.ExportSnapshotException: Copy Files Map-Reduce Job failed
    	at org.apache.hadoop.hbase.snapshot.ExportSnapshot.runCopyJob(ExportSnapshot.java:804)
    	at org.apache.hadoop.hbase.snapshot.ExportSnapshot.run(ExportSnapshot.java:997)
    	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
    	at org.apache.hadoop.hbase.snapshot.ExportSnapshot.innerMain(ExportSnapshot.java:1071)
    	at org.apache.hadoop.hbase.snapshot.ExportSnapshot.main(ExportSnapshot.java:1075)
    

    The job kept failing with Java heap space. This is my development cluster, with only three machines, each of which also hosts some critical services, so my only option was to tune the MR job parameters. After adding the following options, the export ran successfully: -Dmapreduce.map.memory.mb=4096 -Dmapreduce.map.java.opts=-Xmx3686m

    ./hbase org.apache.hadoop.hbase.snapshot.ExportSnapshot -Dmapreduce.map.memory.mb=4096 -Dmapreduce.map.java.opts=-Xmx3686m -snapshot click-count-snp -copy-to hdfs://172.16.201.42:8020/hbase -mappers 10 -bandwidth 20
    
    2019-07-09 16:24:02,886 INFO  [main] snapshot.ExportSnapshot: Copy Snapshot Manifest
    2019-07-09 16:24:03,193 WARN  [main] mapreduce.TableMapReduceUtil: The hbase-prefix-tree module jar containing PrefixTreeCodec is not present.  Continuing without it.
    2019-07-09 16:24:04,217 INFO  [main] snapshot.ExportSnapshot: Loading Snapshot 'click-count-snp' hfile list
    2019-07-09 16:24:04,475 INFO  [main] mapreduce.JobSubmitter: number of splits:1
    2019-07-09 16:24:04,603 INFO  [main] mapreduce.JobSubmitter: Submitting tokens for job: job_1562659148784_0006
    2019-07-09 16:24:05,024 INFO  [main] impl.YarnClientImpl: Submitted application application_1562659148784_0006
    2019-07-09 16:24:05,054 INFO  [main] mapreduce.Job: The url to track the job: http://nn1:8088/proxy/application_1562659148784_0006/
    2019-07-09 16:24:05,054 INFO  [main] mapreduce.Job: Running job: job_1562659148784_0006
    2019-07-09 16:24:11,179 INFO  [main] mapreduce.Job: Job job_1562659148784_0006 running in uber mode : false
    2019-07-09 16:24:11,180 INFO  [main] mapreduce.Job:  map 0% reduce 0%
    2019-07-09 16:24:16,233 INFO  [main] mapreduce.Job:  map 100% reduce 0%
    2019-07-09 16:24:16,241 INFO  [main] mapreduce.Job: Job job_1562659148784_0006 completed successfully
    2019-07-09 16:24:16,338 INFO  [main] mapreduce.Job: Counters: 37
    	File System Counters
    		FILE: Number of bytes read=0
    		FILE: Number of bytes written=158069
    		FILE: Number of read operations=0
    		FILE: Number of large read operations=0
    		FILE: Number of write operations=0
    		HDFS: Number of bytes read=6998
    		HDFS: Number of bytes written=6797
    		HDFS: Number of read operations=5
    		HDFS: Number of large read operations=0
    		HDFS: Number of write operations=2
    	Job Counters
    		Launched map tasks=1
    		Other local map tasks=1
    		Total time spent by all maps in occupied slots (ms)=11704
    		Total time spent by all reduces in occupied slots (ms)=0
    		Total time spent by all map tasks (ms)=2926
    		Total vcore-seconds taken by all map tasks=2926
    		Total megabyte-seconds taken by all map tasks=11984896
    	Map-Reduce Framework
    		Map input records=1
    		Map output records=0
    		Input split bytes=201
    		Spilled Records=0
    		Failed Shuffles=0
    		Merged Map outputs=0
    		GC time elapsed (ms)=71
    		CPU time spent (ms)=740
    		Physical memory (bytes) snapshot=325951488
    		Virtual memory (bytes) snapshot=5925187584
    		Total committed heap usage (bytes)=300941312
    	org.apache.hadoop.hbase.snapshot.ExportSnapshot$Counter
    		BYTES_COPIED=6797
    		BYTES_EXPECTED=6797
    		BYTES_SKIPPED=0
    		COPY_FAILED=0
    		FILES_COPIED=1
    		FILES_SKIPPED=0
    		MISSING_FILES=0
    	File Input Format Counters
    		Bytes Read=0
    	File Output Format Counters
    		Bytes Written=0
    2019-07-09 16:24:16,341 INFO  [main] snapshot.ExportSnapshot: Finalize the Snapshot Export
    2019-07-09 16:24:16,348 INFO  [main] snapshot.ExportSnapshot: Verify snapshot integrity
    2019-07-09 16:24:16,378 INFO  [main] snapshot.ExportSnapshot: Export Completed: click-count-snp
    
  • Snapshot Array

    2020-05-14 01:20:11

    Implement a SnapshotArray that supports the following interface:

    • SnapshotArray(int length) initializes an array-like data structure with the given length.  Initially, each element equals 0.
    • void set(index, val) sets the element at the given index to be equal to val.
    • int snap() takes a snapshot of the array and returns the snap_id: the total number of times we called snap() minus 1.
    • int get(index, snap_id) returns the value at the given index, at the time we took the snapshot with the given snap_id

    Example 1:

    Input: ["SnapshotArray","set","snap","set","get"]
    [[3],[0,5],[],[0,6],[0,0]]
    Output: [null,null,0,null,5]
    Explanation: 
    SnapshotArray snapshotArr = new SnapshotArray(3); // set the length to be 3
    snapshotArr.set(0,5);  // Set array[0] = 5
    snapshotArr.snap();  // Take a snapshot, return snap_id = 0
    snapshotArr.set(0,6);
    snapshotArr.get(0,0);  // Get the value of array[0] with snap_id = 0, return 5
    

    Constraints:

    • 1 <= length <= 50000
    • At most 50000 calls will be made to set, snap, and get.
    • 0 <= index < length
    • 0 <= snap_id < (the total number of times we call snap())
    • 0 <= val <= 10^9

    Approach: the catch in this problem is that storing the entire array's history for every snapshot would blow up memory. In fact there is no need to store everything; it suffices to store, for the i-th element, the history of its values. Each index gets its own TreeMap, which keeps entries ordered by key, so looking up the value at a given snap_id is an O(log n) search.

    class SnapshotArray {
        // list.get(i) maps snap_id -> value of element i recorded at that snapshot
        private List<TreeMap<Integer, Integer>> list;
        private int id = 0;
        public SnapshotArray(int length) {
            list = new ArrayList<>();
            for(int i = 0; i < length; i++) {
                list.add(new TreeMap<Integer, Integer>());
                list.get(i).put(0, 0);   // every element starts as 0 at snap_id 0
            }
        }
        
        public void set(int index, int val) {
            // Record under the current (not-yet-taken) snap_id; repeated sets
            // before the next snap() simply overwrite the same entry.
            list.get(index).put(id, val);
        }
        
        public int snap() {
            return id++;   // freeze everything recorded so far under this id
        }
        
        public int get(int index, int snap_id) {
            // floorEntry finds the latest value recorded at or before snap_id
            return list.get(index).floorEntry(snap_id).getValue();
        }
    }
    
    /**
     * Your SnapshotArray object will be instantiated and called as such:
     * SnapshotArray obj = new SnapshotArray(length);
     * obj.set(index,val);
     * int param_2 = obj.snap();
     * int param_3 = obj.get(index,snap_id);
     */

     

  • How HDFS SnapShot works

    2017-03-26 20:42:36
    [Overview] In HDFS you can create a snapshot of a directory; once created, no matter how the directory changes afterwards, you can use the snapshot to ...hdfs dfsadmin -allowSnapshot. Enabling the snapshot feature does not automatically save any snapshot; you still have to create one first, via the following
  • vagrant snapshot go NAME # restores to snapshot NAME I'd like to merge "go" and "back" together, to have a more minimalistic API: vagrant snapshot # ...
  • 2227: - feature: refact snapshot related code - feature: rbd snapshot - bugfix: ceph cache iso image - delete disk create snapshot and guest create snapshot - fix auto snapshot - fix disk reset...
  • maven SNAPSHOT

    2020-05-21 11:32:11
    Maven has both a RELEASE and a SNAPSHOT version mechanism. RELEASE mechanism: first check whether the dependency is in the local repository; if not, download it from the central repository or a remote private repository. If the local repository already has it, then regardless of whether the remote private repository (a private Maven server) has an update ...
  • Snapshot volume

    2017-10-10 15:04:59
    After entering the snapshot's name and confirming, the snapshot is generated; you can see that a snapshot named tiantao has been created. Clicking create volume lets you restore the original volume from this snapshot. The corresponding cinder-api log ... the corresponding source code ...
