  • superblock

    2014-11-14 10:17:51
    A brief overview of SUPERBLOCK, an optimized flash storage scheme
  • Superblock

    2016-10-18 09:35:00

    Superblock

    From Linux Raid Wiki

    This page is obsolete; see RAID superblock formats

    Linux raid reserves a bit of space (called a superblock) on each component device. This space holds metadata about the RAID device and allows correct assembly of the array.

    There are several versions of superblocks but they can be split into 3 groups:

    • ancient (pre-0.9)
    • 0.9
    • 1.0 to 1.2

    Information on the on-disk superblock formats can be found here: RAID_superblock_formats

    The ancient superblocks are out of scope for this wiki and aren't used by mdadm and md.

    1.x superblocks

    1.x superblocks are new(ish)

    The version numbers simply indicate where the superblock is stored on the individual component devices.

    Version 1.0 is stored near the end of the device (at least 8K, and less than 12K, from the end). This is useful, especially with RAID-1 devices, because the RAID filesystem and the non-RAID filesystem start in exactly the same place, so if you can't get the RAID up you can often mount the component directly in read-only mode and get the data off. But it has the tiny risk that writing off the end of the filesystem (when it's mounted directly like that, without going through md) will wreck your array.

    Version 1.1 is stored at the start of the device. This eliminates that overwrite risk, but stops you from mounting a component directly without going through md.

    Version 1.2 is like version 1.1 but stores the superblock 4K from the device start. This is particularly useful if you make a RAID array on a whole device, because this misses partition tables, master boot records, and the like.
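    The three placements are easy to express in code. Below is a minimal sketch, assuming only the offsets described above (the exact alignment rule lives in md / mdadm itself); the function and the example device size are my own illustration:

        # Rough sketch of the v1.x superblock placements described above.
        def superblock_offset(version, device_size):
            """Byte offset of the md v1.x superblock on a component device."""
            if version == "1.1":
                return 0                      # at the very start of the device
            if version == "1.2":
                return 4 * 1024               # 4K from the start
            if version == "1.0":
                # near the end: at least 8K, and less than 12K, from the end
                return (device_size - 8 * 1024) & ~4095
            raise ValueError(version)

        print(superblock_offset("1.0", 160 * 10**9))   # e.g. for a 160 GB component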

    In practice the three variants are nearly identical and share nearly all of their code.

    Do note that the in-kernel autodetection (based on partition type 0xFD) only works for version 0.90 superblocks.

    As a workaround, some distributions (Ubuntu and Fedora at least, circa early 2009) include init scripts that start any arrays not brought up by auto-detection, which can include arrays using the newer 1.x superblocks.

    Using Fedora 9 as the example, the initscript file that does this is named /etc/rc.d/rc.sysinit. The command used is:

       # Start any MD RAID arrays that haven't been started yet
       [ -f /etc/mdadm.conf -a -x /sbin/mdadm ] && /sbin/mdadm -As --auto=yes --run
    

    The in-kernel auto-assembly can't handle any of the version 1.x superblocks, and LILO can't boot off them.

    Version 1 superblocks allow for an arbitrarily large internal bitmap. They do this by explicitly giving the data-start and data-size. So e.g. a 1.1 superblock could set

       data-start==1Gig
       data-size == devicesize minus 1Gig
    

    and put the bitmap after the superblock (which is at the start) and use nearly one Gig for the bitmap.
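    As a toy illustration of that layout (the numbers are made up; only the rule that the bitmap sits between the superblock and data-start comes from the text above):

        GiB = 1 << 30
        device_size = 160 * 10**9            # size of the component device, say
        data_start  = 1 * GiB                # data begins 1 GiB into the device
        data_size   = device_size - data_start
        # superblock at offset 0, bitmap anywhere between it and data_start
        print(data_start, data_size)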

    However mdadm isn't quite so accommodating. It should:

    • when creating an array without a bitmap, leave a reasonable amount of space for one to be added in the future (32-64k).
    • when dynamically adding a bitmap, see how much space is available and use up to that much
    • when creating an array with a bitmap, honour any --bitmap-chunk-size or default and reserve an appropriate amount of space.

    I think it might do the first. I think it doesn't do the other two. Maybe in mdadm 2.6 or 2.7.

    Reposted from: https://www.cnblogs.com/2ne1/p/5972086.html

  • Ext Superblock.tpl

    2021-02-06 12:06:22
    Ext Superblock.tpl
  • SuperBlock corruption repair

    2014-07-01 14:33:54
    SuperBlock corruption repair

       Source: http://blog.sina.com.cn/s/blog_709df8c80100ldup.html

    What is a superblock?
    For a detailed description of the superblock's layout and contents, see http://homepage.smc.edu/morgan_david/cs40/analyze-ext2.htm
     
    You may have run into a situation like this:
    [root@dhcp-0-142 ~]# mount /dev/sdb1 /mnt/sdb1
    mount: wrong fs type, bad option, bad superblock on /dev/sdb1,
    missing codepage or other error
    In some cases useful info is found in syslog - try
    dmesg | tail or so
    [root@dhcp-0-142 ~]#
     
    In this situation there is a high probability that the superblock is corrupted.

    For example, /dev/sdb1 currently fails to mount exactly as shown above; you can proceed as follows:
     
    [root@dhcp-0-142 ~]# dumpe2fs /dev/sdb1
    dumpe2fs 1.39 (29-May-2006)
    Filesystem volume name: <none>
    Last mounted on: <not available>
    Filesystem UUID: c3800df3-5c34-4b53-8144-94029b5736d8
    Filesystem magic number: 0xEF53
    Filesystem revision #: 1 (dynamic)
    Filesystem features: has_journal resize_inode dir_index filetype sparse_super
    Default mount options: (none)
    Filesystem state: clean with errors
    Errors behavior: Continue
    Filesystem OS type: Linux
    Inode count: 0
    Block count: 0
    Reserved block count: 0
    Free blocks: 20926971
    Free inodes: 4705752
    First block: 1
    Block size: 1024
    Fragment size: 1024
    Reserved GDT blocks: 256
    Blocks per group: 8192
    Fragments per group: 8192
    Inodes per group: 2008
    Inode blocks per group: 251
    Filesystem created: Tue Oct 7 19:18:08 2008
    Last mount time: n/a
    Last write time: Tue Oct 7 19:29:39 2008
    Mount count: 0
    Maximum mount count: 20
    Last checked: Tue Oct 7 19:18:08 2008
    Check interval: 15552000 (6 months)
    Next check after: Sun Apr 5 19:18:08 2009
    Reserved blocks uid: 0 (user root)
    Reserved blocks gid: 0 (group root)
    First inode: 11
    Inode size: 128
    Journal inode: 8
    Default directory hash: tea
    Directory Hash Seed: 7f7e1c41-5cae-4f23-9873-877991751ccb
    Journal backup: inode blocks
    dumpe2fs: Illegal inode number while reading journal inode
    [root@dhcp-0-142 ~]#
    So I do the following:
    [root@dhcp-0-142 ~]# fsck -b 8193 /dev/sdb1
    fsck 1.39 (29-May-2006)
    e2fsck 1.39 (29-May-2006)
    /dev/sdb1 was not cleanly unmounted, check forced.
    Pass 1: Checking inodes, blocks, and sizes
    Pass 2: Checking directory structure
    Pass 3: Checking directory connectivity
    Pass 4: Checking reference counts
    Pass 5: Checking group summary information

    /dev/sdb1: ***** FILE SYSTEM WAS MODIFIED *****
    /dev/sdb1: 11/26104 files (9.1% non-contiguous), 8966/104388 blocks
    [root@dhcp-0-142 ~]# mount /dev/sdb1 /mnt/sdb1
    [root@dhcp-0-142 ~]# ls /mnt/sdb1
    lost+found
    [root@dhcp-0-142 ~]#
     

    The superblock has been repaired and the filesystem mounts normally. This works because, when an ext2/3 filesystem is created, several backup copies of the superblock are stored at fixed locations.
     
    [root@dhcp-0-142 ~]# dumpe2fs /dev/sdb1 | grep --before-context=1 superblock
    dumpe2fs 1.39 (29-May-2006)
    Group 0: (Blocks 1-8192)
    Primary superblock at 1, Group descriptors at 2-2
    --
    Group 1: (Blocks 8193-16384)
    Backup superblock at 8193, Group descriptors at 8194-8194
    --
    Group 3: (Blocks 24577-32768)
    Backup superblock at 24577, Group descriptors at 24578-24578
    --
    Group 5: (Blocks 40961-49152)
    Backup superblock at 40961, Group descriptors at 40962-40962
    --
    Group 7: (Blocks 57345-65536)
    Backup superblock at 57345, Group descriptors at 57346-57346
    --
    Group 9: (Blocks 73729-81920)
    Backup superblock at 73729, Group descriptors at 73730-73730
    [root@dhcp-0-142 ~]#

    From the output above we can see that backup superblocks are stored in block groups 1, 3, 5, 7 and 9.
    What is a block group? To cut down on disk seeks, ext2/3 does not keep the inode table and related metadata in one place; it splits them into groups, sized by the Inodes per group value, and stores one set per group.
     
    The Inodes per group value can be checked with this command:
    [root@dhcp-0-142 ~]# dumpe2fs /dev/sdb1 | grep 'Inodes per group'
    dumpe2fs 1.39 (29-May-2006)
    Inodes per group: 2008
    [root@dhcp-0-142 ~]#
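    Where those backup copies land can be computed from the group geometry alone. Below is a small Python sketch (my own illustration, not e2fsprogs code) of the sparse_super placement rule, using the geometry shown above (1 KiB blocks, first block 1, 8192 blocks per group): with sparse_super, only group 0, group 1 and groups whose number is a power of 3, 5 or 7 carry a copy.

        def is_backup_group(g):
            """True if group g holds a superblock copy under sparse_super."""
            if g <= 1:
                return True
            for base in (3, 5, 7):
                n = base
                while n < g:
                    n *= base
                if n == g:
                    return True
            return False

        def backup_superblocks(block_count, blocks_per_group=8192, first_block=1):
            groups = (block_count + blocks_per_group - 1) // blocks_per_group
            return [first_block + g * blocks_per_group
                    for g in range(1, groups) if is_backup_group(g)]

        # For the filesystem above (104388 blocks, per the fsck output):
        print(backup_superblocks(104388))   # [8193, 24577, 40961, 57345, 73729]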
    Some filesystems have a superblock so badly damaged that even dumpe2fs and tune2fs cannot read any information ( below, dd is used to deliberately overwrite the primary superblock to reproduce this ):
    [root@dhcp-0-175 ~]# dd if=/dev/zero of=/dev/sdb1 bs=1 count=1024 seek=1024
    1024+0 records in
    1024+0 records out
    1024 bytes (1.0 kB) copied, 0.0228272 seconds, 44.9 kB/s
    [root@dhcp-0-175 ~]# dumpe2fs /dev/sdb1
    dumpe2fs 1.39 (29-May-2006)
    dumpe2fs: Bad magic number in super-block while trying to open /dev/sdb1
    Couldn't find valid filesystem superblock.
    [root@dhcp-0-175 ~]# tune2fs -l /dev/sdb1
    tune2fs 1.39 (29-May-2006)
    tune2fs: Bad magic number in super-block while trying to open /dev/sdb1
    Couldn't find valid filesystem superblock.
    [root@dhcp-0-175 ~]#
    At this point there is no way to learn the locations of the backup superblocks from dumpe2fs or tune2fs.
    We can try to guess where a superblock sits from the superblock's on-disk structure ( see the superblock layout reference linked above ).
    From the superblock structure we know that the volume label ( volume name ) is stored inside the superblock, so if the filesystem has a label set, we can try to locate a superblock by searching for the label.

    Dump the filesystem with hexdump:
    [root@dhcp-0-175 ~]# hexdump -C /dev/sdb1 > /var/sdb1.hexdump
    [root@dhcp-0-175 ~]#

    We already know that the label of /dev/sdb1 is sdb1 ( if the label is unknown, or no label was ever set, this approach is not going to help ).
    Searching the dump for sdb1 turns up a match, and we guess that this is part of a backup superblock.
    The label starts at offset 0x18000078. Since the volume name field sits at offset 0x78 inside the superblock, the backup superblock presumably starts at 0x18000078 - 0x78 = 0x18000000.
    The block-size field occupies [0x18, 0x1C) of the superblock; here its value is 0x0002, which gives a block size of 0x0400 x 2^0x0002 = 0x1000 = 4096
    ( the relation between the value n at [0x18, 0x1C) and the block size is blocksize = 0x0400 x 2^n ).
    The backup superblock's block number is therefore offset / blocksize, i.e. 0x18000000 / 0x1000 = 0x18000 = 98304.
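    The same arithmetic as a few lines of Python ( the 0x18 and 0x78 offsets are the assumed ext2/3 superblock layout used above ):

        label_offset   = 0x18000078        # where the label "sdb1" shows up in the hexdump
        log_block_size = 0x0002            # value read at offset 0x18 of that superblock

        sb_offset  = label_offset - 0x78            # start of the backup superblock
        block_size = 0x0400 << log_block_size       # 1024 * 2**n = 4096
        print(sb_offset // block_size)              # 98304, the value passed to fsck below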

    So we run:
    [root@dhcp-0-175 ~]# fsck.ext3 -b 98304 /dev/sdb1
    e2fsck 1.39 (29-May-2006)
    sdb1 was not cleanly unmounted, check forced.
    Pass 1: Checking inodes, blocks, and sizes
    Pass 2: Checking directory structure
    Pass 3: Checking directory connectivity
    Pass 4: Checking reference counts
    Pass 5: Checking group summary information

    sdb1: ***** FILE SYSTEM WAS MODIFIED *****
    sdb1: 11/123648 files (9.1% non-contiguous), 8298/246991 blocks
    [root@dhcp-0-175 ~]#

    This gives the filesystem a good chance of being repaired.
    Let's verify:
    [root@dhcp-0-175 ~]# dumpe2fs /dev/sdb1
    dumpe2fs 1.39 (29-May-2006)
    Filesystem volume name: sdb1
    Last mounted on: <not available>
    Filesystem UUID: 0293bd85-b911-43bf-853e-6588b3eaaf39
    Filesystem magic number: 0xEF53
    Filesystem revision #: 1 (dynamic)
    Filesystem features: has_journal resize_inode dir_index filetype sparse_super large_file
    Default mount options: (none)
    Filesystem state: clean
    Errors behavior: Continue
    Filesystem OS type: Linux
    Inode count: 123648
    Block count: 246991
    Reserved block count: 12349
    Free blocks: 238693
    Free inodes: 123637
    First block: 0
    Block size: 4096
    Fragment size: 4096
    Reserved GDT blocks: 60
    Blocks per group: 32768
    Fragments per group: 32768
    Inodes per group: 15456
    Inode blocks per group: 483
    Filesystem created: Wed Oct 8 12:49:09 2008
    Last mount time: n/a
    Last write time: Wed Oct 8 12:52:10 2008
    Mount count: 0
    Maximum mount count: 28
    Last checked: Wed Oct 8 12:52:10 2008
    Check interval: 15552000 (6 months)
    Next check after: Mon Apr 6 12:52:10 2009
    Reserved blocks uid: 0 (user root)
    Reserved blocks gid: 0 (group root)
    First inode: 11
    Inode size: 128
    Journal inode: 8
    Default directory hash: tea
    Directory Hash Seed: 2efa124c-dde6-4046-9181-a05b7e6d182a
    Journal backup: inode blocks
    Journal size: 16M 
      Group 0: (Blocks 0-32767)
    Primary superblock at 0, Group descriptors at 1-1
    Reserved GDT blocks at 2-61
    Block bitmap at 62 (+62), Inode bitmap at 63 (+63)
    Inode table at 64-546 (+64)
    28113 free blocks, 15445 free inodes, 2 directories
    Free blocks: 4655-32767
    Free inodes: 12-15456
    Group 1: (Blocks 32768-65535)
    Backup superblock at 32768, Group descriptors at 32769-32769
    Reserved GDT blocks at 32770-32829
    Block bitmap at 32830 (+62), Inode bitmap at 32831 (+63)
    Inode table at 32832-33314 (+64)
    32221 free blocks, 15456 free inodes, 0 directories
    Free blocks: 33315-65535
    Free inodes: 15457-30912
    Group 2: (Blocks 65536-98303)
    Block bitmap at 65536 (+0), Inode bitmap at 65537 (+1)
    Inode table at 65538-66020 (+2)
    32283 free blocks, 15456 free inodes, 0 directories
    Free blocks: 66021-98303
    Free inodes: 30913-46368
    Group 3: (Blocks 98304-131071)
    Backup superblock at 98304, Group descriptors at 98305-98305
    Reserved GDT blocks at 98306-98365
    Block bitmap at 98366 (+62), Inode bitmap at 98367 (+63)
    Inode table at 98368-98850 (+64)
    32221 free blocks, 15456 free inodes, 0 directories
    Free blocks: 98851-131071
    Free inodes: 46369-61824
    Group 4: (Blocks 131072-163839)
    Block bitmap at 131072 (+0), Inode bitmap at 131073 (+1)
    Inode table at 131074-131556 (+2)
    32283 free blocks, 15456 free inodes, 0 directories
    Free blocks: 131557-163839
    Free inodes: 61825-77280
    Group 5: (Blocks 163840-196607)
    Backup superblock at 163840, Group descriptors at 163841-163841
    Reserved GDT blocks at 163842-163901
    Block bitmap at 163902 (+62), Inode bitmap at 163903 (+63)
    Inode table at 163904-164386 (+64)
    32221 free blocks, 15456 free inodes, 0 directories
    Free blocks: 164387-196607
    Free inodes: 77281-92736
    Group 6: (Blocks 196608-229375)
    Block bitmap at 196608 (+0), Inode bitmap at 196609 (+1)
    Inode table at 196610-197092 (+2)
    32283 free blocks, 15456 free inodes, 0 directories
    Free blocks: 197093-229375
    Free inodes: 92737-108192
    Group 7: (Blocks 229376-246990)
    Backup superblock at 229376, Group descriptors at 229377-229377
    Reserved GDT blocks at 229378-229437
    Block bitmap at 229438 (+62), Inode bitmap at 229439 (+63)
    Inode table at 229440-229922 (+64)
    17068 free blocks, 15456 free inodes, 0 directories
    Free blocks: 229923-246990
    Free inodes: 108193-123648
    [root@dhcp-0-175 ~]# mount /dev/sdb1 /mnt
    [root@dhcp-0-175 ~]# ls /mnt
    lost+found
    [root@dhcp-0-175 ~]#
    In fact, for a filesystem whose superblock is damaged this badly, the system already provides a very powerful repair option:
    we can use mke2fs -S to rewrite the superblocks.
    [root@dhcp-0-175 /]# mount /dev/sdb1 /mnt/
    mount: you must specify the filesystem type
    [root@dhcp-0-175 /]# mount /dev/sdb1 /mnt/ -t ext3
    mount: wrong fs type, bad option, bad superblock on /dev/sdb1,
    missing codepage or other error
    In some cases useful info is found in syslog - try
    dmesg | tail or so

    [root@dhcp-0-175 /]# mke2fs -S /dev/sdb1
    mke2fs 1.39 (29-May-2006)
    Filesystem label=
    OS type: Linux
    Block size=1024 (log=0)
    Fragment size=1024 (log=0)
    24480 inodes, 97656 blocks
    4882 blocks (5.00%) reserved for the super user
    First data block=1
    Maximum filesystem blocks=67371008
    12 block groups
    8192 blocks per group, 8192 fragments per group
    2040 inodes per group
    Superblock backups stored on blocks:
    8193, 24577, 40961, 57345, 73729

    Writing superblocks and filesystem accounting information: done

    This filesystem will be automatically checked every 37 mounts or
    180 days, whichever comes first. Use tune2fs -c or -i to override.
    [root@dhcp-0-175 /]# mount /dev/sdb1 /mnt/
    [root@dhcp-0-175 /]# cd /mnt
    [root@dhcp-0-175 mnt]# ls
    file0 file14 file20 file27 file33 file4 file46 file52 file59 file65 file71 file78 file84 file90 file97
    file1 file15 file21 file28 file34 file40 file47 file53 file6 file66 file72 file79 file85 file91 file98
    file10 file16 file22 file29 file35 file41 file48 file54 file60 file67 file73 file8 file86 file92 file99
    file100 file17 file23 file3 file36 file42 file49 file55 file61 file68 file74 file80 file87 file93 lost+found
    file11 file18 file24 file30 file37 file43 file5 file56 file62 file69 file75 file81 file88 file94
    file12 file19 file25 file31 file38 file44 file50 file57 file63 file7 file76 file82 file89 file95
    file13 file2 file26 file32 file39 file45 file51 file58 file64 file70 file77 file83 file9 file96
    [root@dhcp-0-175 mnt]#
     
    e2fsck can achieve the same result:
    [root@dhcp-0-175 /]# mount /dev/sdb1 /mnt/ -t ext3
    mount: wrong fs type, bad option, bad superblock on /dev/sdb1,
    missing codepage or other error
    In some cases useful info is found in syslog - try
    dmesg | tail or so

    [root@dhcp-0-175 /]# e2fsck /dev/sdb1
    e2fsck 1.39 (29-May-2006)
    Couldn't find ext2 superblock, trying backup blocks...
    /dev/sdb1 was not cleanly unmounted, check forced.
    Pass 1: Checking inodes, blocks, and sizes
    Pass 2: Checking directory structure
    Pass 3: Checking directory connectivity
    Pass 4: Checking reference counts
    Pass 5: Checking group summary information
    Free blocks count wrong for group #0 (3549, counted=3547).
    Fix<y>? yes

    Free blocks count wrong (88895, counted=88893).
    Fix<y>? yes

    Free inodes count wrong for group #0 (2029, counted=1929).
    Fix<y>? yes

    Free inodes count wrong (24469, counted=24369).
    Fix<y>? yes
     
    /dev/sdb1: ***** FILE SYSTEM WAS MODIFIED *****
    /dev/sdb1: 111/24480 files (1.8% non-contiguous), 8763/97656 blocks
    [root@dhcp-0-175 /]# mount /dev/sdb1 /mnt/ -t ext3
    [root@dhcp-0-175 /]# ls /mnt
    file0 file15 file21 file28 file34 file40 file47 file53 file6 file66 file72 file79 file85 file91 file98
    file1 file16 file22 file29 file35 file41 file48 file54 file60 file67 file73 file8 file86 file92 file99
    file10 file17 file23 file3 file36 file42 file49 file55 file61 file68 file74 file80 file87 file93 lost+found
    file11 file18 file24 file30 file37 file43 file5 file56 file62 file69 file75 file81 file88 file94
    file12 file19 file25 file31 file38 file44 file50 file57 file63 file7 file76 file82 file89 file95
    file13 file2 file26 file32 file39 file45 file51 file58 file64 file70 file77 file83 file9 file96
    file14 file20 file27 file33 file4 file46 file52 file59 file65 file71 file78 file84 file90 file97
    [root@dhcp-0-175 /]# 
     
    When your system cannot boot because of superblock corruption:

    1. Boot from a rescue disk and look at the output of fdisk first. If your partition table looks normal, the chances of recovery are fairly good. If you get a "cannot open /dev/sda2" message, check whether your SCSI controller has been initialised; if not, you can try booting from a Red Hat installation CD. Remember: only look at the partition table, never write to it. Then write down the partition layout in detail.

    2. Try e2fsck /dev/hda2 (without -p, -y or similar options at first) and repair interactively; this also tells you exactly which parts of the filesystem are damaged. If you are lucky, e2fsck gets through and /dev/hda2 is basically repaired. Whether the recovery is 99.9% or 99% depends on how badly the filesystem was damaged, but at this point your data is essentially back. All that is left is to mount it and copy the data off as a precaution.

    3. If e2fsck does not get through (make sure the disk is being driven correctly), don't panic: the superblock is backed up in many places on the disk. It is best to remove the disk and attach it to another working Linux system (again making sure the disk is driven correctly). First run e2fsck /dev/hda2; if the result is the same as before, run e2fsck -b xxx -f /dev/hda2, where xxx is a backup superblock block on the disk, xxx = n*8192+1, n = 1, 2, 3, ... (a sketch of this rule follows after this list). Generally speaking, if the real cause of the crash is superblock corruption, this approach should recover your data. If it still does not get through, move on to the next step.

    4. Use dd. First copy the important data out with dd if=/dev/hda2 of=/tmp/rescue conv=noerror (/tmp/rescue is a file); the destination must of course be a bit larger than the damaged disk or the copy will not fit. The if and of arguments should be adapted to your own situation; this is only an example, so read the man page before using dd. You have already seen your partition table; now find a disk identical to the damaged one (same size and model), partition it exactly like the damaged disk, then use dd to copy everything after the superblock location on the bad disk to the same place on the good disk. With luck, when you boot again you will see your familiar data; people have recovered more than 99% of their data this way. The good thing is that this method (like the ones above) does not touch the data on the damaged disk. If the data still is not back, you have one last option.

    5. The manual page calls this the last-ditch recovery method: use it only after you have tried everything else and nothing has recovered your data, because it carries some risk.
    Attach the disk to a working Linux box and run: # mke2fs -S /dev/hda2 (if your data is on hda2). This command rebuilds only the superblocks and does not touch the inode tables, but there is still some risk. Good luck to you all. Some people also suggested reinstalling the system (without touching the partitions and without formatting) as a last resort; that might work too, but be aware that it is just as risky as mke2fs -S /dev/hd*.
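    A tiny sketch of the "xxx = n*8192 + 1" rule from step 3 (my own illustration: it assumes the old ext2 defaults of 1 KiB blocks, 8192 blocks per group and first data block 1; with sparse_super only some of these groups actually hold a copy, and a 4 KiB-block filesystem uses 32768 blocks per group starting at block 0, so the candidates become n*32768 instead):

        candidates_1k = [n * 8192 + 1 for n in range(1, 10)]   # 8193, 16385, 24577, ...
        candidates_4k = [n * 32768 for n in range(1, 10)]      # 32768, 65536, 98304, ...
        for block in candidates_1k:
            print(f"e2fsck -b {block} -f /dev/hda2")           # commands to try in turn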

    A few suggestions:
    If your disk is not one you can easily rebuild, then right after setting up a new system:
    1. Take pen and paper and record your partition information in detail.
    2. Use mkbootdisk to make a boot disk for the current system and test it, especially if your disk is SCSI.
    3. After creating a filesystem with mke2fs, write down the superblock locations shown on screen.
    4. Use crontab to back up important data regularly. The ext2 filesystem (like other Unix filesystems) is very robust, but you should still be careful.
     
     
    Red Hat's official explanation:
    Resolution:
    You should normally back up a disk before performing any operations on it; before this procedure, copy everything on the disk to another disk as well. In other words, if the faulty disk is 20 GB, you need 20 GB of backup space. The backup command is:
    #dd if=/dev/baddrive of=/storagearea
    Then run the following command against the ( unmounted ) faulty partition to find the backup superblocks:
    #mke2fs -n /dev/badpartition
    When re-running the mke2fs command, the parameters must be set to the ones used when the filesystem was created. If the defaults were used at the time, the following command can be used:
    #mke2fs -n -b 4000 /dev/hdb1
    The output will look like this:
    Filesystem label=
    OS type: Linux
    Block size=1024 (log=0)
    Fragment size=1024 (log=0)
    122400 inodes, 488848 blocks
    24442 blocks (5.00%) reserved for the super user
    First data block=1
    60 block groups
    8192 blocks per group, 8192 fragments per group
    2040 inodes per group
    Superblock backups stored on blocks:
                   8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409
    From the output we can see that superblock backups are stored at blocks 8193, 24577, 40961, 57345, 73729, 204801, 221185 and 401409.
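    Those numbers match the sparse_super placement rule sketched earlier: the backup groups 1, 3, 5, 7, 9, 25, 27 and 49 are 1 and the powers of 3, 5 and 7, and every group here starts at g*8192 + 1. A quick cross-check in Python:

        groups = [1, 3, 5, 7, 9, 25, 27, 49]
        print([g * 8192 + 1 for g in groups])
        # [8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409]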



  • howto change MDADM UUID / superblock, "superblock on /dev/nnn doesn't match others"
    Original thread: http://ubuntuforums.org/archive/index.php/t-410136.html



    djamu
    April 15th, 2007, 12:41 PM
    After switching my server from 6.06 > 7.04 ( I'm skipping 6.10 because of broken sata_uli with the M5281 chipset ) to support my new hardware ( sata_mv + fixed sata_uli on the 2.6.20 kernel ):

    To be on the safe side I removed my RAID
    clean installation of 7.04 +complete update + mdadm install ( before attaching any raid disk )
    halt
    attached raid array
    boot

    array is raid 5 with 4 disks, previously I removed 1 disk
    array was temporarily running in degraded mode ( which should have been fine )

    array refuses to assemble

    root@ubuntu:/# mdadm --assemble /dev/md0 /dev/hdg1 /dev/hdh1 /dev/sda1
    mdadm: superblock on /dev/sda1 doesn't match others - assembly aborted


    brought array back to 6.06 server, same result.... so this means :evil: 7.04 wrote something on /dev/sda1, without noticing or consulting me - it wasn't even mounted, doesn't exist in fstab, I always mount manually -

    it's similar ( yet different ) from my previous post
    http://ubuntuforums.org/showthread.php?t=405782&highlight=mdadm


    root@ubuntu:/# mdadm --examine /dev/sda1
    /dev/sda1:
    Magic : a92b4efc
    Version : 00.90.03
    UUID : 8d754c1d:5895bb70:b1e8e808:894665ea
    Creation Time : Sun Sep 10 22:51:43 2006
    Raid Level : raid5
    Device Size : 156288256 (149.05 GiB 160.04 GB)
    Array Size : 468864768 (447.14 GiB 480.12 GB)
    --------snipsnip
    -
    /dev/hdg1:
    Magic : a92b4efc
    Version : 00.90.03
    UUID : 8d754c1d:5895bb70:c89ffdee:815a6cef
    Creation Time : Sun Sep 10 22:51:43 2006
    Raid Level : raid5
    Device Size : 156288256 (149.05 GiB 160.04 GB)
    Array Size : 468864768 (447.14 GiB 480.12 GB)
    -------snipsnip
    -
    /dev/hdh1:
    Magic : a92b4efc
    Version : 00.90.03
    UUID : 8d754c1d:5895bb70:c89ffdee:815a6cef
    Creation Time : Sun Sep 10 22:51:43 2006
    Raid Level : raid5
    Device Size : 156288256 (149.05 GiB 160.04 GB)
    Array Size : 468864768 (447.14 GiB 480.12 GB)

    root@ubuntu:/# mdadm --assemble /dev/md0 /dev/hdg1 /dev/hdh1 /dev/sda1
    mdadm: superblock on /dev/sda1 doesn't match others - assembly aborted


    while the hdh1 and hdg1 drives are ok, the UUID / superblock for sda ( and the removed sdb ) changed.

    How do I fix this ?

    guess here goes my sunday afternoon.

    Thanks a lot !

    djamu
    April 24th, 2007, 05:21 PM
    OK. resolved this,

    Don't know ( yet ) how this happened, but it seems like you can re-create your array
    ( tested this first with dummy arrays on vmware )
    I used one missing device when defining the array, to make sure it didn't start a ( possibly wrong ) resync.
    It's very important to define the EXACT sequence your array previously was in ( makes sense, as it then finds its spare blocks where they were before ),
    do:
    mdadm -E /dev/sda1 ( or whatever device + partition you're using ), and do this for every device in the array

    root@feisty-server:/# mdadm -E /dev/sda1

    ----snip

    Number Major Minor RaidDevice State
    this 0 8 1 0 active sync /dev/sda1

    0 0 8 1 0 active sync /dev/sda1
    1 1 0 0 1 faulty removed
    2 2 22 1 2 active sync /dev/hdc1
    3 3 22 65 3 active sync /dev/hdd1


    and then do

    mdadm --create /dev/md0 --assume-clean --level=5 --raid-devices=4 /dev/sda1 missing /dev/hdc1 /dev/hdd1


    it gave me some info about the previous state of the array, and asked me if I really wanted to continue > Yes

    mounted it & voila it worked again :)

    ( tested this on vmware without the "--assume-clean" flag, and even with --zero-superblock, but I didn't dare to do this with my real array - guess that should be ok )

    if you give the wrong sequence >

    /dev/sda1 missing /dev/hdd1 /dev/hdc1
    or
    /dev/sda1 /dev/hdc1 /dev/hdd1 missing


    the array will still be created but will refuse to mount. DO NOT RUN FSCK on the MD device as it will definitely kill your data. Just try again with another sequence
    ( the underlying physical device blocks have actually nothing to do with the actual filesystem of the MD device )
    As I was studying the subject, I noticed that there's a lot of confusion regarding the MD superblocks, just keep in mind that both the physical device (as part of the array ) & the MD device ( the complete array with filesystem ) have superblocks....

    hope it helps someone.

    Gruelius
    April 29th, 2007, 04:56 AM
    That worked for me, however after I rebooted mdadm tells me that it can't find any of the devices.

    julius@tuxserver:~$ sudo mdadm --assemble /dev/md0
    mdadm: no devices found for /dev/md0


    julius@tuxserver:~$ sudo mdadm -E /dev/hdb/dev/hdb:
    Magic : a92b4efc
    Version : 00.90.00
    UUID : daf47178:4eba9cde:1ed6dcb2:94163062
    Creation Time : Sun Apr 29 15:55:56 2007
    Raid Level : raid5
    Device Size : 195360896 (186.31 GiB 200.05 GB)
    Array Size : 781443584 (745.24 GiB 800.20 GB)
    Raid Devices : 5
    Total Devices : 5
    Preferred Minor : 0

    Update Time : Sun Apr 29 18:31:17 2007
    State : clean
    Active Devices : 5
    Working Devices : 5
    Failed Devices : 0
    Spare Devices : 0
    Checksum : 1dd6f07c - correct
    Events : 0.12

    Layout : left-symmetric
    Chunk Size : 128K

    Number Major Minor RaidDevice State
    this 0 3 64 0 active sync /dev/hdb

    0 0 3 64 0 active sync /dev/hdb
    1 1 33 0 1 active sync /dev/hde
    2 2 33 64 2 active sync /dev/hdf
    3 3 34 0 3 active sync /dev/hdg
    4 4 34 64 4 active sync /dev/hdh


    and then

    julius@tuxserver:~$ sudo mdadm --assemble /dev/md0 /dev/hdb /dev/hde /dev/hdf /dev/hdg /dev/hdh
    mdadm: superblock on /dev/hde doesn't match others - assembly aborted



    ..sigh.. any ideas?

    djamu
    April 30th, 2007, 08:29 AM

    sure,

    sidenote not really relevant but worth the info:
    > it seems that you're not using partitions ( not that it matters much ), but since there's no partition table, other OSes ( M$ ) might write something ( initialize ) on it, destroying at least a couple of sectors - as I said, not really relevant if those disks never see windows, but still good practice to use partitions -

    the 'sudo mdadm -E /dev/hdb/dev/hdb'
    at the beginning is probably a typo, right?

    just do this for every hd ( and post the output )
    it's very probable that some drive letters changed name ( ex. /dev/hdf became /dev/hdi )
    assembling them with the old drive names won't work in that case.
    ( I got an earlier post about that here:
    http://ubuntuforums.org/showthread.php?t=405782
    This happened after I inserted a removable drive + rebooted while it was inserted. )
    It hasn't happened again since then ( did you recently do a system upgrade? if yes, chances are that MDADM got upgraded to ..... )

    an example:
    note: examining /dev/hdc1 gives a result for /dev/hdg1
    Read this as follows: " The device /dev/hdc1, formerly known as /dev/hdg1 "


    root@ubuntu:/# mdadm -E /dev/hdc1
    /dev/hdc1:
    Magic : a92b4efc
    Version : 00.90.00
    UUID : 23d10d44:d59ed967:79f65471:854ffca8
    Creation Time : Mon Apr 9 21:21:13 2007
    Raid Level : raid0
    Device Size : 0
    Raid Devices : 2
    Total Devices : 2
    Preferred Minor : 1

    Update Time : Mon Apr 9 21:21:13 2007
    State : active
    Active Devices : 2
    Working Devices : 2
    Failed Devices : 0
    Spare Devices : 0
    Checksum : 2e17ac36 - correct
    Events : 0.1

    Chunk Size : 64K

    Number Major Minor RaidDevice State
    this 0 34 1 0 active sync /dev/hdg1

    0 0 34 1 0 active sync /dev/hdg1
    1 1 34 65 1 active sync /dev/hdh1



    To make a long story short: do an mdadm -E for all devices, write down their current & old names & make a table [ raid device nr. / new name / old name ] ( a small sketch of this follows below )

    - if all UUIDs ( of the physical device !, not the MD device ) match, just assemble it using the new names ( use the raid device nr. to define the correct sequence )

    - if UUIDs differ, recreate using the new names & raid device nr. for the correct sequence ( use 1 missing, so your array doesn't start resyncing & possibly wiping a wrongly assembled array; if everything is fine and you're able to mount it, you can re-add the missing disk )
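    ( not mdadm itself, just a sketch of that bookkeeping in Python, using the example devices from this thread: one row per RaidDevice slot as printed by mdadm -E, then the slots sorted to rebuild the --create command in the right order )

        table = {
            # slot: (old name recorded in the superblock, current name)
            0: ("/dev/sda1", "/dev/sda1"),
            1: ("missing",   "missing"),     # the removed disk
            2: ("/dev/hdc1", "/dev/hdc1"),
            3: ("/dev/hdd1", "/dev/hdd1"),
        }
        order = [table[slot][1] for slot in sorted(table)]
        print("mdadm --create /dev/md0 --assume-clean --level=5 "
              f"--raid-devices={len(order)} " + " ".join(order))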

    If in doubt just ask again, I'll give you the correct command

    good luck

    DannyW
    May 12th, 2007, 11:59 AM
    Hello. Nice howto, but unfortunately the UUIDs are messed up again after rebooting.

    I posted a thread earlier today before I found yours:
    http://ubuntuforums.org/showthread.php?t=441040

    Basically, I want to set up a raid5 array consisting of sda1 (500GB), sdc1 (500GB) and md0 (raid0: 200GB+300GB).

    The raid0 array (/dev/md0) created fine. When creating the raid5 array (/dev/md1) it initally started as a degraded, rebuilding array, which is normal for raid5. After a couple of hours when this had finished the raid5 array looked normal.

    After rebooting it initialised with just 2 drives. I could fix this by mdadm --manage /dev/md1 --add /dev/md0, then after a couple of hours of rebuilding it was fine again. Until the reboot.

    After reading this thread I noticed the UUIDs given by mdadm --misc -E /dev/sda1 /dev/sdc1 /dev/md0 were not all the same. md0's was different; it was the same as the UUIDs of the devices of the raid0 array, /dev/md0.

    So I used your method. The /dev/md1 was mountable with /dev/md0 missing, and so I knew it was safe to add /dev/md0 to the array. I did this and then after a couple of hours the array looked fine.

    I rebooted, and now sda1 and md1 share the UUID of the devices in the raid0 array, and sdc1 has a unique UUID.

    I hope I have explained this clearly enough.

    Any help would be greatly appreciated!

    Thank you.

    danny@danny-desktop:~$ cat /etc/mdadm/mdadm.conf
    # mdadm.conf
    #
    # Please refer to mdadm.conf(5) for information about this file.
    #

    # by default, scan all partitions (/proc/partitions) for MD superblocks.
    # alternatively, specify devices to scan, using wildcards if desired.
    DEVICE partitions

    # auto-create devices with Debian standard permissions
    CREATE owner=root group=disk mode=0660 auto=yes

    # automatically tag new arrays as belonging to the local system
    HOMEHOST <system>

    # instruct the monitoring daemon where to send mail alerts
    MAILADDR root

    # definitions of existing MD arrays
    ARRAY /dev/md0 level=raid0 num-devices=2 UUID=2ef71727:a367450b:4f12a4b2:e95043a1
    ARRAY /dev/md1 level=raid5 num-devices=3 UUID=4c4b144b:ae4d69bc:355a5a07:0f3721ab


    # This file was auto-generated on Fri, 11 May 2007 18:08:52 +0100
    # by mkconf $Id: mkconf 261 2006-11-09 13:32:35Z madduck $

    danny@danny-desktop:~$ cat /etc/fstab
    # /etc/fstab: static file system information.
    #
    # <file system> <mount point> <type> <options> <dump> <pass>
    # /dev/sda3
    UUID=99873af1-7d82-476c-975e-7165fedb7cee / ext3 defaults,errors=remount-ro 0 1
    # /dev/sda1
    UUID=56703a2b-449e-4c73-b937-41e5de341a0d /boot ext3 defaults 0 2
    /dev/mapper/vghome-lvhome /home reiserfs defaults,nodev,nosuid 0 2
    # /dev/sda2
    UUID=801702de-257d-481b-b0de-9ad2108893da none swap sw 0 0
    /dev/scd0 /media/cdrom0 udf,iso9660 user,noauto 0 0
    proc /proc proc defaults 0 0


    Note: I noticed during boot there was a message:
    "mdadm: no devices listed in conf file were found"

    Also, I'm using LVM2, though I doubt this is related. /dev/md1 is the only PhysicalVolume in my LVM VolumeGroup.

    djamu
    May 12th, 2007, 03:45 PM
    Basically, I want to set up a raid5 array consisting of sda1 (500GB), sdc1 (500GB) and md0 (raid0: 200GB+300GB).


    mmm, interesting, but if you ask me this is doomed to never work consistently.
    Why ?
    For MD1 to work MD0 has to be completely assembled first, otherwise MD1 will start with 1 device missing ( MD0 )
    You won't benefit ( speed wise ) from that stripe ( remember RAID5 is also a stripe, actually RAID0 + parity ), because your actual speed depends on the slowest device ( like an internet connection ).

    Guess the idea was to have raid 5 with 3 * 500 gb devices.
    Because of the speed issue I mentioned before, it would work equally well with a linear raid,
    which does the same thing as an LVM / EVMS volume.... I prefer using EVMS whenever I can, since it has a lot more options - if you're using a desktop ( Gnome ? ) there is a nice GUI in the repository.

    In your case you'll have to stick with LVM because EVMS doesn't have kernel support ( yet )- I might be wrong on this one regarding the new feisty kernel -

    So instead of using a raid0 for the 2 drives ( 200 + 300 ), use LVM to build a 500 gb device and use that one for your RAID5

    A workaround would be to remove your MD1 device from your mdadm.conf to make sure MD0 gets assembled properly before assembling MD1 manually ( script this :) ) ...... not very handy, but doable...


    After rebooting it initialised with just 2 drives. I could fix this by mdadm --manage /dev/md1 --add /dev/md0, then after a couple of hours of rebuilding it was fine again. Until the reboot.

    So I used your method. The /dev/md1 was mountable with /dev/md0 missing, and so I knew it was safe to add /dev/md0 to the array. I did this and then after a couple of hours the array looked fine.

    I rebooted, and now sda1 and md1 share the UUID of the devices in the raid0 array, and sdc1 has a unique UUID.


    What method ? :) Like I said before, there's no way to tell which device gets assembled first. Use LVM for your third RAID5 device.


    Note: I noticed during boot there was a message:
    "mdadm: no devices listed in conf file were found"


    Just ignore this. ( you're on Feisty, right ? ) It has something to do with the new MDADM version.
    For your info - & contrary to what the manual says - you don't need mdadm.conf ...
    ( actually it depends on the mode mdadm is running in ); your arrays will be assembled as soon as the mdadm kernel module detects them.
    None of my servers has an mdadm.conf ( although I must tell you that those only run dapper & edgy ),
    nor do the arrays appear in fstab, because I prefer to mount them manually ( some cron scripts ).
    I've got a feisty Desktop ( with raid ) which has an automatically generated mdadm.conf; didn't check if you can delete this. Dapper / Edgy & Feisty all use different versions of MDADM ( dapper v 1.xx - got to check this - edgy & feisty v 2.xx )


    Also, I'm using LVM2, though I doubt this is related. /dev/md1 is the only PhysicalVolume in my LVM VolumeGroup.


    huh ? mmmm... maybe you'd better post the following ... you made an LVM volume out of the RAID volume.
    use EVMS instead. way more options, and since it doesn't run ( yet ) at boottime....


    fdisk -l

    cat /proc/mdstat




    ( I'll have to check some things, so expect this reply to get altered )

    cheers

    Jan

    DannyW
    May 13th, 2007, 06:11 AM
    Wow! Thank you very much for the informative and speedy reply!

    The only reason I used LVM on the raid5 was so that I could extend the volume if I ever needed to. But I see this can be done easily with mdadm, which makes more sense. I just wasn't sure if it was possible to remove LVM whilst keeping my data safe, so just left it in place.

    Using LVM for the raid0 makes perfect sense, and I assume this can be done without losing data, as the raid 5 can run degraded whilst I set up the 3rd device.

    Would there then be issues with which starts first, LVM or mdadm?

    Some good news is, I again used information from your earlier post to stop the array and recreate it and, for now, all is working ok.

    I stopped the arrays (this time both of them, previously I only did the raid5), zeroed the super blocks and recreated with the same device order.

    mdadm --manage /dev/md1 --stop
    mdadm --manage /dev/md0 --stop
    mdadm --zero-superblock /dev/sdb1
    mdadm --zero-superblock /dev/sdd1
    mdadm --zero-superblock /dev/sda1
    mdadm --zero-superblock /dev/sdc1
    mdadm --zero-superblock /dev/md0
    mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb1 /dev/sdd1
    mdadm --create /dev/md1 --level=5 --raid-devices=3 /dev/sda1 /dev/sdc1 /dev/md0

    After the couple hour rebuild things are ok, I haven't rebooted many times since I completed this step, so I'm not very confident about it holding up.

    Meanwhile, I shall do some research on EVMS, to see what it can offer me.

    I can switch md0 over to LVM easily, but I don't know if I can remove LVM, keeping my data in place, I'll try to find some info on this too.

    Once again, thank you for that great response! I may be back in touch very soon when it breaks ;)

    Kind regards,
    Danny

    djamu
    May 13th, 2007, 06:48 AM
    np.
    It won't break that easy, as long as it runs. :lol:.

    Moreover, one of the problems I had lately was that my linux still recognized a reiserFS on a fully ( not quick ! ) formatted NTFS ( which had reiserFS before ).
    Needed to zero "dd" the first GBs ( didn't have the patience to do a full zero dd )
    http://ubuntuforums.org/showthread.php?t=422549
    It's actually quite reassuring that these things are so persistent :-/

    Just make sure you post your results, others might benefit from it too.


    Jan

                


  • superblock repair

    2012-06-29 11:31:29

    Howto Recover a Linux Partition after a Superblock corruption?

    A superblock in Linux saves information about the file system, such as the file system type, size, status, etc. A file system cannot be mounted if its superblock is corrupted. Superblock corruption can occur for various reasons, such as an abnormal shutdown due to power failure, virus infection, file system corruption, etc.

    When a superblock is corrupted, you receive a "can't read superblock" error message while accessing the file system. For example, if you try to access a Linux ext3 partition, say /dev/sda3, you will receive the following message:

    /dev/sda3: Input/output error
    mount: /dev/sda3: can’t read superblock

    The Linux ext3 file system automatically maintains backup copies of the superblock at various locations. In cases such as these, you have to restore the superblock from one of these backup locations to retrieve the data.

    Note: You should unmount the partition before performing this task.

    First, find / list the superblock locations of the file system /dev/sda3 (we are using sda3 as an example, your partition may be different)

    # dumpe2fs /dev/sda3 | grep superblock
     dumpe2fs 1.39 (29-May-2006)
     Primary superblock at 1, Group descriptors at 2-2
     Backup superblock at 8193, Group descriptors at 8194-8194
     Backup superblock at 24577, Group descriptors at 24578-24578
     Backup superblock at 40961, Group descriptors at 40962-40962
     Backup superblock at 57345, Group descriptors at 57346-57346
     Backup superblock at 73729, Group descriptors at 73730-73730

    Now, check and repair (fsck) the file system using an alternate superblock, here #24577. BTW, try a superblock from another location if this one doesn't work.

    # fsck -b 24577 /dev/sda3
     fsck 1.39 (29-May-2006)
     e2fsck 1.39 (29-May-2006)
     /dev/sda3 was not cleanly unmounted, check forced.
     Pass 1: Checking inodes, blocks, and sizes
     Pass 2: Checking directory structure
     Pass 3: Checking directory connectivity
     Pass 4: Checking reference counts
     Pass 5: Checking group summary information
     Free blocks count wrong for group #0 (3553, counted=513).
     Fix? yes
    Free blocks count wrong for group #1 (7681, counted=5059).
     Fix? yes
    Free blocks count wrong for group #19 (7939, counted=7697).
     Fix? yes
    /boot: ***** FILE SYSTEM WAS MODIFIED *****
     /boot: 35/50200 files (8.6% non-contiguous), 17906/200780 blocks

    Now, mount the partition once the file system check is over:

    # mount /dev/sda3 /mnt

    Once the partition is mounted, you can retrieve the files from /mnt:

    # mkdir backup
    # cd /mnt
    # cp filename /backup/

    BTW, it is always good to keep a backup of your data instead of finding yourself in such situations.
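    The "try another location if one doesn't work" advice above is easy to script. A minimal sketch (the device name and backup list are taken from this example; e2fsck exit codes 0 and 1 mean "clean" and "errors corrected"):

        import subprocess

        backups = [8193, 24577, 40961, 57345, 73729]   # from `dumpe2fs | grep superblock`
        for block in backups:
            rc = subprocess.run(["e2fsck", "-y", "-b", str(block), "/dev/sda3"]).returncode
            if rc in (0, 1):
                print(f"repaired using the backup superblock at block {block}")
                break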

    Source: ITPUB blog, http://blog.itpub.net/15747463/viewspace-734119/


  • ubifs - superblock

    2015-01-14 08:15:00
    ubifs superblock - 1 superblock node. - stores system data - the superblock node is rewritten only if an automatic resize occurs ( image size resizing ). UBI operations on LEBs are atomic, to guarantee that data updates succeed. ...
  • [AV1] Superblock and Block

    2020-10-18 17:00:59
    Superblock and Block
  • SA - Superblock Definition

    2019-09-30 19:51:45
    FromURL:http://www.linfo.org A superblock is a record of the characteristics of a filesystem, including its size, the block size, the empty and the filled blocks and their respective counts, ...
  • ubifs- superblock

    2013-05-10 08:54:01
    The first LEB is not LEB one, it is LEB zero. LEB zero stores the superblock node. The superblock node contains file system parameters that change rarely if at all. For example, the flash geometry (e
  • SuperBlock corruption repair - What is a superblock? For a detailed description of the superblock's layout and contents, see http://homepage.smc.edu/morgan_david/cs40/analyze-ext2.htm. You may have run into a situ...
  • Repairing an ext3 superblock with dd

    2020-01-29 00:01:02
    The ext3 filesystem on a disk partition ...linux-d4xo:~ # dumpe2fs /dev/sdc1 | grep -i superblock dumpe2fs 1.42.11 (09-Jul-2014) Primary superblock at 0, Group descriptors at 1-1 Backup superblock at 32768, Group descri...
  • UBIFS on-disk layout: the superblock

    2015-08-14 16:11:25
    Last time we gave a brief overview of UBIFS... First, every filesystem has a superblock, and UBIFS is no exception. So where does UBIFS put its superblock on the medium? That's right: in the first LEB (logic erase block). Next, the UBIFS sup
  • When mounting a disk over NFS on Linux, you may hit the error wrong fs type, bad option, bad superblock. Searching online suggests that mount.nfs is not installed; once mount.nfs is installed, the wrong fs type, bad option, bad superblock error...
  • Understanding the superblock

    2016-10-25 21:23:06
    Understanding the superblock, starting from the partition structure: 1. First, understand blocks. For an ext2 (ext3) filesystem, the disk partition is first divided into blocks, and every block within the same ext2 filesystem has the same size. However, for...
  • EXT4: dissecting the superblock structure

    2017-07-06 19:11:46
    An analysis of the superblock information in the Android ext4 filesystem
  • Recovering an ext4 filesystem superblock: 1. Create the ext4 filesystem. [root@localhost ~]# mkfs.ext4 /dev/vdb1 [root@localhost ~]# partprobe /dev/vdb 2. Mount the filesystem. [root@localhost ~]# grep vdb2 /etc/fstab /...
  • Symptom: $ adb root restarting adbd as ...remount of the / superblock failed: Permission denied remount failed. Cause: since Android P, Google enables AVB ( Android Verified Boot ) 2.0; verified boo...
  • Solving superblock problems on external devices

    2018-06-28 10:18:32
    can't read superblock. 1. Query the device's automatically created backup superblocks: mkfs.ext4 -n device #### /dev/sda1  2. Restore from a backup superblock: mkfs.ext4 -b superblocknum device ### /dev/sda1
  • bad superblock on /dev/sda2

    2021-07-03 11:22:54
    mount: /home/pi/disk/disk2: wrong fs type, bad option, bad superblock on /dev/sda2, missing codepage or helper program, or other error. Solution: i@raspberrypi:~ $ sudo e2fsck /dev/sda2 e2fsck
  • To push a compiled module into the /system folder with "adb push", you must first run "adb remount", but on Android 9.0 this fails with "remount of the / superblock failed: Permission denied". Error: C:\Users\Admin>adb root restarting ...
  • Superblock definition: as the name suggests, the superblock mainly stores global information about the current filesystem, including but not limited to the number of inodes, the number of blocks, mount time, mount point, journal information, etc. Superblock data: Figure 1 superblock data and structure, using dumpe2...
