mandrei99
Posted: Tue Jun 14, 2016 10:51 am


Restoring data from a Seagate 2BAY NAS with lost configuration

There are multiple RAID solutions out there, and I prefer a server with hardware RAID, but for convenience I recommend that friends use a commercial solution like the Seagate 2-Bay NAS enclosure.

What is interesting about this NAS is that it uses Linux software RAID, and sometimes the RAID configuration is lost (due to a hard reset or other causes that Seagate has failed to document).

In such cases, the storage array asks to be reconfigured, at the risk of losing all the information it contains.

This article shows how to retrieve the data from a Seagate NAS (the 2-Bay model in this case) before reconfiguring it.

Once the drive is removed from the Seagate NAS and attached to a Debian Linux server (or to a VMware VM as a raw device), Linux detects it as a GPT-partitioned disk:
Code:
root@debian:~# parted -l
...
Model: ATA ST2000DM001-1CH1 (scsi)
Disk /dev/sdc: 2000GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number  Start   End     Size    File system  Name     Flags
1      21.0kB  1000kB  979kB                CLAIM
2      1049kB  1000MB  999MB                RFS1
3      1000MB  2000MB  999MB                RFS2
4      2000MB  2010MB  10.5MB               CONF
5      2010MB  3010MB  1000MB               SWAP
6      3010MB  3021MB  10.5MB               KERN1
7      3021MB  3032MB  11.5MB               KERN2
8      3032MB  3330MB  298MB                CRFS
9      3330MB  4354MB  1023MB               FWUPGR
10      4354MB  2000GB  1996GB               primary



We see a number of partitions inside the GPT disk. Now let's check the type of each one:
Code:
root@debian:~# blkid
/dev/sda5: UUID="95417c79-c2e5-4a21-8a52-093a8a136c17" TYPE="swap"
/dev/sda1: UUID="f637d6fb-8c42-4d60-97d9-e7c16b441957" TYPE="ext4"
/dev/sdc2: UUID="b17ddcc8-4841-4c14-6102-9952ee17c603" UUID_SUB="b96fc571-63bb-2c55-0d10-a622ed8250e4" LABEL="(none):0" TYPE="linux_raid_member"
/dev/sdc3: UUID="0f8a460f-75a9-d311-68b5-9d850d2ddcd7" UUID_SUB="501a2f98-b5e1-3074-9b13-7c883aa53b03" LABEL="(none):1" TYPE="linux_raid_member"
/dev/sdc4: UUID="dd107fc1-3e78-b3db-6f94-bc1879a2d9ac" UUID_SUB="b8c283f0-f267-ddda-61ce-14ad4f41014f" LABEL="(none):2" TYPE="linux_raid_member"
/dev/sdc5: UUID="90057076-b2df-63ea-59ab-8c95a3d2f2d7" UUID_SUB="ec2f664c-c1eb-0bf3-3927-74f57f2cc9c7" LABEL="(none):3" TYPE="linux_raid_member"
/dev/sdc6: UUID="85726f0c-484d-174b-72c4-f87850327f58" UUID_SUB="9391b355-1e6c-09f9-dd61-b9bea2d29376" LABEL="(none):4" TYPE="linux_raid_member"
/dev/sdc7: UUID="02b6c204-4985-c2e2-d55a-cc0137415082" UUID_SUB="64c3b6d2-7276-0c68-2eab-64a0a089e84b" LABEL="(none):5" TYPE="linux_raid_member"
/dev/sdc8: UUID="d346a3ef-0154-9bc8-4fa6-80bbe4189a9f" UUID_SUB="1bc9f982-6010-e5c5-d04f-1ec6dafb4b89" LABEL="(none):6" TYPE="linux_raid_member"
/dev/sdc9: UUID="e2bd6e26-3b73-be7b-8c8d-d8320a471ee2" UUID_SUB="dfa9316f-5a8e-3a45-b241-f3d20c22fdfd" LABEL="(none):7" TYPE="linux_raid_member"
/dev/sdc10: UUID="a0fad42e-78a7-49ea-448e-a0a6b5821d8c" UUID_SUB="f510c691-dd80-b25f-a88f-b63f1c5bcbb3" LABEL="BA-00107538E644:8" TYPE="linux_raid_member"
/dev/sr0: LABEL="Debian 7.4.0 amd64 1" TYPE="iso9660"
/dev/sdb: UUID="1c440abc-09e2-4c07-8fa3-7df8e481291a" TYPE="ext4"


All of these partitions are Linux RAID members. We use mdadm to assemble these RAID arrays (if it is not installed: apt-get install mdadm).

Code:
root@debian:~# /etc/init.d/mdadm start
root@debian:~# mdadm -A -s
mdadm: /dev/md/0 has been started with 1 drive (out of 4).
mdadm: /dev/md/1 has been started with 1 drive (out of 4).
mdadm: /dev/md/2 has been started with 1 drive (out of 4).
mdadm: /dev/md/3 has been started with 1 drive (out of 4).
mdadm: /dev/md/4 has been started with 1 drive (out of 4).
mdadm: /dev/md/5 has been started with 1 drive (out of 4).
mdadm: /dev/md/6 has been started with 1 drive (out of 4).
mdadm: /dev/md/7 has been started with 1 drive (out of 4).
mdadm: /dev/md/8 has been started with 1 drive (out of 2).

root@debian:~# cat /proc/mdstat
Personalities : [raid1]
md8 : active raid1 sdc10[1]
      1949260819 blocks super 1.2 [2/1] [_U]

md7 : active raid1 sdc9[1]
      999412 blocks super 1.2 [4/1] [_U__]

md6 : active raid1 sdc8[1]
      290804 blocks super 1.2 [4/1] [_U__]

md5 : active raid1 sdc7[1]
      11252 blocks super 1.2 [4/1] [_U__]

md4 : active raid1 sdc6[1]
      10228 blocks super 1.2 [4/1] [_U__]

md3 : active raid1 sdc5[1]
      976884 blocks super 1.2 [4/1] [_U__]

md2 : active raid1 sdc4[1]
      10228 blocks super 1.2 [4/1] [_U__]

md1 : active raid1 sdc3[1]
      975860 blocks super 1.2 [4/1] [_U__]

md0 : active raid1 sdc2[1]
      975860 blocks super 1.2 [4/1] [_U__]

unused devices: <none>
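
If one of the arrays refuses to start with "mdadm -A -s", the RAID superblock of an individual member can be inspected and the array force-assembled by hand. A minimal sketch, assuming the same device names as above (the exact member/array pairing may differ on your disk):
Code:
root@debian:~# mdadm --examine /dev/sdc10          # show metadata version, array UUID, RAID level and device role
root@debian:~# mdadm -A --run /dev/md8 /dev/sdc10  # force-start the degraded data array from its single member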


Running blkid again, we see something interesting: one of the assembled arrays is actually a Logical Volume Manager sub-container (an LVM Physical Volume):
Code:
root@debian:~# blkid
/dev/sda5: UUID="95417c79-c2e5-4a21-8a52-093a8a136c17" TYPE="swap"
/dev/sda1: UUID="f637d6fb-8c42-4d60-97d9-e7c16b441957" TYPE="ext4"
/dev/sdc2: UUID="b17ddcc8-4841-4c14-6102-9952ee17c603" UUID_SUB="b96fc571-63bb-2c55-0d10-a622ed8250e4" LABEL="(none):0" TYPE="linux_raid_member"
/dev/sdc3: UUID="0f8a460f-75a9-d311-68b5-9d850d2ddcd7" UUID_SUB="501a2f98-b5e1-3074-9b13-7c883aa53b03" LABEL="(none):1" TYPE="linux_raid_member"
/dev/sdc4: UUID="dd107fc1-3e78-b3db-6f94-bc1879a2d9ac" UUID_SUB="b8c283f0-f267-ddda-61ce-14ad4f41014f" LABEL="(none):2" TYPE="linux_raid_member"
/dev/sdc5: UUID="90057076-b2df-63ea-59ab-8c95a3d2f2d7" UUID_SUB="ec2f664c-c1eb-0bf3-3927-74f57f2cc9c7" LABEL="(none):3" TYPE="linux_raid_member"
/dev/sdc6: UUID="85726f0c-484d-174b-72c4-f87850327f58" UUID_SUB="9391b355-1e6c-09f9-dd61-b9bea2d29376" LABEL="(none):4" TYPE="linux_raid_member"
/dev/sdc7: UUID="02b6c204-4985-c2e2-d55a-cc0137415082" UUID_SUB="64c3b6d2-7276-0c68-2eab-64a0a089e84b" LABEL="(none):5" TYPE="linux_raid_member"
/dev/sdc8: UUID="d346a3ef-0154-9bc8-4fa6-80bbe4189a9f" UUID_SUB="1bc9f982-6010-e5c5-d04f-1ec6dafb4b89" LABEL="(none):6" TYPE="linux_raid_member"
/dev/sdc9: UUID="e2bd6e26-3b73-be7b-8c8d-d8320a471ee2" UUID_SUB="dfa9316f-5a8e-3a45-b241-f3d20c22fdfd" LABEL="(none):7" TYPE="linux_raid_member"
/dev/sdc10: UUID="a0fad42e-78a7-49ea-448e-a0a6b5821d8c" UUID_SUB="f510c691-dd80-b25f-a88f-b63f1c5bcbb3" LABEL="BA-00107538E644:8" TYPE="linux_raid_member"
/dev/sr0: LABEL="Debian 7.4.0 amd64 1" TYPE="iso9660"
/dev/sdb: UUID="1c440abc-09e2-4c07-8fa3-7df8e481291a" TYPE="ext4"
/dev/md0: UUID="2e6aaf8c-96c9-4abd-8ae1-905a555354a8" TYPE="ext4"
/dev/md1: UUID="aa1e0952-dd36-4a53-aeca-fde7e0b540d0" TYPE="ext4"
/dev/md2: UUID="c05422c1-5689-4e33-83ae-7b24289a1af3" TYPE="ext4"
/dev/md3: TYPE="swap"
/dev/md4: UUID="d1201497-ca5a-4b4b-950e-e1867daeecba" TYPE="ext4"
/dev/md5: UUID="940adde8-33ee-400b-bd4e-f5ad9a0ad8ed" TYPE="ext4"
/dev/md6: UUID="d962443a-a8a0-43e0-b426-969915b9aabc" TYPE="ext4"
/dev/md7: UUID="93bf1b48-339b-4f92-aa21-7288fcd76928" TYPE="ext4"
/dev/md8: UUID="DYtU24-LlHQ-OS7Y-utZJ-Vw2W-0300-v3PzJk" TYPE="LVM2_member"


Let's confirm:
Code:
root@debian:~# pvdisplay
  --- Physical volume ---
  PV Name               /dev/md8
  VG Name               vg8
  PV Size               1.82 TiB / not usable 3.02 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              475893
  Free PE               0
  Allocated PE          475893
  PV UUID               DYtU24-LlHQ-OS7Y-utZJ-Vw2W-0300-v3PzJk

root@debian:~# lvdisplay
  --- Logical volume ---
  LV Path                /dev/vg8/lv8
  LV Name                lv8
  VG Name                vg8
  LV UUID                3L8WzB-vJvV-lKKj-KBNs-2wE8-2xa0-3rr1iP
  LV Write Access        read/write
  LV Creation host, time ,
  LV Status              NOT available
  LV Size                1.82 TiB
  Current LE             475893
  Segments               1
  Allocation             inherit
  Read ahead sectors     8192

root@debian:~# lvscan
  inactive          '/dev/vg8/lv8' [1.82 TiB] inherit
root@debian:~# pvscan
  PV /dev/md8   VG vg8   lvm2 [1.82 TiB / 0    free]
  Total: 1 [1.82 TiB] / in use: 1 [1.82 TiB] / in no VG: 0 [0   ]


So /dev/md8 contains an LVM Physical Volume used by the lv8 Logical Volume belonging to the vg8 Volume Group. The LV is inactive, however.
Code:
root@debian:~# vgchange -ay
  1 logical volume(s) in volume group "vg8" now active
root@debian:~# lvscan
  ACTIVE            '/dev/vg8/lv8' [1.82 TiB] inherit


After activating the LV, we are ready to mount it, but there is a small problem:
Code:
root@debian:~# mount /dev/vg8/lv8 /mnt
mount: wrong fs type, bad option, bad superblock on /dev/mapper/vg8-lv8,
       missing codepage or helper program, or other error
       In some cases useful info is found in syslog - try
       dmesg | tail  or so

root@debian:~# dmesg | tail -2
[ 1105.934090] end_request: I/O error, dev fd0, sector 0
[ 1451.754875] EXT4-fs (dm-0): bad block size 65536


Apparently, Seagate uses an EXT4 filesystem with a large block size (not the standard 4K), and the kernel will not be able to mount it:
Code:
root@debian:~# dumpe2fs -h /dev/vg8/lv8 | grep size
dumpe2fs 1.42.5 (29-Jul-2012)
Filesystem features:      has_journal ext_attr resize_inode dir_index filetype needs_recovery extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
Block size:               65536
Fragment size:            65536
Flex block group size:    16
Inode size:             256
Required extra isize:     28
Desired extra isize:      28
Journal size:             2048M
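
The reason is that, at least with the stock Debian kernel used here, ext4 can only be mounted when the filesystem block size does not exceed the CPU page size, which is 4K on amd64, while this filesystem uses 64K blocks. You can confirm the page size of the rescue machine like this:
Code:
root@debian:~# getconf PAGE_SIZE
4096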


In order to mount an EXT4 filesystem with a 64K block size, we will need to use FUSE (fuseext2):
Code:
root@debian:~# fuseext2 -o ro -o sync_read /dev/vg8/lv8 /mnt/
fuse-umfuse-ext2: version:'0.4', fuse_version:'29' [main (fuse-ext2.c:331)]
fuse-umfuse-ext2: enter [do_probe (do_probe.c:30)]
fuse-umfuse-ext2: leave [do_probe (do_probe.c:55)]
fuse-umfuse-ext2: opts.device: /dev/vg8/lv8 [main (fuse-ext2.c:358)]
fuse-umfuse-ext2: opts.mnt_point: /mnt/ [main (fuse-ext2.c:359)]
fuse-umfuse-ext2: opts.volname:  [main (fuse-ext2.c:360)]
fuse-umfuse-ext2: opts.options: ro,sync_read [main (fuse-ext2.c:361)]
fuse-umfuse-ext2: parsed_options: sync_read,ro,fsname=/dev/vg8/lv8 [main (fuse-ext2.c:362)]
fuse-umfuse-ext2: mounting read-only [main (fuse-ext2.c:378)]
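
At this point the NAS data should show up under /mnt; a quick sanity check (output not shown here, it depends on what was stored on the NAS):
Code:
root@debian:~# df -h /mnt
root@debian:~# ls /mnt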


Now, the data contained on the Seagate NAS is available for restoring to a safe place while the NAS is re-configured.

Note: the filesystem should be mounted read-only, as there is no reason to write to it and I don't trust FUSE too much.
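
A simple way to copy everything off is rsync in archive mode. A minimal sketch, assuming a destination filesystem with enough free space is mounted at /backup (a hypothetical path, adjust it to your setup), followed by the clean-up steps before the disk goes back into the NAS:
Code:
root@debian:~# rsync -aHv --progress /mnt/ /backup/seagate-nas/   # copy the data, preserving permissions and hard links
root@debian:~# umount /mnt                                        # release the FUSE mount when the copy is done
root@debian:~# vgchange -an vg8                                   # deactivate the logical volume
root@debian:~# mdadm --stop --scan                                # stop all assembled md arrays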

Credits go to various internet forums.




