Synology blue LED of death (LED blinking)
I found the solution this way.
I removed ALL the disks and formatted one of them (you'd better use a new one; I had a backup, so I didn't take much risk doing this) with the following parted commands (see the example session after this list):
- parted
- mklabel gpt
- write
- quit
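For reference, the session looks roughly like this. The device name /dev/sdX is a placeholder: replace it with the right disk (check with lsblk first, because mklabel destroys the existing partition table, and parted may ask you to confirm that). Note that parted applies mklabel immediately, so there is no separate write step as there is in fdisk; quitting is enough.
# parted /dev/sdX
(parted) mklabel gpt
(parted) quit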
I inserted this single disk in the last slot of my Synology server and rebooted it. At that point Synology Assistant was able to install a fresh DSM version.
After the DSM installation I chose not to configure a RAID (see https://www.synology.com/en-us/knowledgebase/DSM/tutorial/General/How_to_reset_your_Synology_NAS), then rebooted the Synology.
Once rebooted, I added the 9 other old disks and connected to my Synology over SSH.
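If SSH isn't set up yet, enable the SSH service in DSM first and connect roughly like this; the admin account name and the IP address are placeholders, and on recent DSM versions you need to switch to root to run the mdadm and LVM commands that follow:
$ ssh admin@192.168.1.10   # replace with your admin user and your NAS's IP address
$ sudo -i                  # become root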
Find the RAID information on your disks:
bash-4.3# mdadm --examine /dev/sd[a-z]
mdadm: No md superblock detected on /dev/sda.
mdadm: No md superblock detected on /dev/sdb.
mdadm: No md superblock detected on /dev/sdc.
mdadm: No md superblock detected on /dev/sdd.
mdadm: No md superblock detected on /dev/sde.
mdadm: No md superblock detected on /dev/sdf.
mdadm: No md superblock detected on /dev/sdg.
mdadm: No md superblock detected on /dev/sdh.
mdadm: No md superblock detected on /dev/sdi.
mdadm: No md superblock detected on /dev/sdj
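Nothing is found because the md superblocks live on the partitions, not on the whole-disk devices. If you want to inspect the old arrays' metadata directly, you can examine the partitions instead (a sketch; the partition numbers match the Synology layout that shows up later, sdX1/sdX2/sdX5):
bash-4.3# mdadm --examine /dev/sd[a-z]1 /dev/sd[a-z]5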
The only arrays currently running are the ones the fresh DSM install created on the new disk (sdj):
bash-4.3# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md1 : active raid1 sdj2[0]
2097088 blocks [10/1] [U_________]
md0 : active raid1 sdj1[0]
2490176 blocks [10/1] [U_________]
Trying to assemble the arrays with the scan option:
bash-4.3# mdadm --assemble --scan
It seems to work!
bash-4.3# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md125 : active raid1 sda1[0] sdi1[8] sdh1[7] sdg1[6] sdf1[5] sde1[4] sdd1[3] sdc1[2] sdb1[1]
2490176 blocks [10/9] [UUUUUUUUU_]
md126 : active raid1 sda2[0] sdi2[8] sdh2[7] sdg2[6] sdf2[5] sde2[4] sdd2[3] sdc2[2] sdb2[1]
2097088 blocks [10/9] [UUUUUUUUU_]
md127 : active raid5 sda5[0] sdi5[8] sdh5[7] sdg5[6] sdf5[5] sde5[4] sdd5[3] sdc5[2] sdb5[1]
35120552832 blocks super 1.2 level 5, 64k chunk, algorithm 2 [10/9] [UUUUUUUUU_]
md1 : active raid1 sdj2[0]
2097088 blocks [10/1] [U_________]
md0 : active raid1 sdj1[0]
2490176 blocks [10/1] [U_________]
unused devices: <none>
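Before going further, it can be reassuring to look at one array in detail: mdadm --detail lists the member partitions, the array UUID, and whether the array is clean or degraded (here each array should simply be missing its 10th member):
bash-4.3# mdadm --detail /dev/md127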
Now I'd like to mount my arrays. I'll start with md127, as it is the largest one (the one containing my data):
bash-4.3# mkdir /volume_restore
bash-4.3# mount /dev/md127 /volume_restore/
mount: unknown filesystem type 'LVM2_member'
The 'LVM2_member' filesystem type means md127 is an LVM physical volume, so I look for information about the volume group:
bash-4.3# vgdisplay
--- Volume group ---
VG Name vg1000
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 2
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 1
Open LV 0
Max PV 0
Cur PV 1
Act PV 1
VG Size 32.71 TiB
PE Size 4.00 MiB
Total PE 8574353
Alloc PE / Size 8574353 / 32.71 TiB
Free PE / Size 0 / 0
VG UUID Mxjnuy-PmQl-3TBT-zUa2-kBj8-j3AO-PNibo3
There is a volume group. Now check the logical volumes:
bash-4.3# lvdisplay
--- Logical volume ---
LV Path /dev/vg1000/lv
LV Name lv
VG Name vg1000
LV UUID u1Ik6T-BQDC-ljKt-TocR-brIQ-5g6R-BR0JTv
LV Write Access read/write
LV Creation host, time ,
LV Status NOT available
LV Size 32.71 TiB
Current LE 8574353
Segments 1
Allocation inherit
Read ahead sectors auto
And a logical volume.
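To double-check which md device backs the volume group (it should be the big RAID 5 array, /dev/md127 here), you can list the LVM physical volumes:
bash-4.3# pvdisplay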
I can't find vg1000 under /dev (there is no /dev/vg1000/lv yet); it seems to be inactive, so I activate it:
bash-4.3# vgchange -ay
1 logical volume(s) in volume group "vg1000" now active
Now I'm able to mount it!
bash-4.3# mount /dev/vg1000/lv /volume_restore/
At this point, make a backup!
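A minimal sketch of such a backup, assuming an external disk or remote share is already mounted at /mnt/backup (that path is only an example; adapt it to wherever your backup target lives):
bash-4.3# rsync -aH --progress /volume_restore/ /mnt/backup/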
We'll now "merge" the RAID arrays.
bash-4.3# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md125 : active raid1 sda1[0] sdi1[8] sdh1[7] sdg1[6] sdf1[5] sde1[4] sdd1[3] sdc1[2] sdb1[1]
2490176 blocks [10/9] [UUUUUUUUU_]
md126 : active raid1 sda2[0] sdi2[8] sdh2[7] sdg2[6] sdf2[5] sde2[4] sdd2[3] sdc2[2] sdb2[1]
2097088 blocks [10/9] [UUUUUUUUU_]
md127 : active raid5 sda5[0] sdi5[8] sdh5[7] sdg5[6] sdf5[5] sde5[4] sdd5[3] sdc5[2] sdb5[1]
35120552832 blocks super 1.2 level 5, 64k chunk, algorithm 2 [10/9] [UUUUUUUUU_]
md1 : active raid1 sdj2[0]
2097088 blocks [10/1] [U_________]
md0 : active raid1 sdj1[0]
2490176 blocks [10/1] [U_________]
md125 seems to be the old root partition of my former RAID. I'll try to propagate md0 onto all the disks.
First, stop /dev/md125:
bash-4.3# mdadm --stop /dev/md125
Check that it is stopped:
bash-4.3# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md126 : active raid1 sda2[0] sdi2[8] sdh2[7] sdg2[6] sdf2[5] sde2[4] sdd2[3] sdc2[2] sdb2[1]
2097088 blocks [10/9] [UUUUUUUUU_]
md127 : active raid5 sda5[0] sdi5[8] sdh5[7] sdg5[6] sdf5[5] sde5[4] sdd5[3] sdc5[2] sdb5[1]
35120552832 blocks super 1.2 level 5, 64k chunk, algorithm 2 [10/9] [UUUUUUUUU_]
md1 : active raid1 sdj2[0]
2097088 blocks [10/1] [U_________]
md0 : active raid1 sdj1[0]
2490176 blocks [10/1] [U_________]
Now add the old disks' first partitions to md0:
bash-4.3# /sbin/mdadm --add /dev/md0 /dev/sda1 /dev/sdi1 /dev/sdh1 /dev/sdg1 /dev/sdf1 /dev/sdc1 /dev/sdb1
mdadm: added /dev/sda1
mdadm: added /dev/sdi1
mdadm: added /dev/sdh1
mdadm: added /dev/sdg1
mdadm: added /dev/sdf1
mdadm: added /dev/sdc1
mdadm: added /dev/sdb1
Check whether it worked:
bash-4.3# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md126 : active raid1 sda2[0] sdi2[8] sdh2[7] sdg2[6] sdf2[5] sde2[4] sdd2[3] sdc2[2] sdb2[1]
2097088 blocks [10/9] [UUUUUUUUU_]
md127 : active raid5 sda5[0] sdi5[8] sdh5[7] sdg5[6] sdf5[5] sde5[4] sdd5[3] sdc5[2] sdb5[1]
35120552832 blocks super 1.2 level 5, 64k chunk, algorithm 2 [10/9] [UUUUUUUUU_]
md1 : active raid1 sdj2[0]
2097088 blocks [10/1] [U_________]
md0 : active raid1 sdb1[10](S) sdc1[11](S) sdf1[12](S) sdg1[13](S) sdh1[14](S) sdi1[15](S) sda1[16] sdj1[0]
2490176 blocks [10/1] [U_________]
[>....................] recovery = 2.4% (60032/2490176) finish=3.3min speed=12006K/sec
You can do the same for md1 (see the sketch below).
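Based on the md0 example above, the analogous steps for md1 would look roughly like this (a sketch only; md126 holds the old second partitions here, but check your own device names in /proc/mdstat before running anything):
bash-4.3# mdadm --stop /dev/md126
bash-4.3# /sbin/mdadm --add /dev/md1 /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2 /dev/sde2 /dev/sdf2 /dev/sdg2 /dev/sdh2 /dev/sdi2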
For your data RAID, you'll need to create the data partition (sdj5 in my case) on the "new disk".
I did that quite easily, using parted on /dev/sda first to see the properties of the existing partition:
bash-4.3# parted /dev/sda
GNU Parted 3.2
Using /dev/sda
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) print
Model: WDC WD4000F9YZ-09N20 (scsi)
Disk /dev/sda: 4001GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 131kB 2550MB 2550MB ext4 raid
2 2550MB 4698MB 2147MB linux-swap(v1) raid
5 4840MB 4001GB 3996GB raid
Then I created the same partition with parted on /dev/sdj:
bash-4.3# parted /dev/sdj
(parted) mkpart primary 4840MB 4001GB
(parted) set 1 raid on
(parted) print
Model: WDC WD4000F9YZ-09N20 (scsi)
Disk /dev/sdj: 4001GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 1049kB 2551MB 2550MB ext4 raid
2 2551MB 4699MB 2147MB linux-swap(v1) raid
3 4840MB 4001GB 3996GB raid
(parted) quit
This created the partition as number 3 instead of 5, but that doesn't matter. (If the raid flag did not end up on the new partition, you can set it with "set 3 raid on", using the new partition's number.)
I only need to add this partition to my data RAID:
bash-4.3# /sbin/mdadm --add /dev/md127 /dev/sdj3
You can then check that your RAID is rebuilding with:
bash-4.3# cat /proc/mdstat
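To follow the rebuild you can simply re-run that command from time to time, or let it refresh automatically (watch may not be available on every DSM build; if not, just repeat the cat):
bash-4.3# watch -n 5 cat /proc/mdstat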
The original question, from Hadrien Huvelle:
My Synology (10 disks) suddenly won't boot.
Connecting over the serial port, I managed to boot into "Synology 1" and "Synology 2".
Synology 1 is a kind of recovery partition that lets you reinstall DSM with Synology Assistant.
Synology 2 is the default boot option and boots into your DSM; in my case, it fails to boot.
I have a RAID 5 across the 10 disks.
How can I recover it?