Very slow volume expansion on a Synology DS2413+
Solution 1
Yes, you can tweak the settings during the rebuild process. It won't hurt anything.
- Enable SSH in Control Panel -> Terminal -> Check SSH
- Log on using PuTTY or a similar SSH program
- Use the username
root
with the same password as the admin account.

Then type this command:
echo 50000 >/proc/sys/dev/raid/speed_limit_min
and your rebuild speed should greatly increase.
If you want even more speed, also try this:
echo 16384 >/sys/block/md2/md/stripe_cache_size
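Putting it together, the whole sequence looks like this when pasted into the SSH session as root (the stripe-cache comment is an approximation: the cache costs roughly stripe_cache_size × 4 KB of RAM per member disk):

# Check the current limits first (values are in KB/s)
cat /proc/sys/dev/raid/speed_limit_min
cat /proc/sys/dev/raid/speed_limit_max

# Raise the minimum so the rebuild is not throttled in favour of other I/O
echo 50000 > /proc/sys/dev/raid/speed_limit_min

# Optional: enlarge the stripe cache for md2 (costs extra RAM, see above)
echo 16384 > /sys/block/md2/md/stripe_cache_size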
Solution 2
I had the same problem, but after running the commands below it is now rebuilding faster.
My DS2413+ is built with the following:
- Synology RAID system with 8 hard disks of 4 TB
- 2 hard disks of 4 TB
- 2 SSDs of 250 GB
It went from 0.0% to 0.78% in 3 hours and 41 minutes, but after I ran these two command lines it got faster. Just remember that you need to tune the speed to your rebuild and to the systems you are using.
When I log in to the DS2413+, I just run these two commands:
echo 500000 > /proc/sys/dev/raid/speed_limit_max
echo 250000 > /proc/sys/dev/raid/speed_limit_min
In the last 12 minutes it has jumped from 0.78% to 1.02%, so it is rebuilding faster.
I hope everyone can make use of this.
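Note that these echo writes are not persistent: they reset on reboot. If you want to restore the DSM defaults reported in the question below (200000 max, 10000 min) without rebooting, a minimal sketch:

# Put the limits back to the defaults once the rebuild has finished
echo 200000 > /proc/sys/dev/raid/speed_limit_max
echo 10000 > /proc/sys/dev/raid/speed_limit_min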
Thomas
Updated on September 18, 2022

Comments
- Thomas, almost 2 years ago
I've posted this elsewhere on the internet as well, and will obviously reply in both places if a proper solution is found.
I am attempting to expand an SHR volume (2x4TB, 1x2TB) with an additional 4TB drive on a DS2413+. The 4TB drives are WD Red, the 2TB is a WD Green.
Expansion has completed 12% in around 4 hours and 30 minutes. Extrapolating this tells me it will take another 36 hours. After that, I need to migrate 3 other old 2TB drives to this new NAS (first move the data, then expand the volume again with these 3 additional drives).
It seems that the NAS will not be deployable for close to a week.
Is there any way to speed up the process?
Some facts/info:
- Device is a Synology DS2413+
- DSM 5.0, update 1
- The device can be left alone while expanding, if needed
- CPU is hovering at around 14%
- RAM is at 5% (of 2GB)
- /proc/sys/dev/raid/speed_limit_max states 200000
- /proc/sys/dev/raid/speed_limit_min states 10000
- No one has been using the device while expanding so far
And here is the output of
cat /proc/mdstat
(note the very slow speed):

Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md3 : active raid5 sde6[2] sda6[1] sdc6[0]
      1953494784 blocks super 1.2 level 5, 64k chunk, algorithm 2 [3/3] [UUU]
      resync=DELAYED
md2 : active raid5 sde5[3] sda5[0] sdc5[2] sdb5[1]
      3897559296 blocks super 1.2 level 5, 64k chunk, algorithm 2 [4/4] [UUUU]
      [=====>...............]  reshape = 26.9% (525738624/1948779648) finish=1007.8min speed=23531K/sec
md1 : active raid1 sde2[3] sda2[0] sdb2[1] sdc2[2]
      2097088 blocks [12/4] [UUUU________]
md0 : active raid1 sde1[3] sda1[0] sdb1[1] sdc1[2]
      2490176 blocks [12/4] [UUUU________]
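To keep an eye on the progress without retyping the command, something like this works (assuming watch and grep are available in the DSM shell):

# Refresh the md2 progress and speed lines every 60 seconds
watch -n 60 "grep -A 3 '^md2' /proc/mdstat"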
The results from my immediate googling are several years old, and for an older version of DSM. The only thing I saw was raising
speed_limit_max
and
speed_limit_min
, but there was no mention of whether it could be done during the expansion process.
What can I do to increase the speed of this expansion?
Would there be anything wrong with simply raising the speed_limit_max and speed_limit_min values?
Are other details needed to assist?
Any help would be greatly appreciated.
EDIT: Adding this in case anyone finds this via Google and is wondering how long expansion actually takes.
I didn't end up tweaking anything for the expansion process.
Initial setup was 2x4TB, 1x2TB in SHR-1.
The first expansion was with an additional 1x4TB drive. There was ~3TB of data on the volume. This took ~35h:30m.
The second expansion was with 3 additional 2TB drives. There was ~6.5TB of data on the volume. This took ~18h:15m.
All 4TB drives were WD Red, and 2TB drives WD Green.
And just for the record: you can't move from SHR-1 to SHR-2 after the volume has been created. You need to choose that during setup (which, in retrospect, I wish I had done, due to the potentially long rebuild time in case of a drive failure and the subsequent risk of an additional failure during the rebuild process).