How Do I Find The Hardware Block Read Size for My Hard Drive?
Solution 1
The lsblk command is great for this:
lsblk -o NAME,PHY-SEC
The results:
NAME PHY-SEC
sda 512
├─sda1 512
├─sda2 512
└─sda5 512
Solution 2
Linux exposes the physical sector size in the files /sys/block/sdX/queue/physical_block_size. That said, to get the best performance you should probably do a little testing with different sizes and measure. I could not find a clear answer confirming that using exactly the physical block size gives the optimal result (although I assume it cannot be a bad choice).
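A quick way to dump these sysfs values for every device at once is a small shell loop. This is a sketch, not part of the original answer; print_block_sizes is a hypothetical helper name, and on older kernels the file may be called hw_sector_size instead, in which case the fields below read "n/a":

```shell
# print_block_sizes [SYSFS_ROOT]
# List the physical and logical sector size of every block device
# found under SYSFS_ROOT (default /sys/block). Older kernels may only
# expose hw_sector_size; missing files are reported as "n/a".
print_block_sizes() {
    root=${1:-/sys/block}
    for q in "$root"/*/queue; do
        [ -d "$q" ] || continue
        name=$(basename "$(dirname "$q")")
        phys=$(cat "$q/physical_block_size" 2>/dev/null || echo n/a)
        log=$(cat "$q/logical_block_size" 2>/dev/null || echo n/a)
        printf '%s: physical=%s logical=%s\n' "$name" "$phys" "$log"
    done
}

print_block_sizes    # e.g. "sda: physical=4096 logical=512"
```

Reading both files side by side also makes it easy to spot 512e drives, where the logical size is 512 but the physical size is 4096.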
Solution 3
$ sudo hdparm -I /dev/sda | grep -i physical
Physical Sector size: 4096 bytes
Solution 4
Mine isn't intended to be a complete answer, but I hope it also helps.
Here is a little something from http://mark.koli.ch/2009/05/howto-whole-disk-backups-with-dd-gzip-and-p7zip.html
3 - Determine the Appropriate Block Size
For a quicker backup, it can help to nail down the optimal block size of the disk device you are going to back up. Assuming you are going to back up /dev/sda, here's how you can use the fdisk command to determine the best block size:
rescuecd#/> /sbin/fdisk -l /dev/sda | grep Units
Units = cylinders of 16065 * 512 = 8225280 bytes
Note the fdisk output says "cylinders of 16065 * 512". This means that there are 512 bytes per block on the disk. You can significantly improve the speed of the backup by increasing the block size by a multiple of 2 to 4. In this case, an optimal block size might be 1k (512*2) or 2k (512*4). By the way, getting greedy and using a block size of 5k (512*10) or something excessive won't help; eventually the system will bottleneck at the device itself and you won't be able to squeeze any additional performance out of the backup process. (emphasis added)
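The backup step from the linked article can then be sketched as a dd pipe into gzip. This is an illustration, not the article's exact command; backup_disk is a hypothetical helper, and the device and destination paths are placeholders:

```shell
# backup_disk SRC DEST [BS]
# Compress SRC into DEST with dd at the chosen block size, in the
# spirit of the linked article. SRC is normally a whole device such
# as /dev/sda (run as root); any readable file works for a dry run.
backup_disk() {
    src=$1; dest=$2; bs=${3:-1024}
    dd if="$src" bs="$bs" 2>/dev/null | gzip -c > "$dest"
}

# Example (placeholder paths):
# backup_disk /dev/sda /mnt/backup/sda.img.gz 1024
```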
I suspect the difference in performance between a near-optimal and optimal block size for a given configuration is negligible unless the data set is enormous. Indeed, a user at FixUnix (post from 2007) claimed his optimal times were only 5% faster than the sub-optimal ones. Maybe you can squeeze a little more efficiency out by using a multiple of the "cluster" size or filesystem block size.
Of course, if you stray too far to either side of the optimal block size you'll run into trouble.
The bottom line is you will likely gain only around 5% in performance (i.e. 3 minutes per hour) with the absolute optimal block size, so consider whether it is worth your time and effort to research further. As long as you stay away from extreme values, you should not suffer.
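If you do want to check whether block size matters on your setup, you can simply time a few candidates. A minimal sketch, with bench_bs as a hypothetical helper: pass a device such as /dev/sda as the file argument (as root), and on a real disk drop the page cache between runs (sync; echo 3 > /proc/sys/vm/drop_caches) to get comparable numbers:

```shell
# bench_bs FILE TOTAL_BYTES
# Read the first TOTAL_BYTES of FILE once per candidate block size
# and print dd's summary line, as a rough block-size comparison.
bench_bs() {
    file=$1; total=$2
    for bs in 512 4096 65536 1048576; do
        printf 'bs=%-7s ' "$bs"
        dd if="$file" of=/dev/null bs="$bs" count=$((total / bs)) 2>&1 | tail -n 1
    done
}
```

Expect the differences between reasonable sizes (4k to 1M) to be small, in line with the roughly 5% figure above.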
Solution 5
Each disk transfer generates an interrupt that the processor must handle. A typical 50 MB/s disk will generate about 100,000 of them per second at a 512-byte block size, while a normal processor can handle only tens of thousands of those. A bigger power-of-two block size, from 4k (the default filesystem block size on most systems) up to 64k (the ISA DMA size), would therefore be more practical.
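The arithmetic behind that claim can be checked with a one-liner; this is just a back-of-the-envelope illustration assuming one interrupt per transfer:

```shell
# Requests (and hence interrupts, one per transfer, worst case)
# needed per second to sustain 50 MB/s at various block sizes.
RATE=$((50 * 1000 * 1000))    # 50 MB/s in bytes/s
for bs in 512 4096 65536; do
    echo "bs=$bs -> $((RATE / bs)) interrupts/s"
done
# bs=512 -> 97656 interrupts/s
# bs=4096 -> 12207 interrupts/s
# bs=65536 -> 762 interrupts/s
```

So moving from 512-byte to 4k transfers already cuts the interrupt rate by a factor of eight.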
Updated on September 17, 2022

Comments
-
AaronS over 1 year
I'm trying to do a large copy from my hard drive using dd, and I'm trying to figure out the best block size to use, which I would assume is the hardware block size for that drive.
-
quack quixote about 14 years
I've got a Debian Lenny system (2.6.26 kernel) that only exposes hw_sector_size in that location, and a newer Ubuntu Karmic system (2.6.31 kernel) that provides both, so this is somewhat dependent on the kernel in use.
-
quack quixote about 14 years
Any reason for using echo "p" | /sbin/fdisk /dev/sda... instead of /sbin/fdisk -l /dev/sda...? The second is cleaner and won't attempt to make any changes.
-
Theb about 14 years
You would be best asking Mark Kolich (linked). He was creating a backup, and I only quoted a section of his article.
-
tvdo over 11 years
@MarkC The linked article uses /sbin/fdisk -l /dev/sda | grep Units. It might have been changed in the last two years. In any case, I've updated your answer.
-
CMCDragonkai almost 10 years
Does it differentiate between logical size and physical size?
-
sjas about 9 years
This should be the accepted answer, since it is the only one providing the real physical value.
-
sjas over 8 years
Will not provide the actual physical size.
-
sjas over 8 years
Could you clarify?
-
sjas over 8 years
hdparm -I /dev/sda | grep Sector is nicer, as it will show both physical and logical sizes at once, for easy comparison.
-
soger almost 8 years
IMO this is the most useful answer, mainly because of the last bolded paragraph. Linux works very hard to optimize disk access, so as long as you use the appropriate I/O scheduler and dirty-buffer settings for your disk, a block size of 8192 bytes should be okay for any situation.
-
soger almost 8 years
Works for me: PHY-SEC shows the correct physical size and LOG-SEC shows the logical size.
-
Theb over 7 years
@sjas What he or she is saying is, apparently, that each sector is transferred separately, with an associated "interrupt" that the processor must handle. A larger block size means fewer interrupts (and therefore fewer CPU cycles used) for the same amount of data.
-
Hashim Aziz over 6 years
@sjas Could you expand? How do you know this?
-
sjas over 6 years
@Hashim I tested it on some old hard disks I had, where some had 512b and some 4k sector sizes. superuser.com/a/426015/145072 is the solution that actually worked. Everything besides hdparm will likely lie to you.
-
把友情留在无盐 over 4 years
@sjas Speaking about the "only one": this is not so different from /sys/block/*/queue/*_block_size, since the kernel driver also queries the information from the device the same way hdparm does.
-
myrdd over 3 years
If you want some more info on the devices, or if you want a shorter command, use lsblk -t; it is equivalent to -o NAME,ALIGNMENT,MIN-IO,OPT-IO,PHY-SEC,LOG-SEC,ROTA,SCHED,RQ-SIZE,RA,WSAME.