Allocation Unit Size for New Drive

You can never go wrong sticking with the default. 4 KiB is the default block size used by most filesystems. You can shave a bit of overhead by using a larger block size if you mostly store large files, but generally you should stick with the default 4 KiB. The worst case is that you waste a relatively small amount of disk space.
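
As a rough illustration of the trade-off (this sketch and its file sizes are my own example, not part of the original answer), here is a small Python snippet that estimates the slack space left in each file's partially filled last cluster at different allocation unit sizes:

```python
def slack_bytes(file_sizes, cluster_size):
    """Total bytes wasted in the partially filled last cluster of each file."""
    total = 0
    for size in file_sizes:
        clusters = -(-size // cluster_size)      # round up to whole clusters
        total += clusters * cluster_size - size  # slack in the last cluster
    return total

# Hypothetical mix of large files (roughly 3-12 GB), as described in the question.
files = [3 * 10**9, 5 * 10**9, 8 * 10**9, 12 * 10**9]

for cluster in (4 * 1024, 64 * 1024):  # 4 KiB default vs. 64 KiB maximum
    wasted = slack_bytes(files, cluster)
    print(f"{cluster // 1024:>2} KiB clusters: ~{wasted / 1024:.1f} KiB wasted across {len(files)} files")
```

With only a handful of multi-gigabyte files, even 64 KiB clusters waste at most 64 KiB per file, which is negligible on a 3 TB drive; the waste only becomes noticeable with very many small files.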

Comments

  • oldboy
    oldboy over 1 year

    I'm going to be loading my new 3TB drive with large files on Windows, and then hooking it up to my Linux server. This is the first time I've done this.

    I've been told that both Windows and Linux support the NTFS file system.

    After initializing the disk (as GPT, given its size), I'm now formatting the partition on Windows, but I'm unsure what Allocation Unit Size to choose if I plan to use the drive on both Windows and Linux.

    Should I set this to something other than default?

    The options are as follows:

    • Default
    • 512
    • 1024
    • 2048
    • 4096
    • 8192
    • 16K
    • 32K
    • 64K

    What is my best bet?

  • oldboy
    oldboy over 6 years
    I am only going to be storing larger files, and, to my knowledge, selecting the largest AUS will also improve performance, since there are fewer blocks to search, and I believe there will be less fragmentation. What are the benefits of selecting a smaller AUS if I'm only going to be storing large (1 GB+) files?
  • psusi
    psusi over 6 years
    @Anthony, it is true that Windows is terrible about fragmentation. A larger cluster size isn't likely to help with that. Better performance due to "fewer blocks to search" isn't a thing either. The only thing that really changes is that the block allocation bitmap needs fewer bits to indicate which blocks are free or in use, so you save a tiny amount of space. On the other hand, the larger the cluster size, the more space you typically waste in the last cluster of every file.
  • oldboy
    oldboy over 6 years
    In the last cluster of every file? What do you mean by that?
  • psusi
    psusi over 6 years
    @Anthony, if files have a random size in bytes, then on average, they are going to fill x full blocks, and the last block will be half full, on average. Thus a large block size tends to waste more space when storing many smaller files. This was one of the shortcomings of FAT16, which could use 64k clusters to handle a 2 GiB disk, but when storing thousands of files, wasted an average of 32 KiB per file, which adds up to quite a bit.
  • oldboy
    oldboy over 6 years
    Makes sense. But, again, I'm only storing files that are typically anywhere from 3-12 GB in size, so I'm sure the maximum AUS (64K) isn't going to hurt me. Nonetheless, I'm struggling to understand why the "empty/remaining" space in each block isn't utilized. Hypothetically, for instance, if I have an AUS of 1K and I go to store a 2K file on the drive, that file will obviously and necessarily be broken up into two separate blocks. If the filesystem is capable of "breaking up/spreading out" files over different blocks, why doesn't it utilize all of the space in the scenario that you outlined?
  • psusi
    psusi over 6 years
    @Anthony, because a block can only be assigned to one file, so if the file does not need the whole block, whatever is left is wasted (see the sketch below).
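
To make the arithmetic in the comments concrete, here is a minimal sketch (the file count and size range are made-up assumptions, not taken from the thread) showing that with random file sizes the slack in each file's last cluster averages about half a cluster, which is where the "32 KiB per file" figure for 64 KiB clusters comes from:

```python
import math
import random

random.seed(0)
CLUSTER = 64 * 1024  # 64 KiB clusters, as in the FAT16 example

# 10,000 files with random sizes; each one occupies a whole number of clusters,
# so whatever is left of its last cluster is wasted ("slack space").
sizes = [random.randrange(1, 2 * 1024 * 1024) for _ in range(10_000)]
slack = [math.ceil(s / CLUSTER) * CLUSTER - s for s in sizes]

print(f"average slack per file: ~{sum(slack) / len(slack) / 1024:.0f} KiB")  # roughly 32 KiB
print(f"total slack:            ~{sum(slack) / 1024**2:.0f} MiB")            # roughly 312 MiB
```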