Update a development team with rewritten Git repo history, removing big files


Solution 1

Yes, your solution will work. You also have another option: instead of doing this on the central repo, run the filter on your clone and then push it back with git push --force --all. This will force the server to accept the new branches from your repository. This replaces step 2 only; the other steps will be the same.
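
A rough sketch of that route, reusing the filter from your question (this assumes your remote is the default origin):

git filter-branch --index-filter \
    'git rm --cached --ignore-unmatch big_1.zip big_2.zip etc.zip' \
    --tag-name-filter cat -- --all
git push --force --all
git push --force --tags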

If your developers are pretty Git-savvy, then they might not have to delete their old copies; for example, they could fetch the new remotes and rebase their topic branches as appropriate.
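
For example, to rebuild a topic branch on the rewritten history (my-topic and old-base are placeholders for the branch name and the pre-rewrite commit it forked from):

git fetch origin
# Replay only my-topic's own commits onto the rewritten upstream branch
git rebase --onto origin/master old-base my-topic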

Solution 2

Your plan is good (though it would be better to perform the filtering on a bare clone of your repository, rather than on the central server), but rather than git-filter-branch you should use my BFG Repo-Cleaner: a faster, simpler alternative designed specifically for removing large files from Git repos.
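
For example, you could take a fresh bare mirror of the repository to run the cleaner against (the URL here is a placeholder):

$ git clone --mirror git://example.com/my-repo.git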

Download the Java jar (requires Java 6 or above) and run this command:

$ java -jar bfg.jar  --strip-blobs-bigger-than 1MB  my-repo.git

Any blob over 1MB in size (that isn't in your latest commit) will be totally removed from your repository's history. You can then use git gc to clean away the dead data:

$ git gc --prune=now --aggressive
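
If the repository doesn't shrink as much as expected, expire the reflog first so the old objects are genuinely unreachable (this is the cleanup the BFG documentation recommends):

$ git reflog expire --expire=now --all
$ git gc --prune=now --aggressive

From a --mirror clone, a plain git push will then update all refs on your server.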

The BFG is typically 10-50x faster than running git-filter-branch and the options are tailored around these two common use-cases:

  • Removing Crazy Big Files
  • Removing Passwords, Credentials & other Private data (see the sketch just below)
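
For the second use-case, the BFG offers a --replace-text option. A minimal sketch, where passwords.txt is a hypothetical file listing the secrets to redact, one per line:

$ java -jar bfg.jar --replace-text passwords.txt my-repo.git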

Solution 3

If you don't make your developers re-clone, it's likely that they will drag the large files back in. For example, if they carefully splice their work onto the new history you create and then happen to git merge from a local project branch that was not rebased, the parents of the merge commit will include the old project branch, which ultimately points at the entire history you erased with git filter-branch.
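
A hypothetical sequence showing how that happens (branch names are made up):

git fetch origin
git rebase origin/master    # carefully splice local work onto the rewritten history
git merge old-feature       # old-feature was never rebased, so it still points at the pre-rewrite commits
git push origin master      # this push re-uploads the history you just erased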

Solution 4

Your solution is not complete. You should include --tag-name-filter cat as an argument to git filter-branch so that tags containing the large files are rewritten as well. You should also rewrite all refs instead of just HEAD, since the offending commits could be reachable from multiple branches.

Here is some better code:

git filter-branch --index-filter 'git rm --cached --ignore-unmatch big_1.zip big_2.zip etc.zip' --tag-name-filter cat -- --all
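
filter-branch also leaves backup refs under refs/original that keep the old objects alive; the GitHub guide linked below removes them and then force-pushes everything, roughly like this:

git for-each-ref --format='%(refname)' refs/original | xargs -n 1 git update-ref -d
git push origin --force --all
git push origin --force --tags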

GitHub has a good guide: https://help.github.com/articles/remove-sensitive-data


Comments

  • rlkw1024 almost 2 years ago

    I have a git repo with some very large binaries in it. I no longer need them, and I don't care about being able to check out the files from earlier commits. So, to reduce the repo size, I want to delete the binaries from the history altogether.

    After a web search, I concluded that my best (only?) option is to use git-filter-branch:

    git filter-branch --index-filter 'git rm --cached --ignore-unmatch big_1.zip big_2.zip etc.zip' HEAD
    

    Does this seem like a good approach so far?

    Assuming the answer is yes, I have another problem to contend with. The git manual has this warning:

    WARNING! The rewritten history will have different object names for all the objects and will not converge with the original branch. You will not be able to easily push and distribute the rewritten branch on top of the original branch. Please do not use this command if you do not know the full implications, and avoid using it anyway, if a simple single commit would suffice to fix your problem. (See the "RECOVERING FROM UPSTREAM REBASE" section in git-rebase(1) for further information about rewriting published history.)

    We have a remote repo on our server. Each developer pushes to and pulls from it. Based on the warning above (and my understanding of how git-filter-branch works), I don't think I'll be able to run git-filter-branch on my local copy and then push the changes.

    So, I'm tentatively planning to go through the following steps:

    1. Tell all my developers to commit, push, and stop working for a bit.
    2. Log into the server and run the filter on the central repo (roughly as sketched after this list).
    3. Have everyone delete their old copies and clone again from the server.
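
    For concreteness, here is roughly what I imagine step 2 looking like on the server (the path is made up):

    cd /srv/git/our-project.git
    git filter-branch --index-filter 'git rm --cached --ignore-unmatch big_1.zip big_2.zip etc.zip' HEAD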

    Does this sound right? Is this the best solution?