Git cloning: remote end hung up unexpectedly, tried changing postBuffer but still failing


Solution 1

One option is to reduce the size of the repo by cloning a single branch or by cloning only a certain amount of the past history.

git clone --depth=20 https://repo.git -b master

will clone only the master branch, to a depth of 20 commits. Since this is a much smaller transfer it will generally succeed, and you can fetch other branches afterwards. You can even recover the full history later with git fetch --unshallow (see the last comment below), and in lots and lots of cases the truncated history is unimportant anyway.
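If the shallow, single-branch clone succeeds, you can widen it afterwards. A rough sketch (the repository URL and the develop branch name are placeholders):

git clone --depth=20 https://repo.git -b master
cd repo

# --depth implies --single-branch, which restricts the fetch refspec,
# so register any other branch before fetching it:
git remote set-branches --add origin develop
git fetch origin develop

# later, if you do need the full history after all:
git fetch --unshallow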

Solution 2

If this is an http transaction, you would need to contact BitBucket support for them to diagnose what went wrong on the server side.
As mentioned in, for example, "howto/use-git-daemon":

fatal: The remote end hung up unexpectedly

It only means that something went wrong.
To find out what went wrong, you have to ask the server.

Note that once BitBucket uses Git 2.5+ (Q2 2015), the client might end up with a more explicit error message instead:

 request was larger than our maximum size xxx
 try setting GIT_HTTP_MAX_REQUEST_BUFFER

(that is, setting GIT_HTTP_MAX_REQUEST_BUFFER on the Git repository hosting server)

See commit 6bc0cb5 by Jeff King (peff), 20 May 2015.
(Merged by Junio C Hamano -- gitster -- in commit 777e75b, 01 Jun 2015)
Test-adapted-from: Dennis Kaarsemaker (seveas)

The new environment variable is GIT_HTTP_MAX_REQUEST_BUFFER:

The GIT_HTTP_MAX_REQUEST_BUFFER environment variable (or the http.maxRequestBuffer config variable) may be set to change the largest ref negotiation request that git will handle during a fetch; any fetch requiring a larger buffer will not succeed.

This value should not normally need to be changed, but may be helpful if you are fetching from a repository with an extremely large number of refs.

The value can be specified with a unit (e.g., 100M for 100 megabytes). The default is 10 megabytes.
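For illustration, on the hosting server that could look like this (the 100M figure is just the example unit from the quote above, not a recommendation):

# environment variable, for the process that serves http-backend:
export GIT_HTTP_MAX_REQUEST_BUFFER=100M

# or the equivalent config variable, set on the served repository:
git config http.maxRequestBuffer 100M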

The explanation is very interesting:

http-backend: spool ref negotiation requests to buffer

When http-backend spawns "upload-pack" to do ref negotiation, it streams the http request body to upload-pack, who then streams the http response back to the client as it reads.
In theory, git can go full-duplex; the client can consume our response while it is still sending the request.
In practice, however, HTTP is a half-duplex protocol.
Even if our client is ready to read and write simultaneously, we may have other HTTP infrastructure in the way, including the webserver that spawns our CGI, or any intermediate proxies.

In at least one documented case, this leads to deadlock when trying a fetch over http.
What happens is basically:

  1. Apache proxies the request to the CGI, http-backend.
  2. http-backend gzip-inflates the data and sends the result to upload-pack.
  3. upload-pack acts on the data and generates output over the pipe back to Apache. Apache isn't reading because it's busy writing (step 1).

This works fine most of the time, because the upload-pack output ends up in a system pipe buffer, and Apache reads it as soon as it finishes writing. But if both the request and the response exceed the system pipe buffer size, then we deadlock (Apache blocks writing to http-backend, http-backend blocks writing to upload-pack, and upload-pack blocks writing to Apache).

We need to break the deadlock by spooling either the input or the output. In this case, it's ideal to spool the input, because Apache does not start reading either stdout or stderr until we have consumed all of the input. So until we do so, we cannot even get an error message out to the client.

The solution is fairly straightforward: we read the request body into an in-memory buffer in http-backend, freeing up Apache, and then feed the data ourselves to upload-pack.
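To make the pipe-buffer limit concrete, here is a toy demonstration of my own (not part of the commit message), assuming the Linux default pipe capacity of about 64 KiB:

mkfifo /tmp/demo-pipe

# open the read end, but never actually read from it
sleep 30 < /tmp/demo-pipe &

# the writer fills the ~64 KiB pipe buffer and then blocks -- the same
# position upload-pack is in while Apache is still busy writing; when the
# reader exits after 30 seconds, the writer dies of SIGPIPE, the shell
# equivalent of "the remote end hung up unexpectedly"
head -c 1M /dev/zero > /tmp/demo-pipe

rm /tmp/demo-pipe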


Comments

  • Erica Stockwell-Alpert almost 2 years

    I'm trying to clone a repository. The first time I got to 82%, then it didn't budge for half an hour so I cancelled the clone and started over. After that, every time I try to clone it, I get between 6-10%, and then it fails with the error "The remote end hung up unexpectedly, early EOF." I looked up the error and tried every solution I could find, with the most popular solution being to increase postBuffer to the maximum size. However, it still keeps failing every time.

    I'm not sure if it makes a difference, but I'm not trying to check in code, which was what most of the other people reporting this issue seemed to be trying to do. I'm trying to clone a repository.
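    (For reference, the postBuffer tweak mentioned above is usually this client-side setting; the 500 MB value below is the commonly cited maximum, not a figure from this question:)

    git config --global http.postBuffer 524288000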

    • Etan Reisner about 9 years
      What transport for the clone are you using?
    • Erica Stockwell-Alpert about 9 years
      git bash, is that what you mean?
    • Etan Reisner about 9 years
      No. I meant http/ssh/etc.? What does the clone URI look like? postBuffer is an http setting that relates to sending data to the server I believe.
  • RobisonSantos over 6 years
    What if the same error "fatal: The remote end hung up unexpectedly error: pack-objects died of signal 13" happens when accessing git via ssh?
  • VonC over 6 years
    @RobisonSantos Maybe try and check if you are not pushing objects that are too big for the remote receiver to handle: stackoverflow.com/a/18560284/6309
  • chrset over 4 years
    If, at a later time, you want to extend your copy to contain the full history, you just need to call the fetch command with the --unshallow parameter: git fetch --unshallow