GitLab issuing temporary IP bans - 403 forbidden


Solution 1

We are running Gitlab EE and for us this issue was caused by a combination of using git lfs inside a build on a Gitlab CI runner, and having installed the rack-attack gem on the Gitlab server.

Background

In order to work around an issue with git-lfs 1.2.1 (where it insisted on requiring username and password despite cloning a public repository), the build contained this line:

git clone https://fakeuser:fakepassword@gitlab.example.com/group/repo.git

On build, this resulted in every LFS request from the runner triggering a login attempt with fakeuser, which obviously failed every time. However, since no login was actually required by the server, the client could continue downloading the files using LFS, and the build passed.

Issue

The IP banning started when the package rack-attack was installed. By default, after 10 failed login attempts, rack-attack bans the origin IP for one hour. This resulted in all runners being completely blocked from Gitlab (even visiting the web page from the runner would return 403 Forbidden).
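If you suspect a runner has been banned, a quick way to confirm it from that machine is to request any page and check the status code (gitlab.example.com below is a placeholder for your own instance):

curl -s -o /dev/null -w "%{http_code}\n" https://gitlab.example.com/
# Prints 403 while the runner's IP is banned; 200 or a redirect once the ban has expired.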

Workaround (insecure)

A short-term workaround, if the servers (Gitlab runners in our case) are trusted, is to add the servers' IPs to a whitelist in the rack-attack configuration. Adjusting the ban time, or allowing more failed attempts, is also possible.

Example configuration in /etc/gitlab/gitlab.rb:

gitlab_rails['rack_attack_git_basic_auth'] = {
  'enabled' => true,
  'ip_whitelist' => ["192.168.123.123", "192.168.123.124"],
  'maxretry' => 10,
  'findtime' => 60,
  'bantime' => 600
}

In this example, we are whitelisting the servers 192.168.123.123 and 192.168.123.124, and reducing the ban time from one hour to 10 minutes (600 seconds). maxretry = 10 allows a user to get the password wrong 10 times before being banned, and findtime = 60 means that the failed-attempt counter resets after 60 seconds.

Then reconfigure GitLab for the changes to take effect: sudo gitlab-ctl reconfigure

For more details, and for the YAML version of this config example, see gitlab.yml.example.

NOTE: whitelisting servers is insecure, as it fully disables blocking/throttling on the whitelisted IPs.

Proper solution

The solution to this problem should be to stop the failing login attempts, or possibly just reduce the ban time, as whitelisting leaves Gitlab vulnerable to password brute-force attacks from all whitelisted servers.
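For builds running on GitLab CI, one way to stop the failed logins is to clone with the per-job token instead of fake credentials. This is a sketch, assuming a GitLab version that exposes CI_JOB_TOKEN to jobs, with gitlab.example.com standing in for your instance:

# Authenticate as the gitlab-ci-token user with the job token, so git (and git-lfs)
# requests carry valid credentials instead of fakeuser's always-failing ones.
git clone https://gitlab-ci-token:${CI_JOB_TOKEN}@gitlab.example.com/group/repo.git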

Solution 2

Follow these steps to remove the ban for your IP:

  1. Find the IPs that have been blocked in the production log:

    grep "Rack_Attack" /var/log/gitlab/gitlab-rails/production.log

  2. Since the blacklist is stored in Redis, you need to open up redis-cli:

    /opt/gitlab/embedded/bin/redis-cli -s /var/opt/gitlab/redis/redis.socket

  3. You can remove the block using the following syntax, replacing <ip> with the IP address that is blacklisted (a combined one-line example is shown after the source link below):

    del cache:gitlab:rack::attack:allow2ban:ban:<ip>

Source on GitLab Docs: Remove blocked IPs from Rack Attack via Redis
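For example, to clear the ban for a single runner IP in one command (a sketch assuming the Omnibus paths from the steps above and the placeholder IP 192.168.123.123):

# Delete the ban key directly; rack-attack will allow the IP again immediately.
/opt/gitlab/embedded/bin/redis-cli -s /var/opt/gitlab/redis/redis.socket del cache:gitlab:rack::attack:allow2ban:ban:192.168.123.123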



Author: Blazing

Updated on October 12, 2020

Comments

  • Blazing, over 3 years ago

    My GitLab instance will occasionally put an IP ban in place on our own IP address, resulting in all of our users in the office getting 403 Forbidden on any web page or git request.

    The ban is being put in place as a result of repeated errors authenticating, which is a separate problem altogether, but I would like to prevent our own IP address from being IP banned. It lasts for about one hour.

    In the nginx logs, nothing unusual pops up in the gitlab_access.log or gitlab_error.log files. The server is still running, and external IP addresses are unaffected while the ban is in place.

    I would like to be able to whitelist our own IP address, or to be able to lift the ban once it occurs (restarting GitLab doesn't remove the ban). If neither of these is possible, then just finding the setting to reduce the ban duration from one hour would be OK too.

    • Wolfium, almost 7 years ago
      We are experiencing the same issue, but we are not seeing failed logins, just somewhat more image-building jobs than usual. It happens when peak job load hits; the jobs sometimes fail with "resource forbidden" rather than a failed login. Not sure if this helps, but it may point to another source of the issue.
  • Tom Lord, over 7 years ago
    RE: Your solution: an alternative configuration that I am experimenting with is an exponential ban time on failed logins, i.e. something along the lines of: "After 20 failed logins within 10 seconds, ban for (short time). After 40 failed logins within 5 minutes, ban for (a bit longer). ... After 200 failed logins within 1 day, ban for (much longer)."