GitLab and Caddy

Why On Gehenna Are You Trying To Run GitLab?

Yes, well. Good question! I want to take advantage of CI/CD to set up automated builds of some website-based documentation projects that I’m running at home. Specifically, I’m currently using dendron to manage documentation for the home tech stack, and I want to build a home manual after reading this (and the subsequent discussion on HN) - probably using the suggested MkDocs tooling.

The requirements we have for this are:

  • Both Grumpy Metal Girl and myself will be editing these document sites, so we need to make sure that we don’t tread over each other, and have the ability to roll back changes.
  • When one of us makes a change, the website in question needs to rebuild and redeploy automatically, regardless of which machine we’re doing the editing on.
  • The websites should be self-hosted on the home GrumpyNet and not visible to the outside world.

The second point is the reason that I need something that can automatically trigger builds once a change is committed - CI/CD in other words. And it’s that last point that stops me using an online service like GitHub or GitLab itself. It’s not that big a deal in practice as I use a VPN when I’m out and about, so I’ll always be able to see the generated docs wherever I am.

Given my general grumpy nerdy inquisitive nature, running my own GitLab installation here at Grumpy Labs seemed to be the best way to meet the requirements. How hard could it be?

The Setup

The GitLab online instructions are pretty clear and well written. The Docker Compose file that they suggest is straightforward:

version: '3.6'
services:
  web:
    image: 'gitlab/gitlab-ee:latest'
    restart: always
    hostname: 'gitlab.example.com'
    environment:
      GITLAB_OMNIBUS_CONFIG: |
        external_url 'https://gitlab.example.com'
        # Add any other gitlab.rb configuration here, each on its own line        
    ports:
      - '80:80'
      - '443:443'
      - '22:22'
    volumes:
      - '$GITLAB_HOME/config:/etc/gitlab'
      - '$GITLAB_HOME/logs:/var/log/gitlab'
      - '$GITLAB_HOME/data:/var/opt/gitlab'
    shm_size: '256m'

I made the usual changes to this: adding a container name, updating the various machine names, and setting it to restart unless stopped. I took note of the reference to “any other configuration here” but didn’t think too much of it, and just modified external_url to point to the server name and port that I wanted to use for HTTPS. For the rest, I figured that once it was all up and running, I could hack the config file that would be generated to fine-tune things.
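For reference, after those tweaks my service definition looked roughly like this. The hostname, port, and paths stand in for my actual setup, and I honestly can’t remember exactly which port mapping I had in place at this stage - as you’ll see shortly, I tried quite a few:

version: '3.6'
services:
  web:
    image: 'gitlab/gitlab-ee:latest'
    container_name: 'gitlab'
    restart: unless-stopped
    hostname: 'gitlab.mydomaingoeshere.com'
    environment:
      GITLAB_OMNIBUS_CONFIG: |
        external_url 'https://gitlab.mydomaingoeshere.com:8385'
    ports:
      # One of the several mappings I tried - none of them
      # played nicely with Caddy, as it turned out
      - '8385:443'
      - '22:22'
    volumes:
      - '$GITLAB_HOME/config:/etc/gitlab'
      - '$GITLAB_HOME/logs:/var/log/gitlab'
      - '$GITLAB_HOME/data:/var/opt/gitlab'
    shm_size: '256m'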

The only other thing to do was to update my caddy reverse proxy config to simplify access to it. At Grumpy Labs, we have a local domain name that we use for all of our machines, and SSL certificates are generated and renewed using dehydrated. Caddy uses these certificates and proxies any requests for these custom domains to the specific port on the machine that is running the service. In this case, I wanted to run GitLab on port 8385, and talk to it via the URL gitlab.mydomaingoeshere.com. This had worked well for all the other services that were running on the same machine, so I added the following to my Caddyfile:

gitlab.mydomaingoeshere.com {
  tls /var/lib/dehydrated/certs/gitlab.mydomaingoeshere.com/cert.pem /var/lib/dehydrated/certs/gitlab.mydomaingoeshere.com/privkey.pem
  reverse_proxy localhost:8385
}

I fired up the docker container, restarted caddy, and prepared for everything to work perfectly out of the box. Sometimes, I’m just too optimistic for my own good…

Why Is That Port Number There?

Trying to connect to gitlab.mydomaingoeshere.com, I was met with an nginx error screen, telling me that it was having trouble because it couldn’t redirect from HTTP to HTTPS. I immediately followed in the long-hallowed footsteps of my forebears by restarting both the container and caddy, hoping for something to change. It didn’t. After a lot of head scratching, I eventually tried gitlab.mydomaingoeshere.com:8385 and hey presto! I got a login screen! But why did I have to have that port number in the URL? It looks ugly, and besides, caddy should be taking care of all of this for me. Cue much grumping and gnashing of teeth.

Over the next couple of days, I spent a lot of time searching for answers to the issue. I looked at caddy proxy header forwarding. Nope, not it. Tweaking the nginx settings in the GitLab container to use different ports, proxy handling, listening hosts, and a bunch of other things. Nada. Changes to the external_url entry in the docker compose file, both with and without port numbers. Not a bean. I took a combinatorial approach, with five or six different settings all being tried in different combinations. All that did was waste time.

The outcome was always the same. Either gitlab.mydomaingoeshere.com didn’t give anything or it complained that it was having trouble with HTTP and HTTPS. If I tried adding the port number, it either didn’t load, or took me to the login screen again, a partial victory at best. None of the outcomes were what I was looking for. Surely I wasn’t asking for too much? I couldn’t be the only user in the world trying to run GitLab with a reverse proxy and home-sourced SSL certificates…

The Breakthrough

After a couple of days of banging my head against a Caddy-shaped wall, I decided to try a slightly different search term. Kagi-ing for ‘The plain HTTP request was sent to HTTPS port’ (the error I was originally getting), I came across this post on the Caddy forum. Bingo! That’s exactly it! I’d forgotten the number one rule for my simple reverse proxy setup: The server you’re proxying to doesn’t have to run with HTTPS - Caddy will take care of that for you.

I’d been so focused on running GitLab with SSL certificates that it never occurred to me that this could be the issue. At the end of the day, as long as caddy is serving everything up via HTTPS, it doesn’t matter whether the connection between caddy and the GitLab container is unencrypted. If someone is able to snoop on that connection, they’ve already got access to my server, so all hope is pretty much gone at that point anyway. So the solution was to run GitLab in a docker container with no certificates and no HTTPS requirement. Easy!

Or so I thought. Again. Grump! Removing the SSL certificates and turning off the use of Let’s Encrypt in the GitLab config file didn’t seem to help. I was still getting the same issues! I looked at all the various settings that seemed relevant, tried disabling them all in different combinations, and I still didn’t get any further. Luckily though, now that I knew I needed to turn SSL off, the Kagi search was a bit easier and returned a usable answer much faster.

This post on the GitLab site indicated that removing the external_url setting from the compose file would do the trick. The setting that was part of the original sample compose file in big UPPERCASED LETTERS. Surely it couldn’t be that easy?

Yes, it was that easy. Removing the external_url setting and supplying only a mapping for port 80 in the docker compose file worked like a charm: gitlab.mydomaingoeshere.com went straight to the login page, and I could log in and start playing around. VICTORY!
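For the record, the working service definition ended up looking something like this (paths and the host port are specific to my setup - adjust to taste). With no external_url set, GitLab’s bundled nginx just serves plain HTTP on port 80, which is exactly what Caddy wants to proxy to:

version: '3.6'
services:
  web:
    image: 'gitlab/gitlab-ee:latest'
    container_name: 'gitlab'
    restart: unless-stopped
    hostname: 'gitlab.mydomaingoeshere.com'
    # No external_url - let GitLab default to plain HTTP on port 80
    ports:
      - '8385:80'   # Caddy proxies gitlab.mydomaingoeshere.com -> localhost:8385
      - '22:22'     # still wanted for git-over-SSH
    volumes:
      - '$GITLAB_HOME/config:/etc/gitlab'
      - '$GITLAB_HOME/logs:/var/log/gitlab'
      - '$GITLAB_HOME/data:/var/opt/gitlab'
    shm_size: '256m'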

TL;CBA

If you want to run GitLab in a docker container using your own certificates via Caddy, make sure you do the following:

  • Run GitLab in non-SSL mode by not setting external_url.
  • Don’t use port 443 on the container, stick to using port 80.
  • In your Caddyfile, point the domain name you want to use at the container’s mapped port 80, supplying your own SSL certificates.
  • Reduce grumps to 0.

And that should hopefully be it!

P.S. GitLab Is Pretty Cool

Having played around with it for a couple of days now, self-hosted GitLab is pretty neat. Stuff just works out of the box. Slack integration? No problem. Want to force users to use 2FA? Sure, tick a box and enforce it for all users. I always assumed this would be a bit fiddly to get up and running, but they’ve clearly put a fair bit of work into getting things working smoothly, which is great to see. The next step is to get the CI/CD pipeline up and running for the documentation projects we want to create. I’m expecting that to be a bit tougher than the initial install was, so keep your eyes peeled for a follow-up at some stage.