Monday, January 29, 2024

Self-hosting Supabase in k8s with Helm and Keycloak SSO

Supabase is a free, open-source Firebase alternative.

The company behind the product offers a SaaS solution and also publishes the source code so you can host it yourself.

For development purposes, self-hosting is done via Docker Compose.

When running it in production in a Kubernetes cluster, you can use a Helm chart instead.

There are currently two Helm charts available for this:

https://github.com/supabase-community/supabase-kubernetes from the community and https://bitnami.com/stack/supabase/helm from Bitnami.

We tried both, but only got the Bitnami version running.

The problem with self-hosting Supabase is that you won't get any support from the company, and the community around self-hosting seems to be small.

So you are basically on your own when running it in production on your own infrastructure.

The problem we faced with the supabase-community Helm chart was that user registration via the UI was not possible, apparently a known problem.

We then tried the Bitnami Helm chart and got it working, after extensive googling and trial and error.

The advantage of the Bitnami package is that it works similarly to other Bitnami packages, so you can reuse your knowledge in that area.

The final struggle was enabling SSO via Keycloak, so we could reuse our single point of authentication.

For the whole Helm chart (as of version 0.23.11) to work correctly, you have to define these URLs:

1. The URL of your application, let's say it's https://ai-frontend.company.com

2. The URL of the Supabase API, let's say it's https://supabase-api.company.com

3. The URL of your Keycloak (SSO) server, let's say it's https://sso.company.com

What we don't have at the moment is an exposed Studio URL, since this needs additional authentication protection, which we have not yet gotten working. But that URL could then be (a possible ingress for it is sketched after this list):

4. https://supabase.company.com
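
For when we do expose it, the Bitnami chart also has an ingress section for Studio (check the chart's values for the exact keys). A minimal sketch, assuming the hostname from item 4 and the same cert-manager/nginx setup used further below; we have not run this ourselves, and you still need to put authentication in front of it:

studio:
  ingress:
    enabled: true
    hostname: "supabase.company.com"
    tls: true
    annotations:
      kubernetes.io/ingress.class: nginx
      cert-manager.io/cluster-issuer: letsencrypt-issuer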

When using the current 0.23.11 Helm chart, you will receive auth errors, see https://github.com/bitnami/charts/issues/16988

To prevent these, you can pin the auth (GoTrue) Docker image to version v2.91.0:

auth:
  image:
    registry: docker.io
    #
    # From https://github.com/bitnami/charts/issues/16988
    #
    repository: supabase/gotrue
    tag: v2.91.0

The final Helm values then look something like this:

publicURL: "https://supabase-api.company.com"

auth:
  defaultConfig: |
    GOTRUE_API_HOST: "0.0.0.0"
    GOTRUE_API_PORT: {{ .Values.auth.containerPorts.http | quote }}
    GOTRUE_SITE_URL: "https://ai-frontend.company.com"
    GOTRUE_URI_ALLOW_LIST: "*"
    GOTRUE_DISABLE_SIGNUP: "false"
    GOTRUE_DB_DRIVER: "postgres"
    GOTRUE_JWT_DEFAULT_GROUP_NAME: "authenticated"
    GOTRUE_JWT_ADMIN_ROLES: "service_role"
    GOTRUE_JWT_AUD: "authenticated"
    GOTRUE_JWT_EXP: "3600"
    GOTRUE_EXTERNAL_EMAIL_ENABLED: "true"
    GOTRUE_MAILER_AUTOCONFIRM: "true"
    GOTRUE_SMTP_ADMIN_EMAIL: "no-reply@company.com"
    GOTRUE_SMTP_HOST: "inbucket.default.svc.cluster.local"
    GOTRUE_SMTP_PORT: "2500"
    GOTRUE_SMTP_SENDER_NAME: "noreply@company.com"
    GOTRUE_EXTERNAL_PHONE_ENABLED: "false"
    GOTRUE_SMS_AUTOCONFIRM: "false"
    GOTRUE_MAILER_URLPATHS_INVITE: "https://supabase-api.company.com/auth/v1/verify"
    GOTRUE_MAILER_URLPATHS_CONFIRMATION: "https://supabase-api.company.com/auth/v1/verify"
    GOTRUE_MAILER_URLPATHS_RECOVERY: "https://supabase-api.company.com/auth/v1/verify"
    GOTRUE_MAILER_URLPATHS_EMAIL_CHANGE: "https://supabase-api.company.com/auth/v1/verify"
    GOTRUE_EXTERNAL_KEYCLOAK_ENABLED: "true"
    GOTRUE_EXTERNAL_KEYCLOAK_CLIENT_ID: "Supabase_Client_ID"
    GOTRUE_EXTERNAL_KEYCLOAK_SECRET: "supabase-sso-client-secret"
    GOTRUE_EXTERNAL_KEYCLOAK_REDIRECT_URI: "https://supabase-api.company.com/auth/v1/callback"
    GOTRUE_EXTERNAL_KEYCLOAK_URL: "https://sso.company.com/auth/realms/MYREALM"

These are the parameters required to get it up and running; login via email should work now.
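
On the Keycloak side, the client referenced above has to exist and have the Supabase callback URL registered as a valid redirect URI. As a rough sketch, such a client in Keycloak realm-import form (e.g. for use with keycloak-config-cli) could look like this, reusing the placeholder names from above:

realm: "MYREALM"
clients:
  - clientId: "Supabase_Client_ID"
    protocol: "openid-connect"
    publicClient: false
    standardFlowEnabled: true
    secret: "supabase-sso-client-secret"
    redirectUris:
      # Must match GOTRUE_EXTERNAL_KEYCLOAK_REDIRECT_URI
      - "https://supabase-api.company.com/auth/v1/callback"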

Unfortunately, SSO login will not yet work, due to 502 gateway errors and "no vmRoute found" errors.

So the normal app URL works: it redirects you to the auth endpoint of Supabase, which then redirects you to your Keycloak server. Once Keycloak has successfully authenticated you, it returns you to the Supabase API to authenticate against the backend, and then it should redirect you back to the original frontend.

This required some additional hours of investigation, and it turned out that the HTTP headers used after Keycloak authentication are too big for the default Kong configuration and for our nginx ingress controller.

The nasty part of finding this was that Kong logs access and errors to log files inside the pod and does not expose them on stdout/stderr.

https://github.com/bitnami/charts/issues/22755
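
If you want to avoid exec'ing into the pod to read those log files, Kong's proxy logs can be pointed at the container output via its standard KONG_PROXY_ACCESS_LOG / KONG_PROXY_ERROR_LOG settings. A small sketch, using the same nested kong values structure as below; we have not verified this with the chart:

kong:
  kong:
    extraEnvVars:
      # Send Kong's proxy access and error logs to the container output
      - name: KONG_PROXY_ACCESS_LOG
        value: "/dev/stdout"
      - name: KONG_PROXY_ERROR_LOG
        value: "/dev/stderr"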

The solution for the header size problem was to tell Kong about the larger headers via its configuration:

kong:
  kong:
    extraEnvVars:
      - name: KONG_NGINX_PROXY_LARGE_CLIENT_HEADER_BUFFERS
        value: "64 128K"
      - name: KONG_NGINX_PROXY_PROXY_BUFFER_SIZE
        value: "128k"
      - name: KONG_NGINX_PROXY_PROXY_BUFFERS
        value: "64 128k"

This solved the Kong 502 gateway error, but our main ingress controller still struggled with the large headers and also returned a 502 gateway error. In your nginx ingress logs you will see errors like: upstream sent too big header while reading response header from upstream.

So we also had to increase these values on the main nginx ingress controller, via annotations on the Kong ingress:

kong:
  ingress:
    enabled: true
    hostname: "supabase-api.company.com"
    tls: true
    annotations:
      kubernetes.io/ingress.class: nginx
      cert-manager.io/cluster-issuer: letsencrypt-issuer
      acme.cert-manager.io/http01-edit-in-place: "true"
      nginx.org/proxy-connect-timeout: "600s"
      nginx.org/proxy-send-timeout: "600s"
      nginx.org/proxy-read-timeout: "600s"
      nginx.org/client-max-body-size: "20m"
      # Larger buffers are required for Keycloak SSO, since the headers get big
      # https://www.cyberciti.biz/faq/nginx-upstream-sent-too-big-header-while-reading-response-header-from-upstream/
      nginx.org/proxy-buffer-size: "256k"
      nginx.org/proxy-buffers: "64 512k"
      nginx.org/large-client-header-buffers: "64 128k"
      nginx.org/proxy-busy-buffers-size: "512k"

This finally allowed us to use the Keycloak SSO capability of Supabase.

Don't hesitate to comment/ask here if you need more information about the k8s setup via Helm.