Set Up Kubernetes Resources for a Local Installation

In this section, you will create a local Kubernetes cluster and configure its key components, such as the NGINX Ingress Controller and Apache Pulsar. These steps prepare your environment to run GoodData.CN locally.

Set Up Kubernetes with NGINX Ingress

You will use k3d to create a Kubernetes cluster whose nodes run as Docker containers on your machine. The minimum requirements for the cluster are:

  • At least 3 nodes on a Linux/amd64 platform.
  • The combined available capacity of the cluster before installation should be at least 6 CPUs and 18 GB of RAM, or double that if you omit the replicaCount: 1 parameter in your GoodData.CN installation configuration file (see the capacity check below).
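
Since all k3d nodes run as containers on a single Docker host, the combined cluster capacity is effectively your machine's capacity. A quick way to check it on a Linux host (a sketch using standard tools, not a GoodData-specific check):

    nproc
    free -h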

Together with the Kubernetes cluster, you will also set up the NGINX Ingress Controller, a Kubernetes component that manages external access to services in the cluster, using NGINX as a reverse proxy to route and load-balance traffic according to rules defined in Ingress resources.
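
For illustration, here is a minimal Ingress resource of the kind the controller evaluates; the host and service names are hypothetical placeholders, not objects created in this guide:

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: example-ingress            # hypothetical name
    spec:
      ingressClassName: nginx
      rules:
      - host: example.local            # hypothetical host
        http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: example-service  # hypothetical backend service
                port:
                  number: 80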

Steps:

  1. Create a file named nginx-helm-chart.yaml and add the following NGINX Ingress Controller configuration:

    apiVersion: helm.cattle.io/v1
    kind: HelmChart
    metadata:
      name: ingress-nginx
      namespace: kube-system
    spec:
      repo: https://kubernetes.github.io/ingress-nginx
      chart: ingress-nginx
      version: 4.1.1
      targetNamespace: kube-system
      valuesContent: |-
        controller:
          config:
            use-forwarded-headers: "true"
          tolerations:
          - key: "node-role.kubernetes.io/master"
            operator: "Exists"
            effect: "NoSchedule"    
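
    After the cluster is created in step 3, you can confirm that k3s picked up this manifest and installed the chart (HelmChart is a k3s-specific resource, so this check assumes a k3s-based cluster such as the one k3d creates):

      kubectl -n kube-system get helmchart ingress-nginx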
    
  2. Create a file named k3d-config.yaml and add the following k3d Kubernetes cluster configuration:

    apiVersion: k3d.io/v1alpha5
    kind: Simple
    metadata:
      name: gdcluster
    servers: 1
    agents: 2
    kubeAPI:
      host: "localhost"
      hostIP: "0.0.0.0"
      hostPort: "6443"
    network: k3d-default
    volumes:
    - volume: "${PWD}/nginx-helm-chart.yaml:/var/lib/rancher/k3s/server/manifests/ingress-nginx.yaml"
      nodeFilters:
      - server:0
    ports:
    - port: "443:443"
      nodeFilters:
      - loadbalancer
    - port: "80:80"
      nodeFilters:
      - loadbalancer
    registries:
      create:
        name: k3d-registry
        host: "0.0.0.0"
        hostPort: "5000"
        volumes:
        - registry-data:/var/lib/registry
      config: |
        mirrors:
          "k3d-registry:5000":
            endpoint:
              - http://k3d-registry:5000    
    options:
      k3d:
        wait: true
        timeout: "60s"
      k3s:
        extraArgs:
        - arg: --disable=traefik
          nodeFilters:
          - server:*
      kubeconfig:
        updateDefaultKubeconfig: true
        switchCurrentContext: true
    
  3. Create the k3d Kubernetes cluster as defined in the k3d-config.yaml file:

    k3d cluster create -c k3d-config.yaml 
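
    You can confirm the cluster exists and its nodes and load balancer started (the gdcluster name comes from k3d-config.yaml):

      k3d cluster list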
    
  4. Ensure your kubectl context is set to the k3d-gdcluster cluster:

    kubectl config use-context k3d-gdcluster
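
    The cluster should report three nodes, one server and two agents, all in the Ready state:

      kubectl get nodes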
    
  5. Ensure your pods and services are running; this may take several minutes:

    1. Check the pods:

      kubectl get pods -A
      

      Ensure all the pods are ready or completed:

      NAMESPACE     NAME                                            READY   STATUS      RESTARTS   AGE
      kube-system   coredns-6799fbcd5-nd72d                         1/1     Running     0          53s
      kube-system   helm-install-ingress-nginx-9brtw                0/1     Completed   0          53s
      kube-system   ingress-nginx-controller-77dcf769dc-hdp2v       1/1     Running     0          35s
      kube-system   local-path-provisioner-6f5d79df6-pknfx          1/1     Running     0          53s
      kube-system   metrics-server-54fd9b65b-txm94                  1/1     Running     0          53s
      kube-system   svclb-ingress-nginx-controller-5606dd8e-8hmp2   2/2     Running     0          35s
      kube-system   svclb-ingress-nginx-controller-5606dd8e-szt5z   2/2     Running     0          35s
      kube-system   svclb-ingress-nginx-controller-5606dd8e-z57kw   2/2     Running     0          35s
      
    2. Then check the services:

      kubectl get svc -A
      

      Ensure the ingress-nginx-controller service is of type LoadBalancer and has EXTERNAL-IP values assigned:

      NAMESPACE     NAME                                 TYPE           CLUSTER-IP      EXTERNAL-IP                        PORT(S)                      AGE
      default       kubernetes                           ClusterIP      10.43.0.1       <none>                             443/TCP                      60s
      kube-system   ingress-nginx-controller             LoadBalancer   10.43.184.209   172.18.0.2,172.18.0.3,172.18.0.4   80:30599/TCP,443:31432/TCP   25s
      kube-system   ingress-nginx-controller-admission   ClusterIP      10.43.68.48     <none>                             443/TCP                      25s
      kube-system   kube-dns                             ClusterIP      10.43.0.10      <none>                             53/UDP,53/TCP,9153/TCP       55s
      kube-system   metrics-server                       ClusterIP      10.43.42.42     <none>                             443/TCP                      53s
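
      To confirm the controller is reachable from your host, you can send a request through the load balancer; with no Ingress rules defined yet, NGINX should answer with a 404 over its default self-signed certificate (hence the -k flag):

        curl -ki https://localhost/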
      

Set Up Apache Pulsar

GoodData.CN uses Apache Pulsar as a message broker. You can install it using the official Helm chart provided by the Apache Pulsar project. By default, the chart deploys many components, so we recommend using the custom Helm values below to install only the essentials that GoodData.CN needs.
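
If you want to review the chart's full set of components and tunable defaults before trimming it down, you can print its default values without installing anything; the --repo flag avoids having to add the repository first:

    helm show values pulsar --repo https://pulsar.apache.org/charts --version 3.1.0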

Steps:

  1. Pull the Pulsar Docker image from the official repository and tag it for local use:

    docker pull apachepulsar/pulsar:3.1.2
    docker tag apachepulsar/pulsar:3.1.2 localhost:5000/apachepulsar/pulsar:3.1.2
    
  2. Push the tagged Pulsar image to the local k3d registry listening on localhost:5000:

    docker push localhost:5000/apachepulsar/pulsar:3.1.2
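
    The registry created by k3d exposes the standard Docker Registry HTTP API, so you can verify the image arrived; the response should list apachepulsar/pulsar:

      curl http://localhost:5000/v2/_catalog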
    
  3. Create a file named customized-values-pulsar.yaml and add the following Pulsar Helm chart configuration:

    components:
      functions: false
      proxy: false
      toolset: false
      pulsar_manager: false
    
    defaultPulsarImageTag: 3.1.2
    
    images:
      zookeeper:
        repository: k3d-registry:5000/apachepulsar/pulsar
      bookie:
        repository: k3d-registry:5000/apachepulsar/pulsar
      autorecovery:
        repository: k3d-registry:5000/apachepulsar/pulsar
      broker:
        repository: k3d-registry:5000/apachepulsar/pulsar
    
    zookeeper:
      replicaCount: 3
      podManagementPolicy: OrderedReady
      podMonitor:
        enabled: false
      restartPodsOnConfigMapChange: true
      volumes:
        data:
          name: data
          size: 2Gi
          storageClassName: local-path
    
    bookkeeper:
      replicaCount: 3
      podMonitor:
        enabled: false
      restartPodsOnConfigMapChange: true
      resources:
        requests:
          cpu: 0.2
          memory: 128Mi
      volumes:
        journal:
          name: journal
          size: 5Gi
          storageClassName: local-path
        ledgers:
          name: ledgers
          size: 5Gi
          storageClassName: local-path
      configData:
        nettyMaxFrameSizeBytes: "10485760"
    
    autorecovery:
      podMonitor:
        enabled: false
      restartPodsOnConfigMapChange: true
      configData:
        BOOKIE_MEM: >
          -Xms64m -Xmx128m -XX:MaxDirectMemorySize=128m
    
    pulsar_metadata:
      image:
        repository: k3d-registry:5000/apachepulsar/pulsar
    
    broker:
      replicaCount: 2
      podMonitor:
        enabled: false
      restartPodsOnConfigMapChange: true
      resources:
        requests:
          cpu: 0.2
          memory: 256Mi
      configData:
        PULSAR_MEM: >
          -Xms128m -Xmx256m -XX:MaxDirectMemorySize=128m
        managedLedgerDefaultEnsembleSize: "2"
        managedLedgerDefaultWriteQuorum: "2"
        managedLedgerDefaultAckQuorum: "2"
        subscriptionExpirationTimeMinutes: "5"
        systemTopicEnabled: "true"
        topicLevelPoliciesEnabled: "true"
    
    proxy:
      podMonitor:
        enabled: false
    
    kube-prometheus-stack:
      enabled: false
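
    Before installing, you can catch indentation or schema mistakes in this values file by rendering the chart locally; if the values parse and render, the command exits without output (the --repo flag lets this run before the repository is added in step 4):

      helm template pulsar pulsar --repo https://pulsar.apache.org/charts --version 3.1.0 -f customized-values-pulsar.yaml > /dev/null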
    
  4. Add the Apache Pulsar Helm repository, then install Pulsar using the customized-values-pulsar.yaml configuration:

    helm repo add apache https://pulsar.apache.org/charts
    helm install pulsar apache/pulsar \
      --namespace pulsar \
      --create-namespace \
      --version 3.1.0 \
      -f customized-values-pulsar.yaml
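
    Once the command returns, you can review the release record at any time:

      helm -n pulsar status pulsar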
    
  5. Check that the Pulsar components are running correctly by inspecting the pods in the pulsar namespace; this may take several minutes:

    kubectl -n pulsar get pods
    

    Ensure all the pods are ready or completed before continuing:

    NAME                       READY   STATUS      RESTARTS   AGE
    pulsar-bookie-0            1/1     Running     0          2m11s
    pulsar-bookie-1            1/1     Running     0          2m11s
    pulsar-bookie-2            1/1     Running     0          2m11s
    pulsar-bookie-init-c4n6k   0/1     Completed   0          2m10s
    pulsar-broker-0            1/1     Running     0          2m11s
    pulsar-broker-1            1/1     Running     0          2m11s
    pulsar-pulsar-init-l8v47   0/1     Completed   0          2m10s
    pulsar-recovery-0          1/1     Running     0          2m11s
    pulsar-zookeeper-0         1/1     Running     0          2m11s
    pulsar-zookeeper-1         1/1     Running     0          90s
    pulsar-zookeeper-2         1/1     Running     0          54s
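
    As an optional smoke test, you can run Pulsar's built-in broker health check from inside one of the broker pods; it should print ok:

      kubectl -n pulsar exec pulsar-broker-0 -- bin/pulsar-admin brokers healthcheck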