GoodData.CN Installation Configuration Options
The installation of GoodData.CN is configured using a Helm chart. You can download the most up-to-date version of the GoodData Helm chart from https://charts.gooddata.com using Helm. You can review the default Helm chart values on Artifact Hub.
Before installing the GoodData.CN Helm chart, you should create a YAML file to override the default Helm chart values with configurations tailored to your specific installation requirements.
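A typical workflow looks like this (the release name, namespace, and values file name customized-values-gooddata-cn.yaml are illustrative):

helm repo add gooddata https://charts.gooddata.com
helm repo update
# Inspect the default values you may want to override
helm show values gooddata/gooddata-cn > default-values.yaml
# Install the chart with your overrides
helm install gooddata-cn gooddata/gooddata-cn \
  --namespace gooddata-cn --create-namespace \
  --values customized-values-gooddata-cn.yaml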
Save Your YAML Configuration File
We strongly recommend saving the YAML configuration file used during installation. This file will be essential when upgrading GoodData.CN to a newer version.
Common Options
This section provides a non-exhaustive list of common configuration options you may want to consider when installing GoodData.CN.
OIDC Provider
By default, GoodData.CN installs with an internal OpenID Connect (OIDC) identity provider called Dex. For production deployments, we strongly recommend using your own external OIDC identity provider and disabling Dex with the following configuration:
deployDexIdP: false
You can configure GoodData.CN to use your external OIDC provider after the Helm chart installation by adjusting the organization settings.
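For illustration, a minimal sketch of an Organization custom resource pointing at an external provider; the hostname, issuer, and client values are placeholders, and the exact fields are described in the GoodData.CN authentication documentation:

apiVersion: controllers.gooddata.com/v1
kind: Organization
metadata:
  name: acme-org
  namespace: gooddata-cn
spec:
  id: acme
  name: Acme Corp
  hostname: analytics.company.com
  adminGroup: adminGroup
  adminUser: admin
  # External OIDC provider (placeholder values)
  oauthIssuerLocation: https://idp.company.com/
  oauthClientId: gooddata-cn
  oauthClientSecret: <client-secret>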
If you decide to keep Dex enabled, you’ll need to configure the ingress host where the identity provider will be running:
dex:
  ingress:
    authHost: 'auth.company.com'
    tls:
      authSecretName: gooddata-cn-auth-tls
    # annotations:
    #   cert-manager.io/cluster-issuer: letsencrypt-production
The example annotation shows how to integrate with cert-manager to provision TLS certificates automatically. If you are not using automated certificate provisioning, ensure that the gooddata-cn-auth-tls secret already exists and contains a certificate and key valid for the auth.company.com DNS name. Also, ensure that auth.company.com resolves to the load balancer in front of your ingress controller.
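If you provision the certificate manually, the secret can be created from existing files (the file names are illustrative):

kubectl -n gooddata-cn create secret tls gooddata-cn-auth-tls \
  --cert=auth.company.com.crt --key=auth.company.com.key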
General recommendation: For production installations, you should disable Dex and use an external OIDC identity provider instead.
PostgreSQL
By default, GoodData.CN installs with an internal PostgreSQL database. However, we recommend using an external PostgreSQL service for production environments. To configure this, use the following snippet:
deployPostgresHA: false
service:
  postgres:
    # specify hostname of your remote postgresql server
    host: postgres.database.example.com
    port: 5432
    # If you use Azure Database for PostgreSQL, the username must
    # contain the hostname, e.g. postgres@gooddata-cn-pg.
    username: postgres
    password: secretpassword123
    # existingSecret: ''
Optionally, you can create a Kubernetes secret with the key postgresql-password containing the password for the postgres user. Specify the name of this secret in existingSecret instead of using the password key.
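For example (the secret name gooddata-cn-pg is illustrative):

kubectl -n gooddata-cn create secret generic gooddata-cn-pg \
  --from-literal=postgresql-password='secretpassword123'

Then set existingSecret: gooddata-cn-pg and omit the password key.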
General recommendation: Disable the default PostgreSQL database and use an external one.
Redis
GoodData.CN can be configured to use an external Redis service for caching. To do this, update your configuration as follows:
deployRedisHA: false
service:
  redis:
    # specify list of your redis servers
    hosts:
      - redis.cache.example.com
    port: 6379
    clusterMode: false
    # If your Redis service has authentication enabled, uncomment and declare the password
    # password: <REDIS_PASSWORD>
    # existingSecret: ''
You may also create a Kubernetes secret with the key redis-password containing the Redis password. Specify the name of this secret in existingSecret instead of using the password key.
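For example (the secret name gooddata-cn-redis is illustrative):

kubectl -n gooddata-cn create secret generic gooddata-cn-redis \
  --from-literal=redis-password='<REDIS_PASSWORD>'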
General recommendation: The default internal Redis is sufficient for most cases.
Storage
If you have the option to access AWS S3, configure storage using the following snippet:
exportController:
  fileStorageBaseUrl: 's3://<bucket>.s3-<region>.amazonaws.com/someprefix'
  s3:
    # -- AWS access key id of IAM account with access to S3 bucket
    accessKey: ''
    # -- AWS secret access key of IAM account with access to S3 bucket
    secretKey: ''
    # -- you can specify existing secret with cloud credentials instead.
    # It must contain keys `access-key-id` and `secret-access-key`
    existingSecret: ''
quiver:
  durableStorageType: S3
  s3DurableStorage:
    s3Bucket: my-bucket
    s3BucketPrefix: someprefix
    s3Region: us-east-2
    # s3AccessKey: AKIAxxxxxxx
    # s3SecretKey: xxxxxxxxxxx
    # authType: aws_tokens
There are three ways to provide AWS credentials:
- Use accessKey and secretKey.
- Create a Kubernetes secret with the keys access-key-id and secret-access-key, and specify its name in existingSecret (see the example below).
- (AWS deployments only) Set up an IAM Role for the gooddata-cn ServiceAccount with a policy that allows access to the S3 bucket, eliminating the need for credentials.
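For the second option, the secret can be created like this (the secret name gooddata-cn-s3 and the credential values are illustrative):

kubectl -n gooddata-cn create secret generic gooddata-cn-s3 \
  --from-literal=access-key-id='AKIAxxxxxxx' \
  --from-literal=secret-access-key='xxxxxxxxxxx'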
We recommend setting up a lifecycle policy to automatically delete objects older than one day from the bucket.
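With the AWS CLI, such a lifecycle rule could be applied as follows (a sketch reusing the illustrative bucket and prefix from above):

aws s3api put-bucket-lifecycle-configuration --bucket my-bucket \
  --lifecycle-configuration '{"Rules": [{"ID": "expire-exports", "Status": "Enabled",
  "Filter": {"Prefix": "someprefix/"}, "Expiration": {"Days": 1}}]}'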
Alternatively, if S3 cannot be used, you can attach a volume with the ReadWriteMany access mode. You’ll need a StorageClass that supports this access mode, configured as follows:
exportController:
  # -- Base url for export file storage. Must be a local directory
  fileStorageBaseUrl: '/tmp/exports'
  fsExportStorage:
    # -- External storage class name providing ReadWriteMany accessMode
    storageClassName: ""
    # -- Size of the export storage volume.
    pvcRequestStorageSize: "1Gi"
quiver:
  durableStorageType: FS
  fsDurableStorage:
    storageClassName: efs-csi
Note that there is no automatic cleanup job for old files in this setup, so you should implement your own.
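One possible approach is a Kubernetes CronJob that periodically prunes old files from the export volume. A minimal sketch, assuming the exports PVC is named gooddata-cn-exports (check the actual PVC name created by the chart in your namespace):

apiVersion: batch/v1
kind: CronJob
metadata:
  name: exports-cleanup
  namespace: gooddata-cn
spec:
  schedule: "0 3 * * *" # run daily at 03:00
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: cleanup
              image: busybox:1.36
              # delete export files modified more than 24 hours ago
              command: ["sh", "-c", "find /exports -type f -mtime +0 -delete"]
              volumeMounts:
                - name: exports
                  mountPath: /exports
          volumes:
            - name: exports
              persistentVolumeClaim:
                claimName: gooddata-cn-exports # assumed PVC name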
The size of the persistent volume for FlexCache durable storage is set in resultCache.totalCacheLimit (default: 32 GiB).
General recommendation: Configuring storage is mandatory.
Metadata Encryption
Sensitive data, such as credentials for data sources, should be encrypted. You will need to provide a keyset where encryption keys will be stored:
metadataApi:
  encryptor:
    existingSecret: gdcn-encryption
Keep the encryption keyset in a secure location. If the keyset is lost, you will not be able to decrypt metadata values.
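One way to produce such a keyset is Google Tink's tinkey tool; a sketch, assuming the secret key name keySet (verify the exact key name expected by metadataApi.encryptor in the chart documentation):

tinkey create-keyset --key-template AES256_GCM --out keyset.json
kubectl -n gooddata-cn create secret generic gdcn-encryption \
  --from-file=keySet=keyset.json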
General recommendation: We strongly recommend using metadata encryption.
License Key
The license key obtained from our sales representative must be stored in a Kubernetes secret:
kubectl -n gooddata-cn create secret generic gooddata-cn-license \
--from-literal=license=key/eyJhY2NvdW50I...a-very-long-text...GfMjaRJZcg==
The license key must be provided as a single line without any line breaks. Reference the secret in your Helm values:
license:
  existingSecret: gooddata-cn-license
General recommendation: Providing a license key is mandatory.
Others
Here are some other configuration options you may find useful:
- image.repositoryPrefix and image.dockerhubPrefix allow you to change the container registry host to your private registry, useful for air-gapped deployments.
- podMonitor creates PodMonitor objects for Prometheus if set to true.
- platform_limits lets you adjust various data limits.
- serviceAccount.annotations is required if using IAM Roles for Service Accounts (IRSA).
- networkPolicy creates NetworkPolicy resources to limit pod access if set to true.
- ingress.ingressClassName specifies the IngressClass name for your ingress controller, allowing you to choose which controller to use if multiple are available.
- podDisruptionBudget limits the number of concurrent disruptions your application can experience, enhancing availability while allowing the cluster administrator to manage cluster nodes.
- resultCache.totalCacheLimit adjusts the cache used by Quiver (default: 32 GiB).
- replicaCount allows you to adjust the number of pod replicas. Reducing this to 1 can save resources during testing, but it is not recommended for production environments.
Examples
Here is an example of a finalized GoodData.CN installation values file from the Install on Azure guide:
license:
  existingSecret: $LICENSE_KEY_SECRET_NAME
deployPostgresHA: false
deployDexIdP: false
service:
  postgres:
    host: $POSTGRESQL_HOSTNAME
    port: $POSTGRESQL_PORT
    username: $POSTGRESQL_ADMIN_NAME
    existingSecret: $POSTGRESQL_CREDENTIALS_SECRET_NAME
podDisruptionBudget:
  maxUnavailable: 1
metadataApi:
  encryptor:
    existingSecret: $GD_ENCRYPTION_KEYSET_SECRET
resultCache:
  totalCacheLimit: 10Gi
quiver:
  fsDatasourceFsStorage:
    storageSize: 10Gi
    storageClassName: azureblob-nfs-premium
  datasourceFs:
    storageType: FS
  durableStorageType: FS
  fsDurableStorage:
    storageClassName: azureblob-nfs-premium
exportController:
  fsExportStorage:
    storageClassName: azureblob-nfs-premium
    pvcRequestStorageSize: 10Gi
For a different example, see the .yaml file in Perform Local Installation.