Can't see zitadel UI after Installation. #6296
Unanswered
chandra-zs asked this question in Q&A
Replies: 2 comments · 15 replies
-
I encountered the same issue today. Did anyone find a solution?
-
I faced a similar issue - not on Ingress, but on a dedicated NGINX (the principle is the same).
NGINX is set up as recommended in the documentation here: https://zitadel.com/docs/self-hosting/manage/reverseproxy/nginx
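One thing worth checking when the UI is unreachable behind a proxy is whether ZITADEL answers with the expected public Host header, since it validates requests against its configured ExternalDomain. Below is a minimal sketch of such a check; `probe_health` is a hypothetical helper, and `/debug/healthz` plus the domain are assumptions taken from this thread, not verified here:

```python
# Sketch: probe a ZITADEL health endpoint through a reverse proxy while
# sending an explicit Host header, mirroring the httpGet probes in the
# values file in this thread. probe_health is a hypothetical helper;
# the /debug/healthz path and domain are assumptions.
import http.client

def probe_health(address: str, port: int, host_header: str,
                 path: str = "/debug/healthz", use_tls: bool = True) -> int:
    """Return the HTTP status the server answers with for `path`."""
    conn_cls = http.client.HTTPSConnection if use_tls else http.client.HTTPConnection
    conn = conn_cls(address, port, timeout=5)
    try:
        # Send the public domain as Host so ZITADEL's ExternalDomain
        # validation sees the name it expects, not the pod IP.
        conn.request("GET", path, headers={"Host": host_header})
        return conn.getresponse().status
    finally:
        conn.close()
```

On a working deployment, `probe_health("id.zelarsoft.com", 443, "id.zelarsoft.com")` should return 200; if the pods are healthy but this fails, the proxy/Ingress layer is the more likely culprit.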
-
I have installed ZITADEL using Helm. The liveness, readiness, and startup probes are failing. Below are the steps I followed to install ZITADEL.
helm install crdb cockroachdb/cockroachdb --version 11.0.1 --set fullnameOverride=crdb -n zitadel
helm upgrade --install my-zitadel zitadel/zitadel --version 5.0.0 -f bitnami-zitadel-values.yaml -n zitadel
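The chart needs a 32-character masterkey; the values file below hard-codes one, and its own comment suggests generating it with `tr -dc A-Za-z0-9 </dev/urandom | head -c 32`. As a sketch, a Python equivalent for machines without that toolchain (`generate_masterkey` is a hypothetical helper):

```python
# Sketch: generate the 32-character alphanumeric masterkey the chart expects,
# equivalent to `tr -dc A-Za-z0-9 </dev/urandom | head -c 32`.
import secrets
import string

def generate_masterkey(length: int = 32) -> str:
    # secrets gives a CSPRNG, matching the intent of /dev/urandom.
    alphabet = string.ascii_letters + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_masterkey())
```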
bitnami-zitadel-values.yaml file:
zitadel:
  # The ZITADEL config under configmapConfig is written to a Kubernetes ConfigMap
  # See all defaults here:
  # https://github.com/zitadel/zitadel/blob/main/cmd/defaults.yaml
  configmapConfig:
    TLS:
      Enabled: true
      Key: " "
      Cert: " "
    Database:
      cockroach:
        Host: "crdb-public"
        User:
          # Username: zitadel
          Password: "cha****"
          SSL:
            Mode: "verify-full"
        Admin:
          SSL:
            Mode: "verify-full"
          # Username: root
          # Password: "admin@123"

  # The ZITADEL config under secretConfig is written to a Kubernetes Secret
  # See all defaults here:
  # https://github.com/zitadel/zitadel/blob/main/cmd/defaults.yaml
  secretConfig:

  # Reference the name of a secret that contains ZITADEL configuration.
  # The key should be named "config-yaml".
  configSecretName:

  # ZITADEL uses the masterkey for symmetric encryption.
  # You can generate it for example with tr -dc A-Za-z0-9 </dev/urandom | head -c 32
  masterkey: "bmUXPn2YSBRHWkWc80Cw0aW5dlLV4h4D"

  # Reference the name of the secret that contains the masterkey. The key should be named "masterkey".
  # Note: Either zitadel.masterkey or zitadel.masterkeySecretName must be set
  masterkeySecretName: ""

  # The root CA Certificate needed for establishing secure database connections
  dbSslRootCrt: ""

  # The Secret containing the root CA Certificate at key ca.crt needed for establishing secure database connections
  dbSslRootCrtSecret: "crdb-ca-secret"

  # The Secret containing the client CA Certificate and key at tls.crt and tls.key needed for establishing secure database connections
  dbSslClientCrtSecret: "crdb-client-secret"

replicaCount: 3

image:
  repository: ghcr.io/zitadel/zitadel
  pullPolicy: IfNotPresent
  # Overrides the image tag whose default is the chart appVersion.
  tag: ""

chownImage:
  repository: alpine
  pullPolicy: IfNotPresent
  tag: "3.11"

imagePullSecrets: []
nameOverride: ""
fullnameOverride: ""

serviceAccount:
  # Specifies whether a service account should be created
  create: true
  # Annotations to add to the service account
  annotations: {}
  # The name of the service account to use.
  # If not set and create is true, a name is generated using the fullname template
  name: ""

podAnnotations: {}

podSecurityContext:
  runAsNonRoot: true
  runAsUser: 1000

securityContext: {}

# Additional environment variables
env: []
  # - name: ZITADEL_DATABASE_POSTGRES_HOST
  #   valueFrom:
  #     secretKeyRef:
  #       name: postgres-pguser-postgres
  #       key: host

service:
  type: ClusterIP
  port: 8080
  protocol: http2
  annotations: {}

ingress:
  enabled: true
  className: nginx
  annotations:
    certmanager.k8s.io/cluster-issuer: letsencrypt-prod
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    cert-manager.io/issuer: "letsencrypt-prod"
  hosts:
  tls:
    - hosts:
        - id.zelarsoft.com
      secretName: zitadel-certs-secret
  enabled: true

resources: {}
nodeSelector: {}
tolerations: []
affinity: {}

initJob:
  # Once ZITADEL is installed, the initJob can be disabled.
  enabled: true
  annotations:
    helm.sh/hook: pre-install,pre-upgrade
    helm.sh/hook-delete-policy: before-hook-creation
    helm.sh/hook-weight: "1"
  resources: {}
  activeDeadlineSeconds: 300
  extraContainers: []
  podAnnotations: {}

setupJob:
  annotations:
    helm.sh/hook: pre-install,pre-upgrade
    helm.sh/hook-delete-policy: before-hook-creation
    helm.sh/hook-weight: "2"
  resources: {}
  activeDeadlineSeconds: 300
  extraContainers: []
  podAnnotations: {}
  machinekeyWriterImage:
    repository: bitnami/kubectl
    tag: ""

readinessProbe:
  httpGet:
    port: https
    scheme: HTTPS
    # path: /healthcheck/status.json
    httpHeaders:
      - name: Host
        value: id.zelarsoft.com
  initialDelaySeconds: 20
  periodSeconds: 10
  enabled: true
  initialDelaySeconds: 300
  periodSeconds: 5
  failureThreshold: 3

livenessProbe:
  httpGet:
    port: https
    scheme: HTTPS
    # path: /healthcheck/status.json
    httpHeaders:
      - name: Host
        value: id.zelarsoft.com
  initialDelaySeconds: 20
  periodSeconds: 10
  enabled: true
  initialDelaySeconds: 200
  periodSeconds: 5
  failureThreshold: 3

startupProbe:
  enabled: false
  periodSeconds: 1
  # failureThreshold: 30

metrics:
  enabled: false
  serviceMonitor:
    # If true, the chart creates a ServiceMonitor that is compatible with Prometheus Operator
    # https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/api.md#monitoring.coreos.com/v1.ServiceMonitor.
    # The Prometheus community Helm chart installs this operator
    # https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack#kube-prometheus-stack
    enabled: false
    honorLabels: false
    honorTimestamps: true

pdb:
  enabled: false
  # these values are used for the PDB and are mutually exclusive
  minAvailable: 1
  # maxUnavailable: 1
  annotations: {}
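Before (re)installing, it can help to sanity-check a values file like the one above: the masterkey must be exactly 32 characters, and with `TLS.Enabled: true` the blank `Key`/`Cert` shown here are a plausible reason the HTTPS probes never succeed. Here is a minimal sketch of such a check, assuming PyYAML is installed; `validate_values` is a hypothetical helper and its checks mirror the comments in the values file:

```python
# Sketch: sanity-check a zitadel Helm values document before installing.
# validate_values is a hypothetical helper; its checks mirror the comments
# in the values file above (masterkey length, masterkey/masterkeySecretName,
# TLS Key/Cert must not be blank when TLS is enabled). Requires PyYAML.
import yaml

def validate_values(text: str) -> list[str]:
    problems = []
    values = yaml.safe_load(text) or {}
    z = values.get("zitadel") or {}

    masterkey = z.get("masterkey") or ""
    if masterkey and len(masterkey) != 32:
        problems.append(f"masterkey is {len(masterkey)} chars, expected 32")
    if not masterkey and not z.get("masterkeySecretName"):
        problems.append("either masterkey or masterkeySecretName must be set")

    tls = (z.get("configmapConfig") or {}).get("TLS") or {}
    key_ok = bool((tls.get("Key") or "").strip())
    cert_ok = bool((tls.get("Cert") or "").strip())
    if tls.get("Enabled") and not (key_ok and cert_ok):
        problems.append("TLS.Enabled is true but TLS.Key/TLS.Cert are blank")
    return problems
```

Run against the values above, this flags the blank TLS `Key`/`Cert`: the probes use `scheme: HTTPS`, so if ZITADEL cannot serve TLS they can never pass.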