# Helm Chart Configuration & Dependencies
This guide provides a structured approach to deploying Exivity on Kubernetes using Helm, including all required dependencies and configuration steps.
## Key Configuration Options

Below are the most important Helm values to consider for a typical deployment. For the full list and advanced options, always refer to the chart's `values.yaml`.
| Value | Description | Example |
|---|---|---|
| `licence` | Exivity licence key. Use `demo` for evaluation, or request a trial/production key. | `licence: "demo"` |
| `storage.storageClass` | Storage class for all persistent volumes. Must support RWX (e.g., NFS). | `storage.storageClass: nfs-client` |
| `ingress.enabled` | Enable/disable ingress. | `ingress.enabled: true` |
| `ingress.host` | Hostname for ingress. | `ingress.host: exivity.example.com` |
| `secret.appKey` / `secret.jwtSecret` | Application and JWT secrets. Set for production. | `secret.appKey: <your-key>` |
| `postgresql.enabled` | Deploy embedded PostgreSQL (`true`) or use external (`false`). | `postgresql.enabled: false` |
| `postgresql.host` | Hostname for external PostgreSQL. | `postgresql.host: db.example.com` |
| `rabbitmq.enabled` | Deploy embedded RabbitMQ (`true`) or use external (`false`). | `rabbitmq.enabled: false` |
| `rabbitmq.host` | Hostname for external RabbitMQ. | `rabbitmq.host: mq.example.com` |
| `service.tag` | Image tag for all services. | `service.tag: "3.29.3"` |
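Taken together, these values can be collected in an override file. The following is an illustrative sketch only: the hostnames and secret values are placeholders, and the exact key names should be verified against the chart's `values.yaml`.

```yaml
# values-example.yaml -- illustrative overrides with placeholder values
licence: "demo"

storage:
  storageClass: nfs-client

ingress:
  enabled: true
  host: exivity.example.com

secret:
  appKey: "<your-app-key>"        # set strong, unique values in production
  jwtSecret: "<your-jwt-secret>"

postgresql:
  enabled: false                  # use an external PostgreSQL server
  host: db.example.com

rabbitmq:
  enabled: false                  # use an external RabbitMQ server
  host: mq.example.com

service:
  tag: "3.29.3"
```

Such a file can be passed to Helm with `-f values-example.yaml` instead of repeating individual `--set` flags.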
## Prerequisites

- Kubernetes cluster (cloud or on-premises)
- `kubectl` and `helm` installed
- Sufficient cluster resources (see System requirements)
- NFSv4-compatible shared storage (required for Exivity)
## 1. NFS Storage Setup
Exivity requires NFSv4 for shared storage. All Persistent Volumes (PVs) must support the `ReadWriteMany` (RWX) access mode. File locking is essential for correct operation and data integrity.
### Install NFS Server Provisioner (example)
If you do not have a managed NFS solution, you can deploy an in-cluster NFS provisioner using Helm:
```shell
helm repo add nfs-ganesha-server-and-external-provisioner https://kubernetes-sigs.github.io/nfs-ganesha-server-and-external-provisioner/
helm install nfs-server nfs-ganesha-server-and-external-provisioner/nfs-server-provisioner \
  --namespace nfs-server \
  --create-namespace \
  --wait \
  --set persistence.enabled=true \
  --set persistence.size=5Gi \
  --set storageClass.name=nfs-client \
  --set storageClass.allowVolumeExpansion=true \
  --set 'storageClass.mountOptions[0]=nfsvers=4.2' \
  --set 'storageClass.mountOptions[1]=rsize=4096' \
  --set 'storageClass.mountOptions[2]=wsize=4096' \
  --set 'storageClass.mountOptions[3]=hard' \
  --set 'storageClass.mountOptions[4]=retrans=3' \
  --set 'storageClass.mountOptions[5]=proto=tcp' \
  --set 'storageClass.mountOptions[6]=noatime' \
  --set 'storageClass.mountOptions[7]=nodiratime'
```
**Note:** Ensure your NFS server and Kubernetes storage class are configured for NFSv4. For cloud providers, see their documentation for managed NFS solutions (e.g., GKE Filestore).
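Before installing Exivity, it can be worth confirming that the new storage class actually provisions RWX volumes. The manifest below is a throwaway test sketch; the `rwx-test` name is arbitrary, and it assumes the `nfs-client` storage class created above.

```yaml
# rwx-test.yaml -- throwaway PVC to verify RWX provisioning works
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rwx-test
spec:
  accessModes:
    - ReadWriteMany          # the access mode Exivity requires
  storageClassName: nfs-client
  resources:
    requests:
      storage: 1Gi
```

Apply it with `kubectl apply -f rwx-test.yaml`; `kubectl get pvc rwx-test` should eventually show a `Bound` status. Delete the PVC afterwards with `kubectl delete -f rwx-test.yaml`.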
## 2. Add the Exivity Helm Repository

```shell
helm repo add exivity https://charts.exivity.com
helm repo update
```
## 3. Install the Exivity Helm Chart

Replace `<namespace>` and `<release-name>` as needed. Set the storage class to the one created above (e.g., `nfs-client`):

```shell
helm upgrade --install <release-name> exivity/exivity \
  --namespace <namespace> \
  --create-namespace \
  --wait \
  --set storage.storageClass=nfs-client
```
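After the release is installed, a few standard commands give a quick health check. These are ordinary `kubectl`/`helm` invocations; substitute your own namespace and release name.

```shell
# List Exivity pods; all should eventually report Ready
kubectl get pods --namespace <namespace>

# Confirm the Helm release deployed successfully
helm status <release-name> --namespace <namespace>

# Verify that all PVCs were bound by the nfs-client storage class
kubectl get pvc --namespace <namespace>
```

If pods stay in `Pending`, unbound PVCs are the most common cause; check the PVC events with `kubectl describe pvc`.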
## 4. Helm Chart Configuration

- See `values.yaml` for all available options.
- For custom database or RabbitMQ, see Custom K8s configuration.
- For advanced storage and volume details, see Kubernetes Volumes configuration.
## 5. NFS Readiness Probe Improvement
Note: As of Exivity v3.29.3, the NFS readiness probe has been replaced with a more reliable solution, improving detection of NFS connectivity issues and overall system stability. See release notes for details.
## Important: PostgreSQL for Production

**Production Recommendation: Use a Dedicated PostgreSQL Server**
- The embedded Bitnami PostgreSQL chart is included for convenience and is suitable for quick start, testing, and some production scenarios.
- However, for most production environments, we recommend using a dedicated PostgreSQL server—either a managed database from your cloud provider or a self-hosted, production-grade PostgreSQL instance—for improved scalability, backup, and high availability.
- If you choose to use the embedded Bitnami PostgreSQL chart in production, ensure you have appropriate backup and monitoring strategies in place.
To use an external database, set `postgresql.enabled: false` and configure `postgresql.host`, `postgresql.port`, and authentication values accordingly.
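As a sketch, an external-database override might look like the following. The exact authentication key names vary between chart versions and are not shown here; consult the chart's `values.yaml` for the authoritative keys.

```yaml
postgresql:
  enabled: false          # do not deploy the embedded Bitnami chart
  host: db.example.com    # your dedicated PostgreSQL server
  port: 5432
  # Authentication values (user, password or an existing secret) go here;
  # check values.yaml for the exact key names used by your chart version.
```

Using an existing Kubernetes Secret for credentials, where the chart supports it, avoids committing passwords to your values file.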
## 6. Experimental: Longhorn Storage Support

New in v3.34.0: Exivity now supports alternative CSI storage providers, such as Longhorn, thanks to the ability to set `securityContext` per service. NFS remains our production recommendation, but you can now try Longhorn as an experimental feature.
### Using Longhorn (Experimental)

- Install Longhorn on your cluster (reference: Longhorn Helm install docs):

  ```shell
  helm repo add longhorn https://charts.longhorn.io
  helm repo update
  helm install longhorn longhorn/longhorn --namespace longhorn-system --create-namespace
  ```

- Set the Exivity Helm value `storage.storageClass` to your Longhorn storage class (usually `longhorn`).
- **Important:** When using Longhorn, you must explicitly set the PVC sizes for each service in your `values.yaml` file. Unlike NFS, Longhorn strictly enforces the defined PVC sizes.
### Recommended PVC Sizes for Longhorn

```yaml
storage:
  storageClass: longhorn
  pvcSizes:
    log:
      chronos: 5Gi
      edify: 5Gi
      executor: 5Gi
      glass: 5Gi
      griffon: 5Gi
      horizon: 5Gi
      pigeon: 5Gi
      proximityApi: 5Gi
      proximityCli: 5Gi
      transcript: 5Gi
      use: 5Gi
    config:
      etl: 1Gi
      griffon: 1Gi
      chronos: 1Gi
    data:
      exported: 30Gi
      extracted: 30Gi
      import: 30Gi
      report: 30Gi
```
**NFS vs. Longhorn:**

- With NFS, the total available storage is shared across all PVCs, and individual PVC sizes are not enforced; any PVC can use up the full NFS server space.
- With Longhorn, each PVC is strictly limited to its defined size. You must ensure the sizes are sufficient for your workload.

**Production Note:** NFS is still the recommended and fully supported storage solution for Exivity in production. Longhorn support is experimental and should be used for testing or non-critical workloads only.
## 7. Observability & Monitoring (Kubernetes)
Exivity provides built-in observability for Kubernetes deployments using Prometheus and Grafana. This allows you to monitor service health, NFS storage, and readiness directly in your cluster.
### Grafana Dashboard

A ready-to-use Grafana dashboard is provided.

- File: `exivity-health.grafana.json` (download)

How to use:

- Import this JSON file into your Grafana instance (see Grafana import docs).
- The dashboard visualizes Exivity service health, NFS writability, and command status using Prometheus metrics.
### Prometheus Alert Rules

A set of Prometheus alert rules is provided for Exivity.

- File: `readiness-probe.rules.yaml` (download)

How to use:

- Add this YAML file to your Prometheus alerting rules configuration.
- Alerts include:
  - `ServiceDown`: triggers if an Exivity service is down for 10 minutes.
  - `NfsDirNotWritable`: triggers if an NFS directory is not writable for 10 minutes.
  - `CommandHealthy`: triggers if a monitored command is unhealthy for 10 minutes.
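For illustration, a rule like `ServiceDown` could be expressed roughly as follows in standard Prometheus alerting-rule syntax. The metric name `up` and the `job` label value are assumptions for the sketch; the shipped `readiness-probe.rules.yaml` is the authoritative definition.

```yaml
groups:
  - name: exivity-health
    rules:
      - alert: ServiceDown
        # Fires when a scraped Exivity target has been down for 10 minutes.
        expr: up{job="exivity"} == 0   # job label is an assumption
        for: 10m
        labels:
          severity: critical
        annotations:
          summary: "Exivity service {{ $labels.instance }} is down"
```

The `for: 10m` clause matches the 10-minute thresholds described above, so transient scrape failures do not page anyone.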
### Requirements
- Prometheus must be scraping the Exivity metrics endpoints.
- Grafana must be connected to your Prometheus data source.
For more details on metrics and alerting, see the files in `static/grafana/` and the Prometheus and Grafana documentation.
## Troubleshooting & Support
- If you encounter issues with NFS or storage, check the pod logs and ensure your NFS server supports NFSv4 and RWX.
- For further help, contact Exivity support.