Kubernetes

This guide covers how to get started with Deephaven quickly in a Kubernetes environment. Commands are shown for running in a Unix-like environment.

Note

This installation is intended for trial use only. System defaults are used throughout this document, which are not suitable for many production environments. For complete coverage of the Deephaven Kubernetes installation, see the Kubernetes installation guide.

Prerequisites

Before deploying Deephaven Enterprise with Kubernetes, you need the following prerequisites:

  • A Kubernetes cluster, with a dedicated namespace created for the Deephaven installation.
  • The kubectl, docker, and helm command line tools.
  • An artifact repository to which Docker images can be pushed and from which Kubernetes pods may pull them.
  • Three Deephaven distributable packages. This guide uses version 1.20240517.344 as an example, though yours may differ.
    • Helm charts and other utilities.
    • Docker images for Deephaven.
    • Docker images for the etcd software used by Deephaven. These images were originally sourced from Bitnami, though Deephaven provides a specific version that does not depend on Bitnami's repository.
  • A TLS webserver certificate and the private key that corresponds to it. The webserver and certificate must meet Deephaven's requirements. The Deephaven installation includes a LoadBalancer service (Envoy) that is the entry point for the application. A DNS entry for the hostname associated with this certificate must be created after the installation.
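Before starting, it can save time to confirm that the required command line tools are actually on your PATH. A minimal sketch (the tool list comes from the prerequisites above; `check_tools` is a hypothetical helper name):

```shell
# check_tools: report each required tool and fail if any is missing from PATH.
check_tools() {
    missing=0
    for tool in "$@"; do
        if command -v "$tool" >/dev/null 2>&1; then
            echo "found: $tool"
        else
            echo "MISSING: $tool"
            missing=1
        fi
    done
    return "$missing"
}

check_tools kubectl docker helm || echo "install the missing tools before continuing"
```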

Set the namespace for your Kubernetes context

If you haven't already, create your Kubernetes namespace and set it to the default for your kubectl context.

kubectl create namespace <your-namespace>
kubectl config set-context --current --namespace <your-namespace>

Unzip the Deephaven helm chart

To install Deephaven in Kubernetes, you need three tar files, which Deephaven will provide. Place the files in the same directory: a deephaven-helm package, a deephaven-containers package, and a bitnami-etcd-containers package.

ls *.gz
deephaven-helm-1.20240517.344.tar.gz
deephaven-containers-1.20240517.344.tar.gz
bitnami-etcd-containers-11.3.6.tar.gz

Unpack only the deephaven-helm package; it contains scripts as well as a docker directory tree with Dockerfiles for users who wish to build their own container images. The other two packages remain compressed and are loaded directly into Docker in the next step.

tar -xzf deephaven-helm-1.20240517.344.tar.gz
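If you want to see what the package contains before unpacking, tar can list an archive without extracting it. A small sketch, assuming the filename shown above (substitute your actual version):

```shell
# List the archive's contents without extracting it (-t lists, -z gunzips,
# -f names the file). Substitute your actual package version.
ARCHIVE=deephaven-helm-1.20240517.344.tar.gz
if [ -f "$ARCHIVE" ]; then
    tar -tzf "$ARCHIVE" | head
else
    echo "archive not found: $ARCHIVE"
fi
```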

Push the Deephaven and etcd images to your image repository

First, load the images into your local Docker image store.

docker image load -i deephaven-containers-1.20240517.344.tar.gz
docker image load -i bitnami-etcd-containers-11.3.6.tar.gz

Next, push the Deephaven images to your artifact repository using the pushAll.sh script found in the unzipped deephaven-helm package.

Note

Keep the full path to your artifact repository handy, as you will need it for another step. This example uses the placeholder image URL my-repo.dev/my-project/images/deephaven-quickstart, which you will replace with your own.

./deephaven-helm-1.20240517.344/docker/pushAll.sh --repository my-repo.dev/my-project/images/deephaven-quickstart --source-tag 1.20240517.344

Tag and push the etcd images to your artifact repository.

docker tag bitnami/etcd:3.5.21-debian-12-r5 my-repo.dev/my-project/images/bitnami/etcd:3.5.21-debian-12-r5
docker push my-repo.dev/my-project/images/bitnami/etcd:3.5.21-debian-12-r5

docker tag bitnami/os-shell:12-debian-12-r43 my-repo.dev/my-project/images/bitnami/os-shell:12-debian-12-r43
docker push my-repo.dev/my-project/images/bitnami/os-shell:12-debian-12-r43
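Both images follow the same tag-then-push pattern, so a small loop can generate the commands for any list of images. This is a dry-run sketch that only prints the commands (pipe the output to sh to run them); DEST_REPO uses this guide's placeholder URL:

```shell
# Print the docker tag/push commands for each image (dry run; pipe to sh to
# execute). DEST_REPO is a placeholder; substitute your own repository.
DEST_REPO=my-repo.dev/my-project/images
for img in bitnami/etcd:3.5.21-debian-12-r5 bitnami/os-shell:12-debian-12-r43; do
    echo docker tag "$img" "$DEST_REPO/$img"
    echo docker push "$DEST_REPO/$img"
done
```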

Change directory

The rest of the commands in this guide are run from the helm subdirectory of the unpacked helm distribution.

cd deephaven-helm-1.20240517.344/helm

Set up an NFS deployment

The Deephaven deployment needs a read-write-many (RWX) store. While that store is not part of the Deephaven helm chart itself (and you may choose to use an available one in your environment), the helm distribution contains the manifest files to easily create an NFS server for this purpose.

The default storage class is defined as premium-rwo, which is suitable for a GKE environment. If you are not deploying in GKE you must change the storageClassName in setupTools/nfs-server.yaml to a suitable value; for example, gp2 for EKS, or managed-csi for AKS. You can find the available storage classes in your environment with the command kubectl get storageclass.

Once the storage class is set, apply the files to create the deployment.

kubectl apply -f setupTools/nfs-server.yaml
kubectl apply -f setupTools/nfs-service.yaml

It may take a minute to start up the pod. You can check its status with kubectl get pods. Once it is running, the following commands prepare it for use.

MY_NFS_POD=$(kubectl get pods -l role=deephaven-nfs-server --no-headers -o custom-columns="NAME:.metadata.name")
kubectl cp setupTools/setup-nfs-minimal.sh $MY_NFS_POD:/setup-nfs-minimal.sh
kubectl exec $MY_NFS_POD -- bash -c "export SETUP_NFS_EXPORTS=y && chmod 755 /setup-nfs-minimal.sh && /setup-nfs-minimal.sh"
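Rather than re-running kubectl get pods by hand while the pod starts, a small retry helper can poll until a condition succeeds. This is a generic sketch (`wait_for` is a hypothetical helper; the kubectl condition in the comment is the one you would use for this step, and requires a cluster):

```shell
# wait_for: retry a command until it succeeds, up to a maximum number of
# attempts with a fixed delay between tries.
# Example condition for this step (cluster required):
#   wait_for 30 2 sh -c 'kubectl get pods -l role=deephaven-nfs-server --no-headers | grep -q Running'
wait_for() {
    attempts=$1; delay=$2; shift 2
    i=0
    while [ "$i" -lt "$attempts" ]; do
        if "$@"; then
            return 0
        fi
        i=$((i + 1))
        sleep "$delay"
    done
    echo "gave up after $attempts attempts" >&2
    return 1
}
```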

Install the etcd helm chart

The setup-etcd.sh script in the setupTools directory of the deephaven-helm package installs the etcd helm chart. The command below creates a single-node etcd deployment named dh-etcd without backup snapshots, which is sufficient for a trial installation. Note the etcd installation name, as it is needed later when installing the Deephaven helm chart.

# The repository arg will have /bitnami/etcd:3.5.21-debian-12-r5 appended to it to
# make the complete image url
./setupTools/setup-etcd.sh \
    --repository my-repo.dev/my-project/images \
    --etcd-name dh-etcd \
    --replica-count 1 \
    --no-backup

Deephaven depends on a Bitnami etcd helm deployment. As an alternative to the setup-etcd.sh script, you can install the chart directly and create a deployment named dh-etcd with these commands.

helm repo add bitnami https://charts.bitnami.com/bitnami
helm install dh-etcd bitnami/etcd --values setupTools/etcdValues.yaml --version "11.3.6"

Create a Kubernetes secret for the TLS certificate

With the TLS certificate and private key stored as files named tls.crt and tls.key, respectively, run this command to create a deephaven-tls secret from them.

kubectl create secret tls deephaven-tls --cert=tls.crt --key=tls.key
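Before creating the secret, it is worth confirming that the certificate and private key actually belong together, since a mismatch only surfaces later when the webserver starts. One way, sketched here for RSA or EC keys (`tls_pair_matches` is a hypothetical helper), is to compare the public key extracted from each file:

```shell
# tls_pair_matches: succeed only if the certificate and private key carry
# the same public key (works for RSA and EC keys).
tls_pair_matches() {
    crt=$1; key=$2
    [ -s "$crt" ] && [ -s "$key" ] &&
    [ "$(openssl x509 -in "$crt" -noout -pubkey 2>/dev/null)" = \
      "$(openssl pkey -in "$key" -pubout 2>/dev/null)" ]
}

tls_pair_matches tls.crt tls.key \
    && echo "tls.crt and tls.key match" \
    || echo "mismatch, or files not found"
```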

Install the Deephaven helm chart

You can now install the Deephaven helm chart with a command similar to the one below. Note that the last --set option, which sets the Envoy annotation networking.gke.io/load-balancer-type, is specific to GKE and has no effect with other Kubernetes providers.

Note

Deephaven's primary point of service is the Envoy service load balancer. You can optionally provide annotations for this service that affect how it operates.

  • You can set arbitrary properties on the Envoy service by prefacing the property with envoy.serviceAnnotations.
  • The example command sets a property named networking.gke.io/load-balancer-type. In GKE environments, this results in a non-external IP address for the Envoy service.
  • If you do not use an external IP address, you may need certain firewall rules to access the Envoy service in your Kubernetes cluster.
  • Omitting annotations can result in your cluster allocating an external IP address for the Envoy service.

helm upgrade --install dh-quickstart deephaven \
    --set etcd.release="dh-etcd" \
    --set global.storageClass="standard-rwo" \
    --set nfs.pvPrefix="dhqs" \
    --set nfs.server=$(kubectl get svc deephaven-nfs --no-headers -o custom-columns='IP:.spec.clusterIP') \
    --set image.repositoryUrl="my-repo.dev/my-project/images/deephaven-quickstart" \
    --set image.tag="1.20240517.344" \
    --set envoyFrontProxyUrl="dh-quickstart.mydomain.com" \
    --set "envoy.serviceAnnotations.networking\.gke\.io/load-balancer-type"="Internal" \
    --debug

You must provide your own values for these properties:

  • etcd.release: The name of the etcd release created earlier.
  • global.storageClass: An appropriate storage class for your Kubernetes environment that allows for auto-provisioning volumes.
  • nfs.pvPrefix: A prefix that will be prepended to pvc and pv objects.
  • nfs.server: The IP address of the NFS server. The example command finds it dynamically and can be left as is.
  • image.repositoryUrl: The URL of the container registry that holds the Deephaven Docker images.
  • image.tag: The tag for the Deephaven Docker images to use. This tag was set when the images were pushed.
  • envoyFrontProxyUrl: The hostname/DNS entry used for your Deephaven cluster. It should match the hostname in the TLS certificate created earlier.

The installation takes a couple of minutes. You can see progress by tailing the log output of the install job with the command:

kubectl logs -f job/dh-quickstart-pre-release-hook

Create a DNS entry for the application

A DNS entry is required for the hostname referenced by the TLS certificate. It should use the IP address listed under the EXTERNAL-IP column after running kubectl get svc envoy. The process for creating a DNS entry varies depending on your Kubernetes provider and/or infrastructure.

The following command example creates a DNS entry in a GCP environment.

MY_ENVOY_IP=$(kubectl get svc envoy -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
gcloud dns record-sets create dh-quickstart.mydomain.com --ttl=300 --type=A --zone=myzone --rrdatas=${MY_ENVOY_IP}
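Once the record exists, you can confirm it resolves before trying the UI; propagation can take a few minutes. A sketch using getent, which is available on most Linux systems (the hostname is this guide's example):

```shell
# Resolve the hostname to confirm the DNS record is live. Substitute your own
# hostname; "not resolving yet" usually means DNS has not propagated.
HOSTNAME_TO_CHECK=dh-quickstart.mydomain.com
getent hosts "$HOSTNAME_TO_CHECK" || echo "not resolving yet: $HOSTNAME_TO_CHECK"
```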

Set a password for the admin user

The following block contains two shell commands:

  • The first command opens a shell in the management shell pod.
  • The second command runs dhconfig to set the password. The example sets it to adminpw1, but you are encouraged to choose a more secure password of your own.

kubectl exec -it deploy/management-shell -- bash
/usr/illumon/latest/bin/dhconfig acl users set-password --name dh_admin --hashed-password $(openssl passwd -apr1 adminpw1)
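The -apr1 flag produces an Apache MD5-crypt hash, which is the format passed to --hashed-password above. You can inspect the format locally; a fixed salt is used below only to make the sketch reproducible (omit -salt in practice so openssl picks a random one):

```shell
# Hash the example password with a fixed 8-character salt; the output has the
# form $apr1$<salt>$<hash>. Omit -salt for a random salt in real use.
openssl passwd -apr1 -salt abcdefgh adminpw1
```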

Log in

You can now access the application at a URL similar to https://yourhost.domain.com:8000/iriside, using the hostname that matches your webserver TLS certificate.

The Deephaven login screen