Kubernetes

This guide covers how to get started with Deephaven quickly in a Kubernetes environment. Commands are shown for running in a Unix-like environment.

Note

This installation is intended for trial use only. System defaults, which are not suitable for many production environments, are used throughout this document. For complete coverage of the Deephaven Kubernetes installation, see the Kubernetes installation guide.

Prerequisites

Before deploying Deephaven Enterprise with Kubernetes, you need the following prerequisites:

  • A Kubernetes cluster, with a dedicated namespace created for the Deephaven installation.
  • The kubectl, docker, and helm command-line tools (a quick availability check is shown after this list).
  • An artifact repository to which Docker images can be pushed and from which Kubernetes pods can pull them.
  • Two Deephaven distributable packages, one containing a helm chart and another containing Docker images. This guide uses version 1.20240517.344 as an example, though yours may differ.
  • A TLS webserver certificate and the private key that corresponds to it. The webserver and certificate must meet Deephaven's requirements. The Deephaven installation includes a LoadBalancer service (Envoy) that is the entry point for the application. A DNS entry for the hostname associated with this certificate must be created after the installation.
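
Before continuing, you can confirm the command-line tools are installed and on your PATH (output varies by environment):

kubectl version --client
docker --version
helm version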

Start by creating your Kubernetes namespace and setting it to the default if you have not already done so.

kubectl create namespace <your-namespace>
kubectl config set-context --current --namespace <your-namespace>
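
To confirm the active namespace was set as expected, inspect the current context (a standard kubectl pattern, not specific to Deephaven):

kubectl config view --minify -o jsonpath='{..namespace}'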

Unzip the Deephaven helm chart

Two tar files are required to install Deephaven in Kubernetes: a deephaven-helm package and a deephaven-containers package. They should have been given to you by a Deephaven team member. Place both files in the same directory.

ls *.gz
deephaven-containers-1.20240517.344.tar.gz
deephaven-helm-1.20240517.344.tar.gz

Unpack only the deephaven-helm package; it contains scripts as well as a Docker directory tree with Dockerfiles for users who wish to build their own container images. The deephaven-containers package stays compressed, since Docker loads it directly in the next step.

tar -xzf deephaven-helm-1.20240517.344.tar.gz
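
After unpacking, you should see the directories used in the rest of this guide, including docker (which holds pushAll.sh) and helm (which holds the chart and setupTools):

ls deephaven-helm-1.20240517.344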

Push the Deephaven images to your image repository

First, load the images into your local Docker image store.

docker image load -i deephaven-containers-1.20240517.344.tar.gz
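
You can verify the load succeeded by filtering the local image list for the release tag (image names vary by release):

docker image ls | grep 1.20240517.344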

Next, push the images to your artifact repository using the pushAll.sh script found in the unzipped deephaven-helm package.

Note

Keep the full path to your artifact repository handy, as you will need it for another step. This example uses the placeholder image URL my-repo.dev/my-project/images/deephaven-quickstart; replace it with your own.

./deephaven-helm-1.20240517.344/docker/pushAll.sh --source-tag 1.20240517.344 my-repo.dev/my-project/images/deephaven-quickstart 1.20240517.344
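
A script like this typically retags each loaded image under your repository path and pushes it. A hypothetical manual equivalent for a single image (the deephaven/web name is illustrative only) would be:

docker tag deephaven/web:1.20240517.344 my-repo.dev/my-project/images/deephaven-quickstart/web:1.20240517.344
docker push my-repo.dev/my-project/images/deephaven-quickstart/web:1.20240517.344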

Set up an NFS deployment

The Deephaven deployment needs a read-write-many (RWX) store. While that store is not part of the Deephaven helm chart itself (and you may choose to use an available one in your environment), the helm distribution contains the manifest files to easily create an NFS server for this purpose.

The rest of the commands in this guide must be run from the helm subdirectory of the unpackaged helm distribution.

cd deephaven-helm-1.20240517.344/helm

The default storage class defined in setupTools/nfs-server.yaml is premium-rwo, which is suitable for a GKE environment. For other environments, change it accordingly: for example, gp2 for EKS or managed-csi for AKS.
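
For example, assuming the storage class appears as a storageClassName value in the manifest, you could switch it for EKS with a one-line edit:

sed -i 's/premium-rwo/gp2/' setupTools/nfs-server.yaml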

Once the storage class is set, apply the files to create the deployment.

kubectl apply -f setupTools/nfs-server.yaml
kubectl apply -f setupTools/nfs-service.yaml

The pod may take a minute to start up. You can check its status as shown below; once it is Running, the commands that follow prepare it for use.
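
For example, you can watch the pod until it reaches the Running state, using the same role label the preparation commands below rely on:

kubectl get pods -l role=deephaven-nfs-server --watch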

MY_NFS_POD=$(kubectl get pods -l role=deephaven-nfs-server --no-headers -o custom-columns="NAME:.metadata.name")
kubectl cp setupTools/setup-nfs-minimal.sh $MY_NFS_POD:/setup-nfs-minimal.sh
kubectl exec $MY_NFS_POD -- bash -c "export SETUP_NFS_EXPORTS=y && chmod 755 /setup-nfs-minimal.sh && /setup-nfs-minimal.sh"

Install the etcd helm chart

Deephaven depends on a Bitnami etcd Helm deployment. Set up a deployment named dh-etcd with these commands.

helm repo add bitnami https://charts.bitnami.com/bitnami
helm install dh-etcd bitnami/etcd --values setupTools/etcdValues.yaml
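
Before moving on, you can confirm the release deployed successfully:

helm status dh-etcd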

Create a Kubernetes secret for the TLS certificate

With the TLS certificate and private key stored as files named tls.crt and tls.key, respectively, run this command to create a deephaven-tls secret from them.

kubectl create secret tls deephaven-tls --cert=tls.crt --key=tls.key
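
To verify the secret was created and contains both entries:

kubectl describe secret deephaven-tls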

Install the Deephaven helm chart

You can now install the Deephaven helm chart with the command below.

helm upgrade --install dh-quickstart deephaven \
    --set etcd.release="dh-etcd" \
    --set global.storageClass="standard-rwo" \
    --set nfs.pvPrefix="dhqs" \
    --set nfs.server=$(kubectl get svc deephaven-nfs --no-headers -o custom-columns='IP:.spec.clusterIP') \
    --set image.repositoryUrl="my-repo.dev/my-project/images/deephaven-quickstart" \
    --set image.tag="1.20240517.344" \
    --set envoyFrontProxyUrl="dh-quickstart.mydomain.com" \
    --set "envoy.serviceAnnotations.networking\.gke\.io/load-balancer-type"="Internal" \
    --debug

Note

  • If you set an annotation property whose name contains a . with --set, you must escape each . with a \ and enclose the entire value in double quotes.
  • Deephaven's primary point of service is the Envoy service load balancer. You can optionally provide annotations for this service, which can affect how it operates.
  • Omitting annotations can result in your cluster allocating an external IP address for the Envoy service.
  • If you do not use an external IP address, you may need certain firewall rules to access the Envoy service.
  • You can set arbitrary properties on the Envoy service by prefacing the property with envoy.serviceAnnotations.
  • The example command sets a property named networking.gke.io/load-balancer-type. In GKE environments, this results in an internal (non-external) IP address for the Envoy service.

You must provide your own values for these properties:

  • etcd.release: The name of the etcd release created earlier.
  • global.storageClass: An appropriate storage class for your Kubernetes environment that allows for auto-provisioning volumes.
  • nfs.pvPrefix: A prefix prepended to the names of the PersistentVolumeClaim (PVC) and PersistentVolume (PV) objects.
  • nfs.server: The IP address of the NFS server. The example above uses a command to find it dynamically and can be left as is.
  • image.repositoryUrl: The URL of the container registry that holds the Deephaven Docker images.
  • image.tag: The tag for the Deephaven Docker images to use. This tag was set when the images were pushed.
  • envoyFrontProxyUrl: The hostname/DNS entry used for your Deephaven cluster. It should match the hostname in the TLS certificate created earlier.

The installation takes a couple of minutes. You can see progress by tailing the log output of the install job with the command:

kubectl logs -f job/dh-quickstart-pre-release-hook
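
When the hook completes, the Deephaven pods should eventually all reach the Running state:

kubectl get pods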

Create a DNS entry for the application

A DNS entry is required for the hostname referenced by the TLS certificate. It should use the IP address listed under the EXTERNAL-IP column after running kubectl get svc envoy. The process for creating a DNS entry varies depending on your Kubernetes provider and/or infrastructure.

The following command creates a DNS entry in a GCP environment.

MY_ENVOY_IP=$(kubectl get svc envoy -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
gcloud dns record-sets create dh-quickstart.mydomain.com --ttl=300 --type=A --zone=myzone --rrdatas=${MY_ENVOY_IP}
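
Whichever provider you use, you can confirm the record resolves to the Envoy IP before logging in (DNS propagation may take a few minutes):

nslookup dh-quickstart.mydomain.com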

Set a password for the admin user

The following block contains two shell commands:

  • The first command opens a shell in the management shell pod.
  • The second command runs dhconfig to set the password. The example sets it to adminpw1, but you are encouraged to choose a more secure value.

kubectl exec -it deploy/management-shell -- bash
/usr/illumon/latest/bin/dhconfig acl users set-password --name iris --hashed-password $(openssl passwd -apr1 adminpw1)
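
Once the password is set, exit the management shell pod:

exit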

Log in

You can now access the application at a URL similar to https://yourhost.domain.com:8000/iriside, using the hostname that matches your webserver TLS certificate.

Note

The URL example above uses port 8000. Use port 8000 for servers with Envoy and port 8123 for servers without Envoy.
