Install a Podman deployment
Deephaven's Podman-based distribution allows for very quick deployment of a Deephaven Enterprise cluster for test or production purposes. Podman is an open-source container management system that is largely compatible with Docker containers and syntax. The Deephaven Podman deployment can be used for single-node and multi-node installations, as well as with pre-built or custom-built images and deployment tooling.
The Deephaven Podman distribution is supported on Linux systems. macOS and Windows are not supported at this time. For a full listing of supported OS versions (including supported dependency versions), refer to the Supported Versions page.
Note
The Podman Quickstart provides simplified instructions. This guide covers all details of Deephaven Podman installation, including more complex configurations than are covered in the Quickstart.
Deephaven offers two container-based deployment options: Podman and Kubernetes. Podman was chosen over Docker because it allows containers to run without requiring elevated user privileges and supports systemd, a feature needed in future Deephaven releases. Compared to Kubernetes, Podman provides a simpler deployment process for single or multiple hosts with fewer dependencies. However, Kubernetes excels in dynamic resource allocation and resilience through stateful sets, making it suitable for more complex and robust service management.
Pod user accounts
The Deephaven Podman deployment option supports two account handling configurations:
- Single-user - All processes within the pod run as root, which maps back to the account running the pod on the host system.
- Multi-user - Processes within the pod run under pod-local user and group accounts that match those used in other deployment models, like bare metal and Kubernetes. In this case, subuid and subgid are used to translate pod accounts and groups to host-level accounts and groups.
The single-user mode is easier to configure but, because all processes run with elevated privileges inside the pod, it is recommended only in environments where every user with access to Deephaven also has superuser access. The multi-user mode is more complex to configure, especially when granting Deephaven processes access to external storage, but it allows better stratification of user access rights within Deephaven.
Caution
Switching between single-user and multi-user deployments while maintaining configuration is not supported.
Installation process
At a high level, the installation process consists of:
- Configuring prerequisites on one or more host machines, including creating the VOLUME_BASE_DIR.
- Building images or downloading and loading pre-built images.
- Configuring any additional host-managed volumes and external storage connections for persistent data storage and sharing.
- Running the start_command.sh script to start each node of the deployment.
Prerequisites
General prerequisites
For Deephaven Podman deployments in general, you need:
- One or more machines or VMs with Podman installed.
- Each machine should have at least 24GB of RAM dedicated to running Deephaven on Podman. This is the bare minimum required to reliably run a Deephaven instance and execute some small queries for a single user. More memory is strongly recommended to ensure reliability.
- If you use the deployment for larger-scale testing or production, each machine should have 64GB or more of RAM dedicated to running Deephaven on Podman.
- The machine(s) should also be resolvable by DNS.
- The Deephaven Podman distribution package.
- A Web server certificate to use for the installation.
- A pre-built Deephaven Podman image package. If you want to build your own images, refer to the building Podman images section.
To extract the Deephaven Podman distribution archive (update this with the version you are using):
# Extract the podman deployment tar
tar -xf deephaven-podman-1.20240517.344.tar.gz
cd deephaven-podman-1.20240517.344
Set up the VOLUME_BASE_DIR
The Deephaven Podman distribution requires a directory structure on each host to persist data and configuration outside of the pod. For multi-node deployments, parts of this directory structure must be shared between the nodes.
In this example, /container-test is used as the root of this directory structure. The user running Podman must have permissions to create this directory and its subdirectories.
export VOLUME_BASE_DIR=/container-test
mkdir -pv "$VOLUME_BASE_DIR"/{db-intraday,db-systems,deephaven-tls,deephaven-shared-config,deephaven-etcd}
chcon -vR -u system_u -r object_r -t container_file_t "${VOLUME_BASE_DIR:-/container-test}"/{db-intraday,db-systems,deephaven-tls,deephaven-shared-config,deephaven-etcd}
The path to this directory structure is exported as VOLUME_BASE_DIR. This environment variable is used in the start_command.sh examples in this document when starting the cluster.
Important
export VOLUME_BASE_DIR=/<base_directory_for_external_configuration_and_data> should be added to the user's profile, or to some sort of start_command.sh wrapper script; otherwise it will need to be re-executed each time the user logs in.
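For example, assuming a bash login shell, the export can be appended to the Podman user's profile:
# Persist VOLUME_BASE_DIR across logins (adjust the path for your environment)
echo 'export VOLUME_BASE_DIR=/container-test' >> ~/.bash_profile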
User setup requirements
It is strongly recommended that the user running Podman commands be the user that started the login session. In other words, executing sudo su -- <podman_user_name> and then executing Podman commands as the impersonated user is not recommended and usually will not work.
If the pod(s) will be running after the user logs out, linger must be enabled for the Podman user:
sudo loginctl enable-linger <uid_of_podman_user>
SELinux
If you are running on Linux machines with SELinux enabled, you may need sudo rights on the machines to set the necessary SELinux flags.
You can check whether SELinux is enabled with sestatus. If SELinux is installed and active, this will report SELinux status: enabled. If it shows disabled, or the command is not recognized, then SELinux is not in effect, and chcon statements are not needed.
ID translation
The UID of the user account running Podman (through the start_command.sh script) must have appropriate entries in /etc/subuid and /etc/subgid. These files configure how user and group accounts in the pod translate to user and group accounts on the host. These entries are required even when running the single-user variant of the Deephaven Podman deployment.
Architecture
Deephaven's Podman deployment model uses four images:
- dh-base - contains the base Linux system and several prerequisites.
- dh-stage - contains staged-but-not-initialized etcd and Deephaven installations.
- dh-infra - contains the scripts required to initialize a cluster and start the required processes for an infrastructure node.
- dh-queryserver - also built from dh-stage (not from dh-infra); contains the scripts required to start the required processes for a query server node.
The images are used in conjunction with the start_command.sh script and a directory structure on the host.
The directory structure is based at a path represented by the VOLUME_BASE_DIR environment variable used in Deephaven Podman scripts. This directory structure allows configuration to be shared between nodes in the cluster, and persists configuration and data so Deephaven Podman deployments can be upgraded or redeployed (if needed) without losing their configuration.
The start_command.sh script validates arguments and creates Podman command lines to manage containers and pods. When it starts the dh-infra pod for the deployment, the entrypoint of the image's Containerfile effectively runs an installation or upgrade of Deephaven within the pod. Configuration for the cluster is persisted on the host in volume subdirectories. Various types of Deephaven data can also be persisted in mounted volumes. Because configuration (and, usually, data) is persisted, the containerized deployment can be restarted with different Podman options while maintaining state, and can also be restarted with different image versions in order to upgrade the deployment.
Volumes, files, and configuration
Volumes
The following directories (inside the containers) are configured as Podman volumes:
Path | Shared across pods? | Pod types | Purpose |
---|---|---|---|
/db/Intraday | NO | Infra only | (Optional) Intraday data. Only mounted on infra pod. The underlying storage for this volume should not be NFS. |
/db/Systems | YES | All (infra/query) | Historical data. This is shared across all pods. The underlying storage for this volume is typically NFS. |
/db/Users | YES | All (infra/query) | (Optional) Direct user table data. This volume may be stored on NFS. |
/db/IntradayUser | NO | Infra only | (Optional) Centrally managed user table data. Only mounted on infra pod. The underlying storage for this volume should not be NFS. |
/deephaven-etcd | NO | Infra only | The etcd data and the etcd server configuration. The /var/lib/etcd and /etc/etcd directories are linked to /deephaven-etcd . Only mounted on infra pod. Note that etcd contains Deephaven schemas, Persistent Queries, access controls, and configuration files (i.e. .prop files). |
/deephaven-client-files | NO | Infra only | Staging files (e.g., on NFS) for clients, so that clients do not need to run DeephavenUpdater themselves. |
/deephaven-shared-config | YES | All (infra/query) | Deephaven configuration files and required keys/certificates. There are links in /etc/sysconfig/deephaven to individual shared configuration directories. See below for additional details. |
/illumon-d-calendars | YES | All (infra/query) | (Optional) Custom calendar files. This should be shared across pods. |
/illumon-d-java_lib | YES* | All (infra/query) | (Optional) Custom JAR files and other files to persist across restarts. *This can optionally be shared across pods, depending on whether all pods should have access to the same contents. |
/source-tls | NO | All (infra/query) | TLS key and certificate (tls.key and tls.crt ) for the host, plus the CA (ca.crt or truststore.p12 /truststore_passphrase ) for all host certificates. |
/var-log-deephaven | NO | All (infra/query) | (Optional) Log files. This volume may be stored on NFS. Note that the binary log directory, /var/log/deephaven/binlogs , is a link to /var-log-deephaven-binlogs . |
/var-log-deephaven-binlogs | NO | All (infra/query) | (Optional) Binary Log files. This volume must not be stored on NFS. |
Additional volumes can be mounted using the --additional-volume flag to start_command.sh. Multiple instances of this flag can be used.
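For example, a hypothetical flag that could be appended to one of the start_command.sh invocations shown later in this guide (the host path and mount point are placeholders):
--additional-volume /opt/my-custom-data:/custom-data:z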
Two special volumes can be attached to persist, and possibly share, the contents of illumon.d/java_lib and illumon.d/calendars.
When preparing host directories for these volumes, ensure that the directory and its contents are owned by the user that will be starting the container, as the initialization process changes their ownership to the $DH_ADMIN_USER:$DH_ADMIN_GROUP configured in the container. If SELinux is in use, the volumes should also have their SELinux labels set with:
sudo chcon -vR -u system_u -r object_r -t container_file_t "${VOLUME_BASE_DIR:?}"
Note
Tailer config XML files for custom tailer configurations can be persisted by adding a volume for illumon.d/java_lib using the --java-lib-volume start_command.sh option, and putting the tailer config files into a JAR file using jar cf. This is necessary because the tailer process only looks for JAR files in the illumon.d/java_lib path, but it will find other file types contained within those JAR files.
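A minimal sketch of this, assuming the tailer XML files are in the current directory and the java_lib volume is backed by a hypothetical ${VOLUME_BASE_DIR}/java-lib host directory:
# Package the custom tailer configuration XML files into a JAR
jar cf tailerConfigs.jar *.xml
# Copy the JAR into the host directory backing the --java-lib-volume mount
cp tailerConfigs.jar "${VOLUME_BASE_DIR:?}/java-lib/"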
Shared configuration
The /etc/sysconfig/deephaven directory contains some items that must be shared across nodes in a Deephaven cluster (i.e., pods in a containerized deployment with Podman; servers in a non-containerized deployment). In a non-containerized Deephaven installation, the appropriate files are automatically copied between nodes during the installation process. Since this is not feasible in a containerized deployment, the required shared files are stored in a directory mounted as a volume by all containers in the cluster. This shared configuration volume is mounted as read-write by all containers.
Build images
Prerequisites to build images
- Podman installed.
- A Deephaven product archive.
- A Deephaven installer jar.
- A Deephaven Core+ archive.
- An etcd installation archive. (etcd 3.5.12 or later is recommended.)
- Optionally, an Envoy installation archive.
  - The version support matrix shows supported Envoy versions.
  - Configuring images with Envoy allows all traffic between users and the cluster to flow over a single port.
- Internet connection or alternate repository configuration so the podman build, dnf, and pip commands can download needed packages during the build process.
The build_images.sh script
Below is an example of how to build the container images. The example command must be run from the extracted deephaven-podman-<version> directory (e.g., deephaven-podman-1.20240517.245); this is the directory that contains the build_images.sh and start_command.sh scripts.
When running build_images.sh, you must specify the location of the Deephaven installation artifacts to use. These can be specified via URLs or included in the dh-base/ subdirectory. See dh-base build arguments for more details.
# An example with the required files having been placed in the dh-base directory
├── dh-base
│ ├── Containerfile
│ ├── deephaven-coreplus-0.37.4-1.20240517.344-jdk17.tgz
│ ├── deephaven-enterprise-jdk17-1.20240517.344.tar.gz
│ ├── etcd-v3.5.13-linux-amd64.tar.gz
│ └── Installer-1.20240517.344.jar
If Envoy is configured, the Envoy binary must be placed in the dh-infra/
subdirectory. Envoy requires Rocky 9 or RHEL 9 as the base image family.
# Build and tag the images. The base image can also be specified with the BASE_IMAGE environment variable:
./build_images.sh \
--dh-file deephaven-enterprise-jdk17-1.20240517.245.tar.gz \
--etcd-file etcd-v3.5.13-linux-amd64.tar.gz \
--installer-file Installer-1.20240517.245.jar \
--coreplus-tar-file deephaven-coreplus-0.36.1-1.20240517.245-jdk17.tgz \
--jdk17 \
--python-package python3-devel.x86_64 \
--base-image docker.io/amd64/rockylinux:9.3-minimal
In this example, the prerequisite Deephaven installation archive, Core+ installation archive, etcd installation archive, and Deephaven installer JAR have already been downloaded and copied to the dh-base directory. This command line also specifies a particular Python package to install, a particular Rocky 9 base image to use, and the Java 17 JDK.
The images can be built by running build_images.sh, which builds the dh-base/dh-stage/dh-infra/dh-queryserver images and tags them with those same names.
Built images can be viewed with podman image ls:
[user@podman-host dh-podman]$ podman image ls
REPOSITORY TAG IMAGE ID CREATED SIZE
localhost/dh-queryserver latest cd1b83b47cea 4 days ago 5.81 GB
localhost/dh-infra latest 821738b91c07 4 days ago 5.96 GB
localhost/dh-stage latest 2bf8a0c9312c 4 days ago 5.81 GB
localhost/dh-base latest d091604e94bb 4 days ago 2.29 GB
Other tested base image families are Rocky 9, RHEL 8, and RHEL 9. Using something other than these is not currently supported and may require significant changes to Containerfiles and to dh-init.sh and related scripts.
Plugins
Plugins can be included in the images by placing the plugin archives into dh-stage/plugins/
. The plugins to be installed must be listed in the plugins manifest file, dh-stage/plugins/plugins.txt
. The SAML plug-in is built into the product itself.
The plugins manifest file contains entries with the following format:
<plugin-target-dir>=<plugin-file>
<plugin-target-dir>=<plugin-file>=<properties-file>
Field | Description |
---|---|
<plugin-target-dir> | The target directory in /etc/sysconfig/deephaven/plugins into which this plugin will be installed. This must match the plugin's expected directory name, as included in the scripts built into the plugin archive (e.g., samlAuth for the DH SAML plugin). |
<plugin-file> | The filename of the plugin tar archive in dh-stage/plugins/ . |
<properties-file> (Optional) | An optional file in dh-stage/plugins/ that includes additional properties to add to the system after installing the plugin. |
If the <properties-file> is specified, it is appended to iris-environment.prop after the plugin has been installed. (The properties are not added until after the plugin has been installed, to ensure that the plugin's files are available on the classpath.) A plugin's properties file is only added to the configuration the first time a cluster is initialized; if the container detects an existing configuration during initialization, the properties will not be appended. This prevents duplication of the properties appended to iris-environment.prop.
Lines beginning with # are treated as comments and are ignored.
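For example, a plugins.txt with hypothetical archive and properties file names might contain:
# Installs into /etc/sysconfig/deephaven/plugins/myPlugin
myPlugin=my-plugin-1.2.3.tar.gz
# Installs into /etc/sysconfig/deephaven/plugins/otherPlugin and appends extra properties
otherPlugin=other-plugin-2.0.0.tar.gz=other-plugin-extra.prop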
External storage
Paths on the host that are connected to volumes can, in turn, be mapped or linked to network storage. NFS should not be used for intraday data (/db/Intraday and /db/IntradayUser) or for binary log files such as /var/log/deephaven/binlogs. However, NFS, or a similar shareable network storage solution, is required to share historical data under /db/Systems across nodes of a cluster, and can also be used to share historical data between multiple Deephaven installations.
Requirements for access to storage
Single-user installation
When running a single-user installation, all access operations originating in the pod run on the host in the context of the user who ran start_command.sh. Inside the pod, this user is root:root (the root user with the root default group). For such installations, Deephaven services in the pod have access to any host resources that the Podman user can access. In many cases, full-control access is required for the running user, because Deephaven performs operations like renaming directories during merges and replacing links when rolling log files.
Multi-user installation
In a multi-user installation, separate user and group accounts are used for different services and tasks inside the pod. These users and groups are translated to host-context UIDs and GIDs based on the subuid and subgid ranges configured on the host.
The translated UIDs and GIDs must be granted permissions to give the pod account access to resources attached to the host. The subuid and subgid files have entries for the name of each user who has authority to translate accounts from containers, along with an offset and a range of IDs to allocate.
For example, an entry like podman_user:100000:10000 allows podman_user to run containers that use their own internal user IDs, with internal IDs as high as 10000. The translated ID on the host is the internal UID plus the offset, minus one.
With this configuration, the default Deephaven dbmerge account, which has the default ID of 9001, is represented on the host as UID 109000. When attaching volumes that are hosted off the host (e.g., NFS or SAN), check with your storage admin about how best to grant the needed permissions to your translated user and group accounts. One option that makes this easier to manage is creating local accounts on the host to represent the translated IDs.
Linux useradd and groupadd commands allow you to specify which UID or GID to use when creating an account. Although this is theoretically possible on macOS systems, support for it under the newer sysadminctl command is unclear. Check with your Apple support team before creating accounts with specific IDs on a macOS system.
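For example, local accounts representing the translated dbmerge IDs could be created as follows (the account name is illustrative, and the IDs follow the podman_user:100000:10000 mapping above):
# Create a host-level group and user matching the translated dbmerge IDs (9001 -> 109000)
sudo groupadd -g 109000 dbmerge-pod
sudo useradd -u 109000 -g 109000 -M -s /sbin/nologin dbmerge-pod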
Start the cluster
The start_command.sh script creates a new pod with the required ports exposed, adds a container created from the appropriate image (either dh-infra or dh-queryserver), and starts the container. If another pod of the same name already exists, it is stopped and removed before the new pod is created.
Podman itself also has podman pod stop and podman pod start commands. These can be used to effectively pause and resume Podman pods, and are relatively quick operations compared to start_command.sh. These commands may be used as often as needed if you don't want to run the cluster continuously.
For example:
podman pod stop dh-infra-pod
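To resume the pod later:
podman pod start dh-infra-pod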
The start_command.sh script is normally used only when starting a new Deephaven Podman cluster, changing start_command.sh options (such as mounted volumes), or upgrading the cluster. When the start_command.sh script starts a pod, the pod's ENTRYPOINT runs, which invokes dh-init.sh. This script effectively runs the Deephaven installation/upgrade process for the node; in the case of an infrastructure node, it also runs the installation process for the cluster (setting up shared configuration, etc.).
Important
The start_command.sh script is meant to be used only when starting a new Deephaven Podman cluster, changing start_command.sh options, or upgrading the cluster. Do not use start_command.sh to restart an already-configured and working cluster that has been stopped with podman pod stop. Use podman pod start for this. podman pod stop and podman pod start are the two built-in Podman commands meant for temporarily stopping and resuming pod workloads.
Note
The dig utility is used to resolve the IPv4 address from the URL name of the Deephaven server (passed as -h to start_command.sh). This will fail on systems where dig is not installed. A workaround to installing dig is to set the POD_IP environment variable before calling start_command.sh:
export POD_IP=<IPv4 address of the Deephaven cluster>
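For instance, on a host without dig, the address can be resolved with getent instead (a sketch; substitute your server's DNS name):
# Resolve the IPv4 address via the system resolver and export it for start_command.sh
export POD_IP=$(getent ahostsv4 mydeephaven.myorg.com | awk 'NR==1 {print $1}')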
Single-node clusters
Single-node clusters (where one node runs all the Deephaven services) do not need shared storage and can be started with a single invocation of start_command.sh.
This example runs from the directory where start_command.sh has been extracted:
POD_HOSTNAME=<URL name of the DH server>
./start_command.sh \
-e 8000 \
-h "${POD_HOSTNAME:?}" \
-t infra -T "${VOLUME_BASE_DIR:?}/deephaven-tls/" \
-d \
-n dh-infra \
-s \
-I "${VOLUME_BASE_DIR:?}/db-intraday/" \
-H "${VOLUME_BASE_DIR:?}/db-systems/"
Where the arguments are:
- -e 8000 - connect and forward host port 8000 to the pod's Envoy service.
- -h <URL name of the DH server> - the name by which you will connect to the server, which will be the same as the name in the Web certificate, such as mydeephaven.myorg.com.
- -t infra - the type of image to run, infra in this case.
- -T "${VOLUME_BASE_DIR:?}/deephaven-tls/" - path to the Web server certificate files.
- -d - delete any existing pod(s) running for this container name.
- -n dh-infra - the container name, and the base name to use for the main pod (dh-infra-pod, in this case).
- -s - single-node Envoy instance, so only forward -e 8000 into the container.
- -H "${VOLUME_BASE_DIR:?}/db-systems/" - mount the db-systems host directory to /db/Systems in the container for persistent storage of historical data.
- -I "${VOLUME_BASE_DIR:?}/db-intraday/" - mount the db-intraday host directory to /db/Intraday in the container for persistent storage of intraday data.
Without Envoy, the -e 8000 and -s options would be omitted.
Multi-node clusters
Multi-node clusters need the "${VOLUME_BASE_DIR:?}"/deephaven-shared-config path to be available to all nodes of the cluster.
To start a multi-node cluster, first execute the start_command.sh script for each query node, and then for the infra node, as sketched below. Query pods wait to initialize until they see a COMPLETE status value in the cluster-status shared configuration file.
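This sketch uses placeholder hostnames; Envoy-related options are omitted, and the exact set of volume flags depends on your deployment:
# On each query node first (query pods wait for the infra node to publish COMPLETE)
./start_command.sh -t query -n dh-query \
  -h query1.myorg.com \
  -T "${VOLUME_BASE_DIR:?}/deephaven-tls/" \
  -C "${VOLUME_BASE_DIR:?}/deephaven-shared-config/" \
  -H "${VOLUME_BASE_DIR:?}/db-systems/"

# Then on the infra node, naming each query server with -q
./start_command.sh -t infra -n dh-infra \
  -h infra.myorg.com \
  -q query1.myorg.com \
  -T "${VOLUME_BASE_DIR:?}/deephaven-tls/" \
  -C "${VOLUME_BASE_DIR:?}/deephaven-shared-config/" \
  -H "${VOLUME_BASE_DIR:?}/db-systems/" \
  -I "${VOLUME_BASE_DIR:?}/db-intraday/"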
start_command.sh details
The script takes the following arguments:
Argument | Default | Description |
---|---|---|
--additional-volume <options> | N/A | (Optional) Additional volume specifications. These flags are passed directly to the podman create command as --volume=<options> . You may need to include the :z option to configure SELinux labels appropriately. Example: --additional-volume /opt/my-custom-volume:/custom-volume:z |
-a, --envoy-admin <port> | N/A | (Optional) Enable Envoy admin on specified port within the container. Only valid for "infra" pods where envoy is enabled. |
-d | N/A | (Optional) Whether to delete any existing pod with the same name. (If not specified, an existing stopped pod is restarted.) See -n for controlling container and pod name. |
--calendars-volume <calendars dir> | N/A (/etc/sysconfig/illumon.d/calendars is stored in the container) | Set the location of the volume where customer calendar definitions are stored. |
-e, --envoy <port> | N/A | (Optional) Use Envoy on specified port within the container. Only valid for "infra" pods. |
-h <hostname> | N/A | Hostname for the pod being created. The IP address that <hostname> resolves to must be on this host. |
-i, --internal-ip <ip address> | 127.0.0.1 | (Optional) Internal IPv4 address of the container. The default is 127.0.0.1 (only used for single-node Envoy deployments). |
--java-lib-volume <java_lib dir> | N/A (/etc/sysconfig/illumon.d/java_lib is stored in the container) | Set the location of the volume where customer class path objects are stored. |
-n <container name> | dh-infra /dh-query | (Optional) The container name. Defaults to dh-infra when type is "infra" and dh-query when type is "query". The pod name will be <container-name>-pod . |
--nohup | N/A | (Optional) Use nohup when invoking podman pod start. This can be helpful to prevent the OS from removing the Podman process after the launching user has logged off the host. |
-p | N/A | (Optional) Additional port numbers to expose (e.g., -p 9032 to expose port 9032 for the SAML ACS). Can be specified multiple times. |
-q | N/A | (Optional) Additional query server hostname. Only applies to -t infra . Can be specified multiple times. |
-s, --single-node-envoy | Multi-node | (Optional) Only map Envoy port(s) into the container. This produces a single-node deployment where only the Envoy port(s) are redirected to the container. This also adds the FQDN and internal IP address of the pod to the container's /etc/hosts file. |
-t <infra | query> | N/A | Container type (either "infra" or "query"). |
-C <config volume dir> | "${VOLUME_BASE_DIR:?}"/deephaven-shared-config | (Optional) The directory (on the podman host) containing shared configuration files. |
-E <etcd dir> | "${VOLUME_BASE_DIR:?}"/deephaven-etcd | (Optional) The directory (on the podman host) containing etcd data. (Note that Deephaven schema, routing, and property files are centralized in etcd and shared across nodes/services). |
-F <client files dir> | N/A | (Optional) The directory (on the Podman host) where the DeephavenUpdater should stage files for clients. |
-H <hist data dir> | "${VOLUME_BASE_DIR:?}"/db-systems | (Optional) The directory (on the Podman host) where historical data is stored. |
-I, --intraday-data-volume <intraday data dir> | N/A (/db/Intraday is stored in the container) | (Optional) The directory (on the Podman host) where intraday data is stored. |
-L <DH logs dir> | N/A (/var/log/deephaven is stored in the container) | (Optional) The directory (on the Podman host) where Deephaven logs are stored. |
-R, --container-registry <registry> | localhost | (Optional) The container registry used when retrieving images to run. |
-T, --tls-volume <TLS dir> | "${VOLUME_BASE_DIR:?}"/deephaven-tls | (Optional) The directory (on the Podman host) containing TLS certificates. |
-U, --users-volume <users data dir> | N/A (/db/Users is stored in the container) | (Optional) The directory (on the Podman host) where direct user table data is stored. |
-W, --intradayuser-volume <intradayuser data dir> | N/A (/db/IntradayUser is stored in the container) | (Optional) The directory (on the Podman host) where centrally managed user table data is stored. |
The start_command.sh script also reads the following environment variables:
Variable | Default | Description |
---|---|---|
IRIS_ACCT_PASSWORD | N/A | The password set for the iris account after Deephaven is initialized and started. |
VOLUME_BASE_DIR | /container-test | The base directory from which volumes will be mounted by default. Must be owned by the user starting the containers. For simplicity with SELinux, it is preferable that this be outside the home directory. |
TLS_CERT_FILENAME | tls.crt | The filename (in the TLS volume) containing the certificate for this host. |
TLS_KEY_FILENAME | tls.key | The filename (in the TLS volume) containing the key for this host. |
TLS_CA_FILENAME | ca.crt | The filename (in the TLS volume) containing the CA certificate used to sign certificates for hosts in this cluster. |
DISABLE_SELINUX_CLIENT_FILES | false | If set to true , the :z option will not be set when creating the client files volume. |
DISABLE_SELINUX_ETCD | false | If set to true , the :z option will not be set when creating the etcd volume. |
DISABLE_SELINUX_INTRADAY_DATA | false | If set to true , the :z option will not be set when creating the intraday data volume. |
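For example, these variables can be exported in the shell before invoking start_command.sh (the password is a placeholder; the TLS filenames shown are the defaults):
export VOLUME_BASE_DIR=/container-test
export IRIS_ACCT_PASSWORD='choose-a-strong-initial-password'
export TLS_CERT_FILENAME=tls.crt
export TLS_KEY_FILENAME=tls.key
export TLS_CA_FILENAME=ca.crt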
Changing properties
Most configuration of Deephaven Podman deployments is the same as for traditional deployments. Tasks like editing properties files or routing YAML are accomplished using the dhconfig tool. A key point to remember is that Deephaven-specific configuration changes do not require restarting the entire system; configuration changes that affect the behavior of a Deephaven service typically require only the service in question to be restarted with dh_monit.
For changes that affect configuration used by multiple services, it is recommended to run /usr/illumon/latest/bin/dh_monit restart all, and then monitor service states as they restart with watch /usr/illumon/latest/bin/dh_monit summary.
Most property changes that affect only query workers will be in effect for any workers started after the change has been imported, without any service restarts needed.
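As a sketch of that workflow, run from inside the infra pod (e.g., via podman exec -it dh-infra bash); the property file and working directory here are illustrative:
# Export the current properties to a working directory, edit, and re-import
/usr/illumon/latest/bin/dhconfig properties export -f iris-environment.prop -d /tmp
vi /tmp/iris-environment.prop
/usr/illumon/latest/bin/dhconfig properties import -f iris-environment.prop -d /tmp
# Restart affected services and watch them come back up
/usr/illumon/latest/bin/dh_monit restart all
watch /usr/illumon/latest/bin/dh_monit summary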
Caution
Do not use start_command.sh to restart a Deephaven Podman deployment when the configuration is in a state that prevents any Deephaven services from starting or staying reliably in a running state. start_command.sh effectively runs an upgrade of the system. If configuration problems prevent proper service startup, that upgrade will fail, leaving the cluster in an unusable state.
Add nodes
Nodes can be added by updating the iris-endpoints.prop file to include the new node. See the configuration changes to define a new pod section in the appendix for an example of the required configuration changes.
After importing the updated iris-endpoints.prop file, the new node can be used after the Controller process's configuration is reloaded:
/usr/illumon/latest/bin/iris controller_tool --reload
A new node's certificates must be signed by the same CA provided when creating the infra node, as with the existing nodes.
Appendix
Images
Images can be custom built using the build_images.sh script, or pre-built images can be downloaded from Deephaven.
dh-base
The dh-base image is built from a base Linux image (e.g., docker.io/redhat/ubi8-minimal or docker.io/amd64/rockylinux:8.7-minimal) and contains several installation files (copied in from the dh-base directory) as well as prerequisites installed via dnf.
Base image build arguments
The following build arguments influence how dh-base is built:
Example --build-arg command | Description |
---|---|
--build-arg BASE_IMAGE=<my-image-name> | Sets the base image from which the dh-base image is built. (Default is docker.io/amd64/rockylinux:8.7-minimal .) |
--build-arg PACKAGE_SOURCE=local | Instructs the build to copy the Deephaven and etcd tar files from a local directory. |
--build-arg PACKAGE_SOURCE=url | Instructs the build to download the Deephaven and etcd tar files from URLs. |
--build-arg DH_TAR_URL=<url-for-deephaven-installation-tar> | Sets the URL to download the Deephaven binaries from (when PACKAGE_SOURCE is url ). |
--build-arg DH_INSTALLER_URL=<url-for-deephaven-installer-jar> | Sets the URL to download the Deephaven installer JAR from (when PACKAGE_SOURCE is url ). |
--build-arg COREPLUS_TAR_URL=<url-for-coreplus-tar> | Sets the URL to download the Core+ tar from (when PACKAGE_SOURCE is url ). |
--build-arg ETCD_TAR_URL=<url-for-etcd-package> | Sets the URL to download etcd from (when PACKAGE_SOURCE is url ). |
--build-arg DH_TAR_FILE=<file-for-deephaven-installation-tar> | Sets the file name (in the dh-base directory) to use as the Deephaven installation archive (when PACKAGE_SOURCE is local ). Example: deephaven-enterprise-jdk11-1.20231218.115.tar.gz . |
--build-arg DH_INSTALLER_FILE=<file-for-deephaven-installer-jar> | Sets the file name (in the dh-base directory) to use as the Deephaven installer JAR (when PACKAGE_SOURCE is local ). Example: Installer-1.20231218.115.jar . |
--build-arg COREPLUS_TAR_FILE=<file-for-coreplus-tar> | Sets the file name (in the dh-base directory) to use as the Core+ tar (when PACKAGE_SOURCE is local ). Example: deephaven-coreplus-0.32.0-1.20231218.115-jdk11.tgz . |
--build-arg ETCD_TAR_FILE=<file-for-etcd-package> | Sets the file name (in the dh-base directory) to use for the etcd installation (when PACKAGE_SOURCE is local ). Example: etcd-v3.5.7-linux-amd64.tar.gz . |
--build-arg JDK_PACKAGE_NAME=<jdk-package-name> | Sets the JDK package to install with dnf . Example: java-11-openjdk-devel . |
--build-arg ADD_EPEL_REPO=<Y or N> | Determines whether the EPEL repo is added to the image. Setting this to N or n disables adding the EPEL repo. All other values are treated as Y . (Default is Y .) |
--build-arg ADD_DEBUG_TOOLS=<Y or N> | Determines whether additional debug packages are added to the image. Setting this to N or n disables adding additional debug packages. All other values are treated as Y . (Default is Y .) |
These arguments can be configured automatically by build_images.sh.
Base image files
- The illumon-db-<VERSION>.tar.gz file is copied to /tmp/illumon-db.tar.gz.
- The Installer-<VERSION>.jar file is copied to /tmp/Installer.jar.
- The etcd-<VERSION>-linux-amd64.tar.gz file is copied to /tmp/etcd/etcd.tar.gz.
Packages
The following sets of packages are installed via dnf/microdnf:
- Using microdnf: dnf and subscription-manager.
- The EPEL repo, unless the image is built with --build-arg ADD_EPEL_REPO=N. The repo is installed from https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm or https://dl.fedoraproject.org/pub/epel/epel-release-latest-9.noarch.rpm, depending on OS major version (8 or 9).
- The following Deephaven dependencies:
  - For OS major version 8: redhat-lsb-core, initscripts, monit, wget, unzip, glibc, libgcc, libgomp, libstdc++, bzip2, git, which, openssl, rsync-3.1*
  - For OS major version 9: initscripts, tar, hostname, monit, cronie, ed, wget, unzip, glibc, libgcc, libgomp, libstdc++, zeromq-devel, bzip2, git, which, openssl, rsync-3.*
- sudo
- The JDK. This is determined by the JDK_PACKAGE_NAME build argument. (See base image build arguments for more details.)
- Helpful packages for debugging from within the container, unless the image is built with --build-arg ADD_DEBUG_TOOLS=N: bind-utils (provides dig) and net-tools (provides netstat).
dh-stage
The dh-stage image is built from dh-base and has both Deephaven and etcd installed, but not initialized. The general steps to build dh-stage are:
- Copy in the script with the common functions for container initialization (dh-init-common.sh).
- Copy in the common cluster.cnf.base file, which contains environment variables that influence installation scripts. This is customized for the initial cluster layout (i.e., the specific infra/query nodes and their hostnames) when dh-init.sh runs (when the infra container first initializes). The final, customized cluster.cnf is stored in shared configuration and used by the other nodes as they initialize.
- Update /etc/sudoers with the required permissions for the Deephaven user accounts.
- Extract the Deephaven tar archive (to /usr/illumon and /etc/sysconfig/installer-deephaven).
- Create several required directories and links.
- Create the required user accounts and groups.
- Extract the etcd tar and install etcd to /usr/bin/.
- Add the DH username variables to cluster.cnf.base.
- Set the OS name and version string in cluster.cnf.base (based on values extracted from /etc/os-release).
- Copy in any plugins (from dh-stage/plugins/ to /tmp/plugins). The plugins are extracted and installed during container initialization.
Stage image build arguments
The following build arguments influence how dh-stage is built:
Example --build-arg command | Description |
---|---|
--build-arg MONIT_USER=root | Sets the account used to run the monit process. Default is root . |
--build-arg MONIT_GROUP=root | Sets the group account used to run the monit process. Default is root . |
--build-arg ADMIN_USER=root | Sets the account used to run admin processes. Default is root . |
--build-arg ADMIN_GROUP=root | Sets the group account used to run admin processes. Default is root . |
--build-arg MERGE_USER=root | Sets the account used to run merge processes. Default is root . |
--build-arg MERGE_GROUP=root | Sets the group account used to run merge processes. Default is root . |
--build-arg QUERY_USER=root | Sets the account used to run query processes. Default is root . |
--build-arg QUERY_GROUP=root | Sets the group account used to run query processes. Default is root . |
--build-arg SHARED_GROUP=root | Sets the group account used to obtain access to files shared by the admin, merge, and query users. Default is root . |
--build-arg SHARED_QUERY_GROUP=root | Sets the group account used for per-user workers to be able to run Deephaven workers. Default is root . |
--build-arg ETCD_USER=root | Sets the account used to run the etcd process. Default is root . |
These arguments can be configured automatically by build_images.sh.
Use --use-non-root-accounts with build_images.sh to change the values of these arguments. --use-non-root-accounts instructs build_images.sh to check for account name environment variables; any that have not been set will use the Deephaven defaults:
# Default users and groups, or values provided by environment variables
monit_user="${DH_MONIT_USER:-irisadmin}"
monit_group="${DH_MONIT_GROUP:-irisadmin}"
admin_user="${DH_ADMIN_USER:-irisadmin}"
admin_group="${DH_ADMIN_GROUP:-irisadmin}"
merge_user="${DH_MERGE_USER:-dbmerge}"
merge_group="${DH_MERGE_GROUP:-dbmerge}"
query_user="${DH_QUERY_USER:-dbquery}"
query_group="${DH_QUERY_GROUP:-dbquery}"
shared_group="${DH_SHARED_GROUP:-dbmergegrp}"
shared_query_group="${DH_SHARED_QUERY_GROUP:-dbquerygrp}"
etcd_user="${DH_ETCD_USER:-etcd}"
For example, if the environment variable DH_MONIT_USER is set when build_images.sh is run with --use-non-root-accounts, the set value is used for the monit user. If it is not set, then irisadmin is used for the monit user.
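For instance, to override only the monit account while keeping the other defaults (dhmonit is a placeholder name; the artifact file names match the dh-base listing shown earlier):
export DH_MONIT_USER=dhmonit
export DH_MONIT_GROUP=dhmonit
./build_images.sh --use-non-root-accounts \
  --dh-file deephaven-enterprise-jdk17-1.20240517.344.tar.gz \
  --etcd-file etcd-v3.5.13-linux-amd64.tar.gz \
  --installer-file Installer-1.20240517.344.jar \
  --coreplus-tar-file deephaven-coreplus-0.37.4-1.20240517.344-jdk17.tgz \
  --jdk17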
dh-infra
The dh-infra container is built from dh-stage and contains configuration files and scripts required to finalize the Deephaven installation and initialize etcd.
If USE_ENVOY=enabled, the dh-infra Containerfile expects to copy an Envoy binary named envoy_binary from the dh-infra directory. build_images.sh can be passed a different name for the Envoy binary file, which it will copy to envoy_binary before building the dh-infra image. Envoy requires a base OS with a major version of 9 (RHEL 9 or Rocky 9).
Infra image build arguments
The following build argument influences how dh-infra is built:
Example --build-arg command | Description |
---|---|
--build-arg USE_ENVOY=disabled | Indicates whether to include the Envoy reverse proxy binary. Default is no Envoy. |
This argument can be configured automatically by build_images.sh.
dh-queryserver
The dh-queryserver container is built from dh-stage and contains configuration files and scripts required to start a query server node that uses the configuration initialized by the dh-infra node.
Image scripts and base configuration files
The following files are included in the dh-infra image:
Path in dh-infra | Path in container | Purpose |
---|---|---|
bootstrap-properties/cluster.cnf.base | /hook-scripts/cluster.cnf.base | Initial installation configuration file containing environment variables that influence installation scripts. |
bootstrap-properties/iris-endpoints.prop | /hook-scripts/iris-endpoints.prop | Configuration file containing the hostnames/addresses at which the various Deephaven services are available. |
bootstrap-properties/iris-environment.prop | /hook-scripts/iris-environment.prop | Primary Deephaven configuration file, containing miscellaneous properties for the various Deephaven services. |
dh-init.sh | /dh/bin/dh-init.sh | Container entrypoint. Configures Deephaven (including by calling preinstall-hook.sh ) and starts monit , which starts the Deephaven processes. |
preinstall-hook.sh | /dh/bin/preinstall-hook.sh | Deephaven cluster initialization script for containerized environments. |
The following files are included in the dh-queryserver image:
Path in dh-queryserver | Path in container | Purpose |
---|---|---|
dh-init.sh | /dh/bin/dh-init-queryserver.sh | Container entrypoint. Configures Deephaven (including by calling preinstall-hook-queryserver.sh ) and starts monit , which starts the Deephaven processes. |
preinstall-hook.sh | /dh/bin/preinstall-hook-queryserver.sh | Deephaven host initialization script for query server containers. |
The following files are included in the dh-stage image (and thus are included in both dh-infra and dh-queryserver):
Path in dh-stage | Path in container | Purpose |
---|---|---|
bootstrap-properties/cluster.cnf.base | /opt/cluster.cnf.base | Environment file containing variables that influence installation scripts. |
dh-init-common.sh | /dh/bin/dh-init-common.sh | Contains common functions used by both dh-infra and dh-queryserver . |
dh-etcd-common.sh | /dh/bin/dh-etcd-common.sh | Contains a function for starting etcd. |
keystore-prepare.sh | /dh/bin/keystore-prepare.sh | Creates the keystore and truststore required by Deephaven (from the cert files on the /source-tls volume). |
build_images.sh
This script automates the process of calling podman build for each of the containers (including setting build arguments and tagging the resulting images).
The script takes the following arguments:
Argument | Default | Description |
---|---|---|
--base-image <image> | docker.io/amd64/rockylinux:8.7-minimal | (Optional) The base image to use for the dh-base container. |
--coreplus-tar-file <file> | N/A | (Optional) (One of --coreplus-tar-file or --coreplus-tar-url must be provided.) The path (relative to the dh-base Containerfile ) of the Core+ tar file. |
--coreplus-tar-url <URL> | N/A | (Optional) (One of --coreplus-tar-file or --coreplus-tar-url must be provided.) The URL from which to download the Core+ tar file. |
--dh-file <file> | N/A | (Optional) (One of --dh-file or --dh-url must be provided.) The path (relative to the dh-base Containerfile ) of the Deephaven installation archive. |
--dh-url <URL> | N/A | (Optional) (One of --dh-file or --dh-url must be provided.) The URL from which to download the Deephaven installation archive. |
--disable-debug-tools | false | (Optional) Whether to disable the installation of debug tools in the dh-base image. |
--disable-epel | false | (Optional) Whether to disable the installation of the Extra Packages for Enterprise Linux repository in the dh-base image. |
--envoy-file <file> | N/A | (Optional) (Required if --include-envoy is used.) The path (relative to the dh-base Containerfile ) of the Envoy binary file. |
--etcd-file <file> | N/A | (Optional) (One of --etcd-file or --etcd-url must be provided.) The path (relative to the dh-base Containerfile ) of the etcd tar file. |
--etcd-url <URL> | N/A | (Optional) (One of --etcd-file or --etcd-url must be provided.) The URL from which to download the etcd tar file. |
--include-envoy | false | (Optional) Whether to include the Envoy binary in the dh-infra image and configure it to run as a service. This option requires a RHEL 9 or Rocky 9 image as the --base-image , and requires --envoy-file to be set. |
--installer-file <file> | N/A | (Optional) (One of --installer-file or --installer-url must be provided.) The path (relative to the dh-base Containerfile ) of the Deephaven installer JAR. |
--installer-url <URL> | N/A | (Optional) (One of --installer-file or --installer-url must be provided.) The URL from which to download the Deephaven installer JAR. |
--jdk11 | --jdk17 | N/A | (Optional) Which Java JDK version to use. One of these, or --jdk-package , must be specified. |
--jdk-package <JDK package name> | java-11-openjdk-devel for --jdk11 or java-17-openjdk-devel for --jdk17 | The name of the JDK package to install (using DNF) when building the base image. |
--python-package <Python package name> | None | The name of the Python package to install (using DNF) when building the base image. The chosen package must be an available package on the chosen base image. (For example, package names changed between Rocky 8/RHEL 8 and Rocky 9/RHEL 9.) |
--use-non-root-accounts | false | (Optional) Whether to use default Deephaven user and groups accounts, or values provided by environment variables, for image processes instead of running all as root (all as root is the default if this argument is omitted). |
-h | --help | N/A | Print usage information and exit. |
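For instance, a URL-based build might look like the following sketch; the URLs are placeholders for wherever your organization hosts the artifacts:
./build_images.sh \
  --dh-url https://artifacts.example.com/deephaven-enterprise-jdk17-1.20240517.344.tar.gz \
  --installer-url https://artifacts.example.com/Installer-1.20240517.344.jar \
  --coreplus-tar-url https://artifacts.example.com/deephaven-coreplus-0.37.4-1.20240517.344-jdk17.tgz \
  --etcd-url https://artifacts.example.com/etcd-v3.5.13-linux-amd64.tar.gz \
  --jdk17 \
  --base-image docker.io/amd64/rockylinux:9.3-minimal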
Shared configuration directory contents
The following table provides an overview of the subdirectories of /deephaven-shared-config
:
Item | Type | Notes |
---|---|---|
auth | Directory | Directory under /etc/sysconfig/deephaven that is shared across pods. See below for more details. |
cluster-status | File | Contains a string describing the current status of the Podman deployment (e.g., INITIALIZING or COMPLETE). This refers to the state of the shared Deephaven configuration (except everything stored in etcd). It is only updated by the infra pod. Query pods will wait to initialize until the cluster status is COMPLETE. If a default password for the iris account is configured, it is only set during cluster initialization. |
dh-config | Directory | Directory under /etc/sysconfig/deephaven that is shared across pods. See below for more details. |
dh_etcd_config.tgz | File | Contains the etcd client configuration files used by Deephaven. This is extracted to /etc/sysconfig/deephaven/etcd on the infra node during cluster initialization (which is later moved to /deephaven-shared-config and replaced with a symlink). When redeploying a cluster (e.g., during an upgrade), this file must not be deleted, as it is re-extracted to regain access to the original etcd database. |
etcd | Directory | Directory under /etc/sysconfig/deephaven that is shared across pods. See below for more details. |
installation | Directory | Contains installation scripts generated from the installer JAR (/tmp/Installer.jar on the infra pod) during initialization. |
trust | Directory | Directory under /etc/sysconfig/deephaven that is shared across pods. See below for more details. |
/etc/sysconfig/deephaven directory contents
The following table provides an overview of the subdirectories of /etc/sysconfig/deephaven and whether they are shared across containers in a Podman deployment of Deephaven:
Item | Type | Shared? | Notes |
---|---|---|---|
auth | Directory | YES | Contains truststore links and private keys, including priv-tdcp.base64.txt (which is required to run the TDCP). |
auth-user | Directory | NO | Contains webServices-keystore.p12 , which is not used in the Podman deployment (files under /deephaven-tls are used instead). |
backups | Directory | NO | Contains backup files created by the Deephaven installer. |
dh-config | Directory | YES | Contains the configuration files required for Deephaven services to communicate with the Deephaven configuration server. |
etcd | Directory | YES | Contains the etcd client configuration, which is required for the Deephaven configuration server and command line tools to connect to etcd . |
illumon.confs.<version> | Directory | NO | Contains the Deephaven hostconfig files, which set environment variables used by Deephaven services and utility scripts. |
illumon.confs.latest | Symlink | NO | Link to latest illumon.confs.<version> . |
illumon.d.<version> | Directory | NO | Contains java_lib /hotfixes /integrations /etc. |
illumon.d.latest | Symlink | NO | Link to latest illumon.d . |
illumon.iris.hostconfig | File | NO | Node-specific hostconfig. |
monit | Directory | NO | Contains Monit configuration files for Deephaven services. |
plugins | Directory | NO | Contains Deephaven plugins. |
python | Directory | NO | |
schema | Directory | NO | Deprecated. (Schema files are stored in etcd .) |
trust | Directory | YES | Contains a truststore (generated by Deephaven installation, distinct from the one in /deephaven-tls ) that contains the configuration server's certificate. This is used by processes that must connect to the configuration server (such as the authentication server). |
For reference, here is an example ls from illumon.d.latest on a non-containerized Deephaven installation:
$ ls -l illumon.d.latest/
total 12
lrwxrwxrwx. 1 irisadmin irisadmin 29 Jun 9 18:07 auth -> /etc/sysconfig/deephaven/auth
drwx------. 2 irisadmin irisadmin 6 Apr 1 02:33 auth-backup
lrwxrwxrwx. 1 irisadmin irisadmin 34 Apr 1 02:31 auth-user -> /etc/sysconfig/deephaven/auth-user
drwxr-xr-x. 2 irisadmin irisadmin 6 Apr 1 02:31 calendars
drwxr-xr-x. 2 irisadmin irisadmin 6 Apr 1 02:31 chartthemes
drwxr-xr-x. 4 irisadmin irisadmin 4096 Jun 9 18:07 client_update_service
lrwxrwxrwx. 1 irisadmin irisadmin 34 Jun 9 18:07 dh-config -> /etc/sysconfig/deephaven/dh-config
lrwxrwxrwx. 1 irisadmin irisadmin 29 Jun 9 18:07 etcd -> /etc/sysconfig/deephaven/etcd
drwxr-xr-x. 2 irisadmin irisadmin 6 Jun 9 18:07 hotfixes
drwxr-xr-x. 2 irisadmin irisadmin 6 Apr 1 02:31 integrations
drwxr-xr-x. 2 irisadmin irisadmin 124 Apr 28 09:38 java_lib
lrwxrwxrwx. 1 irisadmin irisadmin 30 Jun 9 18:07 monit -> /etc/sysconfig/deephaven/monit
drwxr-xr-x. 3 irisadmin irisadmin 4096 Jun 9 18:07 monit.new
drwxr-xr-x. 2 irisadmin irisadmin 6 Jun 9 18:07 override
drwxrwxr-x. 2 irisadmin dbmergegrp 18 May 1 12:56 plugins
drwxr-xr-x. 2 irisadmin irisadmin 4096 May 28 22:03 resources
lrwxrwxrwx. 1 irisadmin irisadmin 31 Jun 9 18:07 schema -> /etc/sysconfig/deephaven/schema
Note that many of the directories in illumon.d.latest are simply links back to directories in /etc/sysconfig/deephaven. Of these directories, those that must be shared in a Podman deployment (i.e., auth, dh-config, etcd, and trust) are stored in a volume mounted at /deephaven-shared-config, and links are created from /etc/sysconfig/deephaven/<dir> to the corresponding directory in /deephaven-shared-config.
Configuration changes to define a new pod
The following two sections demonstrate the 'before and after' of configuration changes to iris-endpoints.prop to add a pod to a running cluster.
The iris-endpoints.prop file can be exported by connecting to the infra pod (e.g., podman exec -it dh-infra bash) and running:
/usr/illumon/latest/bin/dhconfig properties export -f iris-endpoints.prop -d .
After making the requisite changes to the iris-endpoints.prop file, the updated properties can be reimported with:
/usr/illumon/latest/bin/dhconfig properties import -f iris-endpoints.prop -d .
Example iris-endpoints.prop before adding a query pod
[service.name=iris_controller|controller_tool|configuration_server] {
iris.db.1.host=rhel8-test-infra.devrel.deephaven.io
iris.db.1.classPushList=
iris.db.1.class=Query
iris.db.2.host=rhel8-test-query.devrel.deephaven.io
iris.db.2.classPushList=
iris.db.2.class=Query
iris.db.3.host=rhel8-test-infra.devrel.deephaven.io
iris.db.3.classPushList=
iris.db.3.port=30002
iris.db.3.class=Merge
iris.db.3.websocket.port=22060
iris.db.4.host=rhel8-test-query.devrel.deephaven.io
iris.db.4.classPushList=
iris.db.4.port=30002
iris.db.4.class=Merge
iris.db.4.websocket.port=22060
iris.db.nservers=4
}
[host=dh-infra-pod] {
RemoteQueryDispatcherParameters.host=rhel8-test-infra.devrel.deephaven.io
}
[host=dh-query1-pod] {
RemoteQueryDispatcherParameters.host=rhel8-test-query.devrel.deephaven.io
}
Example iris-endpoints.prop after adding a query pod
The example configuration below shows how the above example should be modified to add a new host (in this case, rhel8-test-query2.devrel.deephaven.io, run as dh-query2-pod). Note the three critical changes to iris-endpoints.prop:
- New dispatchers are defined (iris.db.5 and iris.db.6). Note that both a query dispatcher and a merge dispatcher are added for the new host (rhel8-test-query2.devrel.deephaven.io).
- The iris.db.nservers property is updated to match the number of iris.db.<N> sections.
- An additional host-scoped configuration section is added for the new pod, informing the query dispatcher of its external-facing hostname. If this step is skipped, the dispatcher will be unable to communicate with etcd. This may manifest in errors such as the following:
java.lang.RuntimeException: Failed to get initial discovery watcher update
    at io.deephaven.enterprise.dispatcher.client.DispatcherClient.init(DispatcherClient.java:193)
    at io.deephaven.enterprise.dnd.Main.runDispatcherClient(Main.java:427)
    at io.deephaven.enterprise.dnd.Main.main(Main.java:164)
Additionally, be aware that Persistent Queries are assigned to a server based on the server ID number (e.g., 3 in iris.db.3). Changing a server (e.g., changing iris.db.3.host=host3.mycompany.net to iris.db.3.host=host6.mycompany.net) will cause PQs that were running on host3.mycompany.net to be assigned to host6.mycompany.net.
[service.name=iris_controller|controller_tool|configuration_server] {
iris.db.1.host=rhel8-test-infra.devrel.deephaven.io
iris.db.1.classPushList=
iris.db.1.class=Query
iris.db.2.host=rhel8-test-query.devrel.deephaven.io
iris.db.2.classPushList=
iris.db.2.class=Query
iris.db.3.host=rhel8-test-infra.devrel.deephaven.io
iris.db.3.classPushList=
iris.db.3.port=30002
iris.db.3.class=Merge
iris.db.3.websocket.port=22060
iris.db.4.host=rhel8-test-query.devrel.deephaven.io
iris.db.4.classPushList=
iris.db.4.port=30002
iris.db.4.class=Merge
iris.db.4.websocket.port=22060
iris.db.5.host=rhel8-test-query2.devrel.deephaven.io
iris.db.5.classPushList=
iris.db.5.class=Query
iris.db.6.host=rhel8-test-query2.devrel.deephaven.io
iris.db.6.classPushList=
iris.db.6.port=30002
iris.db.6.class=Merge
iris.db.6.websocket.port=22060
iris.db.nservers=6
}
[host=dh-infra-pod] {
RemoteQueryDispatcherParameters.host=rhel8-test-infra.devrel.deephaven.io
}
[host=dh-query1-pod] {
RemoteQueryDispatcherParameters.host=rhel8-test-query.devrel.deephaven.io
}
[host=dh-query2-pod] {
RemoteQueryDispatcherParameters.host=rhel8-test-query2.devrel.deephaven.io
}