Multiple server deployment

In this section, we will show how to deploy the Deephaven components in a multi-server environment. We will deploy:

  1. One server combining a Data Import Server with an Infrastructure Server.
  2. Two Query Servers.

The solution also depends on a storage volume that must be mountable from each of the Deephaven servers.

(Diagram: one combined Data Import/Infrastructure Server and two Query Servers, each mounting the shared storage volume.)

Prerequisites

  1. A storage layer (e.g., an NFS server) with exported volumes available to the deployed Deephaven servers.
  • Deephaven ships with a schema using the namespace DbInternal, which contains query performance data among other data. After the Deephaven software install, the directories /db/Intraday/DbInternal and /db/Systems/DbInternal will exist.
  • In this deployment, we will mount an NFS volume for the DbInternal historical data and use it to demonstrate the steps involved in providing historical data volumes for any namespace.
  2. Three servers or VMs with at least the minimum physical or virtual hardware resources.

Prepare the storage layer (NFS Server)

  1. Ensure the Deephaven users and groups are able to read and write to any NFS exports.
  2. Create a data volume on the storage server to hold the Historical data.
  • This will be the first historical volume for the DbInternal namespace, so we will call it dbinternal-0.
  • DbInternal historical data does not use much space; 10 GB should be sufficient.
  3. Export the dbinternal-0 data volume.
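On a Linux NFS server, the export might look like the following sketch; the backing directory /exports/dbinternal-0 and the 10.0.0.0/24 client subnet are placeholders to adjust for your environment:

```shell
# /etc/exports entry (sketch): export the dbinternal-0 volume read-write
# to the subnet hosting the Deephaven servers (placeholder values)
/exports/dbinternal-0  10.0.0.0/24(rw,sync,no_subtree_check)
```

After editing /etc/exports, apply the change with `sudo exportfs -ra` and confirm it with `exportfs -v`.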

Deploy the Deephaven Servers

  1. Provision three servers using the procedures outlined in the Deephaven Installation Guide. Note: use the hardware sizing given in this guide in place of the sizing provided in the Installation Guide.
  2. Install the Deephaven software on each server. There are two packages to install: the Deephaven Database package and the Deephaven Configuration package. Your Deephaven account representative will provide the latest versions of both. First copy the packages onto each provisioned Deephaven Linux host, then SSH onto the server and run the following commands to install them:
DH_VERSION="1.20240517.344"
# valid JAVA_VERSION is 11 or 17
JAVA_VERSION=17
sudo dnf install "deephaven-enterprise-${DH_VERSION}-${JAVA_VERSION}-1.rpm" -y
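As a quick sanity check before installing, the package filename can be assembled from the two variables (the values shown are the ones used above):

```shell
# Build the expected RPM filename from the version variables
DH_VERSION="1.20240517.344"
JAVA_VERSION=17
RPM_FILE="deephaven-enterprise-${DH_VERSION}-${JAVA_VERSION}-1.rpm"
echo "${RPM_FILE}"   # deephaven-enterprise-1.20240517.344-17-1.rpm
```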

The installation includes a default set of configuration files and sample data for a basic Deephaven installation.

  3. Install the MySQL Connector/J software: Deephaven can use MySQL (MariaDB) to store authentication and database ACL information. This requires the MySQL Connector/J JAR to be installed into /etc/sysconfig/illumon.d/java_lib:
wget https://dev.mysql.com/get/Downloads/Connector-J/mysql-connector-java-8.0.28.tar.gz
tar -xvzf mysql-connector-java-8.0.28.tar.gz
cd mysql-connector-java-8.0.28
sudo cp mysql-connector-java-8.0.28.jar /etc/sysconfig/illumon.d/java_lib/

Set up the Historical Partition on each server

  1. Verify the DbInternal historical mounts: ls -l /db/Systems/DbInternal/
  2. Two directories should exist, WritablePartitions and Partitions, both owned by the user dbmerge. If they do not exist, create them and set the proper permissions.
  3. Mount the DbInternal-0 NFS volume on the provided Partitions/0 sub-directory:
sudo mount -t nfs <nfsserver address>:/dbinternal-0 /db/Systems/DbInternal/Partitions/0
df -h /db/Systems/DbInternal/Partitions/0
  4. Verify the link from WritablePartitions/0 to the Partitions/0 mount exists:
    ls -al /db/Systems/DbInternal/WritablePartitions
    df -h /db/Systems/DbInternal/WritablePartitions/0
    
    If the symbolic link does not exist, create it as follows:
    sudo ln -s /db/Systems/DbInternal/Partitions/0 /db/Systems/DbInternal/WritablePartitions/0
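The mount commands above do not persist across reboots. A sketch of an /etc/fstab entry to remount the volume at boot (nfsserver.example.com is a placeholder for your storage server's address):

```shell
# /etc/fstab entry (sketch): mount the dbinternal-0 export at boot
# nfsserver.example.com is a placeholder; _netdev delays the mount until networking is up
nfsserver.example.com:/dbinternal-0  /db/Systems/DbInternal/Partitions/0  nfs  defaults,_netdev  0 0
```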
    

Final steps

Congratulations. Deephaven is now running on three servers with two servers serving queries for end users.

The next steps are to set up nightly merges of Intraday data to the mounted Historical data volume and for users to connect to Deephaven using the Deephaven Console.

Nightly merge jobs

To merge the query performance Intraday data to the mounted Historical data volume, run the merge command on the infrastructure server:

sudo runuser -s /bin/bash dbmerge -c "/usr/illumon/latest/bin/db_merge_import_base.sh 2 DbInternalAllFeeds root@localhost `date +%Y-%m-%d`"

This command should be added as a nightly cron job on the infrastructure server:

sudo tee /etc/cron.d/illumon-mergeDbInternal >/dev/null <<EOT
# Nightly merge of DbInternal Intraday data to the Historical partition
10 0 * * 0-6 root su - dbmerge -c "/usr/illumon/latest/bin/db_merge_import_base.sh 2 DbInternalAllFeeds root@localhost | logger -t dbinternal-merge"
EOT

Deephaven Console installation

The easiest way to use Deephaven is to launch the web UI. By default, the infrastructure server hosts the UI at https://deephaven-infra-server.example.com:8123/iriside for installations without Envoy and https://deephaven-infra-server.example.com:8000/iriside for installations with Envoy.

To use Deephaven Classic, refer to the Deephaven Launcher installation guide to download and connect remotely from your Windows, Mac or Linux desktop.

End users can now use the Deephaven Console client application to access Deephaven and begin executing queries.

Deephaven ships with a schema using the namespace DbInternal which contains query performance data among other data. This data can be queried from the Deephaven Console as follows:

t1 = db.liveTable("DbInternal", "QueryPerformanceLog").where("Date=`" + new Date().format('yyyy-MM-dd') + "`")

t2 = db.liveTable("DbInternal", "PersistentQueryStateLog").where("Date=`" + new Date().format('yyyy-MM-dd') + "`")