Multiple server deployment

In this section, we will show how to deploy the Deephaven components in a multi-server environment. We will deploy:

  1. One server combining a Data Import Server with an Infrastructure Server.
  2. Two Query Servers.

The solution will also depend on a storage volume that needs to be available to mount from each of the Deephaven servers.


Prerequisites

  1. A storage layer (e.g., NFS Server) with exported volumes available to the deployed Deephaven servers.
  • Deephaven ships with a schema using the namespace DbInternal, which contains query performance data among other data. After the Deephaven software install, there will be directories in /db/Intraday/DbInternal and /db/Systems/DbInternal.
  • In this deployment, we will mount a NFS volume for the DbInternal historical data and use that to demonstrate the steps involved to provide historical data volumes for any namespace.
  2. Three servers or VMs with at least the minimum physical or virtual hardware resources.

Prepare the storage layer (NFS Server)

  1. Ensure the Deephaven users and groups are able to read and write to any NFS exports.
  2. Create a data volume on the storage server to hold the Historical data.
  • This will be the first historical volume for the 'DbInternal' namespace so we will call it 'dbinternal-0'.
  • DbInternal historical data does not use much space; 10 GB should be sufficient.
  3. Export the dbinternal-0 data volume.
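On a Linux NFS server, steps 2 and 3 amount to creating the volume directory and exporting it. A minimal sketch; the export path, client subnet, and mount options below are illustrative assumptions to adapt to your environment:

```shell
# On the NFS server: create the volume directory and export it.
# The path, subnet, and options are assumptions; adjust as needed.
sudo mkdir -p /exports/dbinternal-0
echo '/exports/dbinternal-0 10.0.0.0/24(rw,sync,no_subtree_check)' \
    | sudo tee -a /etc/exports
sudo exportfs -ra          # re-export everything listed in /etc/exports
showmount -e localhost     # verify dbinternal-0 appears in the export list
```

Remember that the exported directory's ownership must allow the Deephaven merge user on the clients to write, per step 1.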

Deploy the Deephaven Servers

  1. Provision three servers using the procedures outlined in the Deephaven Installation Guide. Note: use the hardware sizing given in this guide in place of the sizing provided in the Installation Guide.
  2. Install the Deephaven software on each server. There are two packages to install: the Deephaven Database package and the Deephaven Configuration package. Your Deephaven account representative will provide you with the latest versions of both. Copy the packages onto each provisioned Deephaven Linux host, then SSH onto the server and run the following commands to install them:
DH_VERSION="1.20231218.432"
# valid JAVA_VERSION is 8, 11 or 17
JAVA_VERSION=17
sudo yum localinstall "deephaven-enterprise-${DH_VERSION}-${JAVA_VERSION}-1.rpm" -y

The installation includes a default set of configuration files and sample data for a basic Deephaven installation.

  3. Install the MySQL Java Connector software: Deephaven can use MySQL (MariaDB) to store authentication and database ACL information. This requires the mysql-connector-java JAR to be installed into /etc/sysconfig/illumon.d/java_lib:
wget https://dev.mysql.com/get/Downloads/Connector-J/mysql-connector-java-8.0.28.tar.gz
tar -xvzf mysql-connector-java-8.0.28.tar.gz
cd mysql-connector-java-8.0.28
sudo cp mysql-connector-java-8.0.28.jar /etc/sysconfig/illumon.d/java_lib/

Configure Deephaven software on each server

Setup the Historical Partition on each server

  1. Verify the DbInternal historical mounts: ls -l /db/Systems/DbInternal/
  2. Two directories, WritablePartitions and Partitions, should exist and be owned by the user dbmerge. If they do not, create them and set the proper permissions.
  3. Mount the DbInternal-0 NFS volume on the provided Partitions/0 sub-directory:
sudo mount -t nfs <nfsserver address>:/dbinternal-0 /db/Systems/DbInternal/Partitions/0
df -h /db/Systems/DbInternal/Partitions/0
  4. Verify the link from WritablePartitions/0 to the Partitions/0 mount exists:
    ls -al /db/Systems/DbInternal/WritablePartitions
    df -h /db/Systems/DbInternal/WritablePartitions/0
    
    If the symbolic link does not exist, create it as follows:
    sudo ln -s /db/Systems/DbInternal/Partitions/0 /db/Systems/DbInternal/WritablePartitions/0
    
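If the partition directories or the mount need to be created by hand, the steps above can be sketched as follows. The ownership and fstab options are assumptions; match the ownership used elsewhere in your /db tree, and substitute your own NFS server for the placeholder:

```shell
# Create the partition directories if they do not exist (step 2).
# Ownership by dbmerge is assumed; verify against your existing /db tree.
sudo mkdir -p /db/Systems/DbInternal/Partitions/0
sudo mkdir -p /db/Systems/DbInternal/WritablePartitions
sudo chown -R dbmerge /db/Systems/DbInternal

# Persist the NFS mount across reboots (step 3).
echo '<nfsserver address>:/dbinternal-0 /db/Systems/DbInternal/Partitions/0 nfs defaults 0 0' \
    | sudo tee -a /etc/fstab
sudo mount -a
```

Persisting the mount in /etc/fstab ensures the historical partition is available again after a server reboot, before Deephaven processes start.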

Update the iris-common.prop on each server

The default configuration file comes configured with localhost for all the services. Update the properties by replacing the localhost values with the IP address or FQDN of the data import/infrastructure server.

Update the following properties in /etc/sysconfig/illumon.d/resources/iris-common.prop:

intraday.server.host.1=<INFRASTRUCTURE_SERVER_IP_1>
PersistentQueryController.host=<INFRASTRUCTURE_SERVER_IP_1>
authentication.server.list=<INFRASTRUCTURE_SERVER_IP_1>
MysqlDbAclProvider.host=<INFRASTRUCTURE_SERVER_IP_1>
dbaclwriter.host=<INFRASTRUCTURE_SERVER_IP_1>
RemoteUserTableLogger.host=<INFRASTRUCTURE_SERVER_IP_1>

Modify the SMTP properties so that critical error emails are sent to a defined location. These properties must point to a reachable SMTP server; if you don't have one, localhost should work for most installations:

smtp.mx.domain=localhost
critEmail=root@localhost

For each query server, set the following pair of properties to its IP address or FQDN, adding further numbered pairs beyond the first as needed; there should be one pair per query server:

iris.db.1.host=<QUERY_SERVER_IP_1>
iris.db.1.classPushList=
iris.db.2.host=<QUERY_SERVER_IP_2>
iris.db.2.classPushList=

Adjust the total count property for the total number of query servers:

iris.db.nservers=2
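Since the same handful of localhost values must change on every server, the substitution can be scripted with sed. A minimal sketch, demonstrated on a sample file rather than the live one; the address 10.0.0.10 stands in for your infrastructure server:

```shell
# Demonstrate the bulk substitution on a sample file; in practice run the
# same sed command against /etc/sysconfig/illumon.d/resources/iris-common.prop.
# 10.0.0.10 is a placeholder for the infrastructure server's address.
PROP=/tmp/iris-common.prop.example
cat > "$PROP" <<'EOF'
intraday.server.host.1=localhost
PersistentQueryController.host=localhost
authentication.server.list=localhost
EOF
sed -i 's/localhost/10.0.0.10/g' "$PROP"
grep -c '10.0.0.10' "$PROP"   # all three lines updated
```

Review the result before copying it into place: a blanket substitution would also rewrite properties such as smtp.mx.domain, which may legitimately remain localhost.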

Update the query_servers.txt

All query servers that are made available to end-users to run queries need to be listed in the /etc/sysconfig/illumon.d/resources/query_servers.txt file.

The default setting is localhost. Replace the contents of this file with the IP addresses or FQDNs of the servers you designated as query servers, one per line.

For example:

echo <QUERY_SERVER_IP_1> | \
    sudo tee /etc/sysconfig/illumon.d/resources/query_servers.txt
echo <QUERY_SERVER_IP_2> | \
    sudo tee -a /etc/sysconfig/illumon.d/resources/query_servers.txt
cat /etc/sysconfig/illumon.d/resources/query_servers.txt

Custom configuration for Data Import/Infrastructure Server

Configure Monit on Data Import/Infrastructure Server

The Deephaven Monit configurations are located in /etc/sysconfig/illumon.d/monit/.

Each Deephaven process has its own Monit configuration file. Any configuration with a .conf file extension will be loaded by Monit and started. To disable an individual process, change the file extension to anything else.

We use .disabled to easily recognize which services are not under Monit control.

  1. Disable the db_query_server. Run the following commands:
cd /etc/sysconfig/illumon.d/monit
sudo mv 03-db_query.conf 03-db_query.conf.disabled
  2. Restart Monit:
sudo service monit restart
sudo monit reload
sudo monit restart all
  3. To check the state of the Deephaven processes, run:
sudo monit summary

The output should look something like this:

The Monit daemon 5.14 uptime: 0m
Process 'client_update_service'     Running
Process 'tailer1'                   Running
Process 'iris_controller'           Running
Process 'db_query_server'           Running
Process 'db_ltds'                   Running
Process 'db_dis'                    Running
Process 'db_acl_write_server'       Running
Process 'authentication_server'     Running
System '<ServerHost>' Running

Configure query servers

Configure Monit on query servers

  1. To disable all processes except the db_query_server process, run the following commands:
cd /etc/sysconfig/illumon.d/monit
sudo mv 02-authentication_server.conf 02-authentication_server.conf.disabled
sudo mv 02-db_acl_write_server.conf 02-db_acl_write_server.conf.disabled
sudo mv 03-db_dis.conf 03-db_dis.conf.disabled
sudo mv 03-db_ltds.conf 03-db_ltds.conf.disabled
sudo mv 03-iris_controller.conf 03-iris_controller.conf.disabled
sudo mv 03-tailer1.conf 03-tailer1.conf.disabled
  2. Restart Monit:
sudo service monit restart
sudo monit reload
  3. To check the state of the Deephaven processes, run:
sudo monit summary

The output should look similar to the following:

The Monit daemon 5.14 uptime: 3h 24m
Process 'db_query_server'           Running
System '<ServerHost>' Running

Final steps

Congratulations. Deephaven is now running on three servers, with two of them serving queries for end users.

The next steps are to set up nightly merges of Intraday data to the mounted Historical data volume and to have users connect to Deephaven using the Deephaven Console.

Nightly merge jobs

To merge the query performance Intraday data to the mounted Historical data volume, run the merge command on the infrastructure server:

sudo runuser -s /bin/bash dbmerge -c "/usr/illumon/latest/bin/db_merge_import_base.sh 2 DbInternalAllFeeds root@localhost `date +%Y-%m-%d`"

This command should be added as a nightly cron job on the infrastructure server:

sudo tee /etc/cron.d/illumon-mergeDbInternal > /dev/null <<EOT
# Nightly merge of DbInternal Intraday data to the Historical partition.
10 0 * * 0-6 root su - dbmerge -c "/usr/illumon/latest/bin/db_merge_import_base.sh 2 DbInternalAllFeeds root@localhost | logger -t dbinternal-merge"
EOT

Deephaven Console installation

To use Deephaven Classic, refer to the Deephaven Launcher installation guide to download and connect remotely from your Windows, Mac or Linux desktop.

End users can now use the Deephaven Console client application to access Deephaven and begin executing queries.

As noted above, Deephaven ships with a schema using the namespace DbInternal, which contains query performance data among other data. This data can be queried from the Deephaven Console as follows:

t1=db.i("DbInternal", "QueryPerformanceLog").where("Date=`" + new Date().format('yyyy-MM-dd') + "`")

t2=db.i("DbInternal", "PersistentQueryStateLog").where("Date=`" + new Date().format('yyyy-MM-dd') + "`")