DACS integration

The DACS Integration plugin provides a standard method to apply Thomson Reuters DACS entitlements to data provided by the Deephaven system. Once the plugin is installed and applied to a table, the system automatically filters that table down to the set of items an individual user has permission to view, and records per-user usage for compliance auditing.

The plugin is composed of three main components:

  • An Authentication Hook, which allows the system to query DACS for an authenticated user's set of permissioned entities.
  • A Filter Generator used to apply DACS permissions on demand to tables requested by users.
  • A set of three system tables under the DACS namespace that record entity mappings, user permissions, and per-user entity usage.

Upon login, the plugin requests a particular user's set of permissioned entities (PEs) and records them to the system table DACS.PermissionMatrix. Once this is complete, and until the user completely logs out of the Deephaven system, this list of PEs is dynamically maintained from DACS.

After this first step is completed, if the user requests a table that has been connected to a DACS Filter Generator, the plugin automatically applies a filter that passes only rows for which the user has the proper DACS entitlements. If the user's set of entitlements changes in DACS, the table is re-filtered with the updated permission set, allowing for intraday permission change compliance.

In addition to passing only rows for which a user has entitlements, the plugin records per-user item usage to an additional system table, DACS.Usage. This table can then be used to generate daily per-user usage reports for compliance auditing.

Installation

In order to function properly, the plugin must be installed on each host that is part of a Deephaven cluster. Installation may be performed from one of two package formats: RPM or TAR archive.

RPM installation (Preferred)

Installing the integration from an RPM is the simplest method. Copy the RPM to the host where it will be installed and run the following command:

sudo dnf install dacs-integration-<version>.x86_64.rpm

TAR installation

The tarball installation supports systems where security policies do not allow the dnf package manager to be run with sudo, or are otherwise too restrictive to permit installation from an RPM. Note that all of the commands below should be executed as irisadmin (not root), either directly as that user or by prefixing each command with sudo -u irisadmin.

  1. First, copy the tar to the server using your preferred method (e.g., rsync, scp, or sftp). For example:
    rsync -avz --rsync-path='$([ -d /tmp/dacs-integration/ ] || mkdir -p /tmp/dacs-integration/) && rsync' -e "ssh -A -i [~/.ssh/your_key_here]" /dh/ws1/dacs-integration/build/distributions/Dacs-integration-[version]-Manual.tgz [your_IP_address]:/tmp/dacs-integration/
    
  2. Then ssh into your remote machine: ssh -A -i [~/.ssh/your_key_here] [your_IP_address]
  3. Next, prepare the installation directory:
    rm -rf /etc/sysconfig/illumon.d/plugins/dacs-integration
    mkdir -p /etc/sysconfig/illumon.d/plugins/dacs-integration
    
  4. Then, extract the tar into that directory:
tar -xzf dacs-integration-<version>-Manual.tgz -C /etc/sysconfig/illumon.d/plugins/dacs-integration
    
    Ensure that Deephaven is properly configured to pull schemas from plugins. Depending on your installation version, this setting can be in one of two files:
      • /etc/sysconfig/illumon.d/resources/IRIS-CONFIG.prop
      • /etc/sysconfig/illumon.d/resources/iris-common.prop (Deephaven v1.20190322 or later)
  5. Look for the property SchemaConfig.resourcePath.plugins. If this is present, you do not need to take further action. If it is missing, add the following to the file: SchemaConfig.resourcePath.plugins=/etc/sysconfig/illumon.d/plugins/*/schema/
  6. Next, run the installation configuration scripts:
    /etc/sysconfig/illumon.d/plugins/dacs-integration/bin/reinstall.sh
    
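Step 5 above can be scripted as an idempotent check. The sketch below operates on a scratch copy of the properties file for illustration; in production, point PROP_FILE at your real IRIS-CONFIG.prop or iris-common.prop:

```shell
# Sketch: ensure SchemaConfig.resourcePath.plugins is present in the
# properties file. PROP_FILE is a scratch copy for illustration; in
# production, point it at IRIS-CONFIG.prop or iris-common.prop.
PROP_FILE=/tmp/example-iris-config.prop
touch "$PROP_FILE"

if grep -q '^SchemaConfig\.resourcePath\.plugins=' "$PROP_FILE"; then
    echo "property already present; no action needed"
else
    # Single quotes keep the * glob literal in the written property.
    echo 'SchemaConfig.resourcePath.plugins=/etc/sysconfig/illumon.d/plugins/*/schema/' >> "$PROP_FILE"
    echo "property added"
fi
```

Running the check a second time takes no action, so it is safe to include in an automated install script.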

Configuration

There are only a few configuration options required for the integration to function. Note that properties may be defined in one of two ways, depending on your Deephaven installation.

Older installations may use the "split and include" pattern of configuration. In this variant, there will be several .prop files in /etc/sysconfig/illumon.d/resources. We are only concerned with the following:

/etc/sysconfig/illumon.d/resources/IRIS-CONFIG.prop
/etc/sysconfig/illumon.d/resources/iris-authentication-server.prop

Newer installations use the common properties file, where there is a single file (/etc/sysconfig/illumon.d/resources/iris-common.prop) that is internally separated into sections that are relevant for each configuration.
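A quick way to tell which layout an installation uses is to check for the common properties file. This is a sketch; the helper name layout_for is ours, not part of the product:

```shell
# Sketch: report which configuration layout a resources directory uses.
# layout_for is a hypothetical helper, not part of the Deephaven product.
layout_for() {
    if [ -f "$1/iris-common.prop" ]; then
        echo "common properties file"
    else
        echo "split and include"
    fi
}

layout_for /etc/sysconfig/illumon.d/resources
```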

  1. First, the plugin must be configured as an Authentication Hook. The following properties should be added to the authentication server properties described above.
    • If the current Deephaven Installation is not already configured with an auth hook, set the following property:
    authentication.server.hooks.class=io.deephaven.dacs.auth.DACSAuthHook
    
    • If the installation is already configured with an auth hook, it must be reconfigured as a compound auth hook as follows:
    authentication.server.hooks.class=io.deephaven.dacs.auth.CompoundAuthHook
    dacs.auth.hooks=io.deephaven.dacs.auth.DACSAuthHook,<Other Auth hook>
    
  2. Next, the DACS configuration properties must be set in the global properties:
    dacs.host=<Your Dacs Host>:<Dacs Port>
    dacs.service=<The DACS Service name to use for permissioning>
    dacs.applicationId=<The ApplicationID assigned to Deephaven by DACS>
    dacs.ignoreUsers=iris,root,superuser
    
  3. After Steps 1-2 are completed, you must restart the authentication server to load the updated properties:
    sudo monit restart authentication_server
    
  4. After Step 3, the plugin will begin to record user permission lists upon login. These can be inspected via query using the following command:
db.liveTable("DACS","PermissionMatrix").where("Date=currentDateNy()")
    

At this point you may add DACS Filter generators to tables via ACLs or directly in queries.

Tailoring

The DACS system derives entitlements from what is called a DACS Lock. This is a potentially complicated combination of (Service Id, PE List, Condition) tuples that cannot be pulled directly from the DACS system. Instead, these DACS Lock objects are provided by the Thomson Reuters streaming APIs when subscriptions are made. Because of this API and registration pattern, the integration plugin must be provided with a mapping from the items to be permissioned to their opaque DACS Lock bytes. For this purpose, the integration includes a table schema called DACS.ItemMap.

This schema consists of three columns:

  • Timestamp - a long column representing an instant in time, in nanoseconds since the epoch
  • Item - a String representing the item to associate with the DACS Lock (e.g., a RIC)
  • LockData - a blob (byte[]) of the DACS Lock bytes provided by the subscription
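As a sanity check on Timestamp values, an epoch-nanosecond timestamp for "now" can be produced at the shell (GNU date):

```shell
# Sketch: the Timestamp column holds nanoseconds since the Unix epoch.
# GNU date can emit this directly, which is handy for sanity-checking
# values written by a logger.
now_nanos=$(date +%s%N)
echo "Timestamp: $now_nanos"
echo "digits: ${#now_nanos}"   # current epoch-nanosecond values have 19 digits
```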

The integration includes a generated logger class, io.deephaven.dacs.gen.ItemMapFormat1Logger, that may be used to write this table from a Java application. C# and C++ loggers may also be used, as long as they conform to the DACS.ItemMap schema.

Using the provided Java logger

If desired, the supplied Java logger class may be used to write this table. This class should not be instantiated directly; it should instead be created using the built-in logger factory facilities. The following is an example for the Legacy engine:

import com.fishlib.configuration.Configuration;
import com.fishlib.io.logger.Logger;
import com.illumon.iris.db.tables.utils.DBDateTime;
import com.illumon.iris.db.util.logging.EventLoggerFactory;
import io.deephaven.dacs.gen.ItemMapFormat1Logger;

// Create the logger via the factory rather than instantiating it directly.
public ItemMapFormat1Logger createLogger(Logger log) {
	final Configuration config = Configuration.getInstance();
	return EventLoggerFactory.createIntradayLogger(config,
		"DACSAuth",
		log,
		ItemMapFormat1Logger.class,
		config.getServerTimezone().getID());
}

// Log one (Timestamp, Item, LockData) row to DACS.ItemMap.
public void exampleLog(ItemMapFormat1Logger logger, String item, byte[] lockBytes) {
	logger.log(DBDateTime.now().getNanos(), item, lockBytes);
}

Using this option requires no additional configuration. The created binary log files will be written into /var/log/deephaven/binlogs and will be automatically picked up by the default tailer instance.

Logging From C++ or C#

The DACS.ItemMap schema includes a definition for a C# logger, which can be generated via generate_loggers. To write C++ loggers, refer to the C++ logger documentation, matching the provided schema.

These logger types allow users to place the resulting binary log files anywhere on the filesystem, with any desired file name. To guarantee that these binary logs are imported by a tailer, an additional tailer configuration must be set up. Note that it is recommended to use a file name of the format <Namespace>.<Tablename>.Main.bin.<Date>.
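For example, a conforming file name for the DACS.ItemMap table can be built as follows. This is a sketch; the ISO yyyy-mm-dd date format is an assumption, so match whatever your tailer configuration actually expects:

```shell
# Sketch: build a binary log file name following the recommended
# <Namespace>.<Tablename>.Main.bin.<Date> pattern.
# The yyyy-mm-dd date format is an assumption; match your tailer config.
NAMESPACE=DACS
TABLENAME=ItemMap
LOG_DATE=$(date +%Y-%m-%d)
BINLOG_NAME="${NAMESPACE}.${TABLENAME}.Main.bin.${LOG_DATE}"
echo "$BINLOG_NAME"
```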

The integration comes packaged with a set of template tailer configuration files that must be adjusted to match where the C++ or C# logger will write its binary log files.

  1. First, edit the template tailer configuration file: /etc/sysconfig/illumon.d/plugins/dacs-integration/samples/tailerConfigDACS.xml
  2. Change the attribute logDirectory to the directory where the logger will write the binary log files.
  3. Move this file into /etc/sysconfig/illumon.d/resources.
  4. Edit the included tailer properties file: /etc/sysconfig/illumon.d/plugins/dacs-integration/global/props/tailerdacs.prop
  5. Change the property log.tailer.enabled.dacs=false to true.
  6. Finally, restart the tailer:
    sudo monit restart tailerdacs
    
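Steps 2 and 5 above can be scripted. The sketch below operates on scratch copies with stand-in contents (the real files ship with the plugin); in production, edit the real tailerConfigDACS.xml and tailerdacs.prop in place as irisadmin, and substitute your own log directory for the example path:

```shell
# Sketch: automate steps 2 and 5 on scratch copies of the files.
# The file contents and /data/dacs/binlogs path below are stand-ins.
CONF=/tmp/tailerConfigDACS.xml
PROPS=/tmp/tailerdacs.prop
printf '<Log logDirectory="/var/log/placeholder"/>\n' > "$CONF"
printf 'log.tailer.enabled.dacs=false\n' > "$PROPS"

# Step 2: point logDirectory at where the C++/C# logger writes binlogs.
sed -i 's|logDirectory="[^"]*"|logDirectory="/data/dacs/binlogs"|' "$CONF"

# Step 5: enable the DACS tailer.
sed -i 's|^log\.tailer\.enabled\.dacs=false$|log.tailer.enabled.dacs=true|' "$PROPS"

grep -o 'logDirectory="[^"]*"' "$CONF"
grep '^log\.tailer\.enabled\.dacs=' "$PROPS"
```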

Attaching filters

Once the configuration and tailoring of the integration is complete, filters can be added to tables to apply entitlements and record usages.

Filters as ACLs

One of the simplest ways to apply entitlements is using table ACLs. The integration contains one ACL class, io.deephaven.dacs.acl.DACSFilterGenerator. This class requires a single constructor parameter that selects the column containing the items to filter on.

An example ACL to use in the Table ACL editor would be:

new io.deephaven.dacs.acl.DACSFilterGenerator("USym")

This ACL will filter the table it is applied to by checking entitlements against the USym column.

Programmatic filters by query

Table ACLs are appropriate when users access tables from their own queries or a standalone console. When a single query provides data to multiple users, the query writer must apply appropriate ACLs to the tables served by the query to ensure that proper DACS entitlements are applied to the derived tables.

The following groovy closure can be used as a starting template:

addDACSFilter = { Table t, String filterColumn ->
	def filterProvider = t.getAttribute(Table.ACL_ATTRIBUTE);
	if(filterProvider == null) {
		filterProvider = TableFilterProvider.FACTORY.create(db, t)
	}
	filterProvider.addFilter("allusers", 'new io.deephaven.dacs.acl.DACSFilterGenerator("'+filterColumn+'")')
}

This is applied to tables in the following manner:

addDACSFilter(myDerivedTable, "USym")

Usage and compliance

Many data providers require per-user usage auditing for feed access. The integration plugin writes a table called DACS.Usage, which records, per user and per item, all items that were provided by Deephaven. This table has four columns: Date, Timestamp, User, and Usages.

The Usages column is an array of Strings that can be ungrouped to create a list of all visited items. Below is a simple Legacy query that produces a set of plaintext files containing all unique usages per user:

usages = db.i("DACS", "Usage").where("Date=currentDateNy()")
usagesByUser = usages.byExternal("User")
		.asTable()
		.ungroup()
		.selectDistinct("Usages")
		.renameColumns("Usage=Usages")
		.asTableMap()

for(Object k : usagesByUser.getKeySet()) {
	final String userName = (String)k;
	final Table t = usagesByUser.get(userName);

	writeCsv(t, "/db/TempFiles/audit/usages."+userName+"."+currentDateNy()+".csv")
}