dhconfig configuration tool

Deephaven's dhconfig tool in /usr/illumon/latest/bin simplifies management of the various configurations and services of the system. It is the primary mechanism for managing configuration in Deephaven. This tool handles schemas, properties, data routing, service registry configuration, controller (persistent query) configuration, and ACLs, and can display table checkpoint files.

In the text below, /usr/illumon/latest/bin/dhconfig is shortened to dhconfig for brevity, as if /usr/illumon/latest/bin has been added to the shell PATH with PATH=$PATH:/usr/illumon/latest/bin.

Usage:

dhconfig [schemas|properties|routing|dis|checkpoint|serviceregistry|pq|acls] [import|export|list|delete|help|validate|add|show-claims|status|restart|stop|reload|selection-provider|leader|users|groups|publickeys|rows|columns|inputtables] [arguments]

The first argument is a configuration data type: schemas, properties, routing, dis, checkpoint, serviceregistry, pq, or acls.

The second argument is an action, for example:

  • import
    Add or update configuration to the system
  • export
    Export or print configuration
  • list
    Show what configuration exists in the system
  • delete
    Remove configuration from the system
  • validate
    Check a configuration file for errors (e.g., before importing)
  • help
    Print usage information, as specific as possible given the arguments provided

Note

The actions above are not valid for every command. See the documentation for each command for specifics.

dhconfig is designed to support exploration, and each command supplies context-sensitive help. For example, dhconfig help prints top-level help, which includes configuration data types and common actions, and dhconfig schemas help prints usage specific to schemas. All actions, configuration data types, and arguments may be abbreviated to any unambiguous prefix. Help and usage information always prints the full canonical values.

Some common options for the command follow. Note that not all arguments are applicable in all contexts.

  • --etcd
    If this argument is given, the command is executed directly against etcd. Authentication is via file system permissions, and this is only suitable on certain nodes. If omitted, the command is executed through the configuration server. This is useful in contexts when the configuration server might not be running.
  • --directory
    This specifies an input or output directory where files will be located. It may be included multiple times in some contexts.
  • --file
    This identifies input files or data items to be included. It may be included multiple times in most contexts. Most commands treat trailing arguments as --file arguments.
  • --force
    This indicates that data should be overwritten. In most cases, if a file already exists, it won't be updated unless this flag is included. This option can be used to override certain errors.
  • --verbose
    Print some progress messages to stdout, and the full text of exceptions.
  • --help
    This produces a usage message with detail appropriate to the configuration data type and command if given.

All commands have more specific options. Detailed usage is provided when --help is set or if the arguments are incomplete or invalid.

Logging

The dhconfig script creates a log file in /var/log/deephaven/misc if the current user has write permission there, and in /tmp if not. Actions are logged to the DbInternal AuditEventLog table.

Authentication

All commands support a common authentication framework.

In general, authentication is not required for operations that don't change the system configuration (e.g., export, list, validate). If authentication options are provided, they are validated whether they are required or not.

If --etcd is specified, authentication is implied by file system permissions and the authentication arguments do not apply. You must run dhconfig as a privileged user when using --etcd.

  • --user
    Authenticate as the specified Deephaven user, with the password provided interactively. The operation is checked against the groups that apply to that Deephaven user.
  • --user and --pwfile
    Authenticate as the specified Deephaven user, using the password file provided.*
  • --key
    This specifies a private key file identifying the user. The user running the dhconfig command must have read permission on the key file.
  • property AuthenticationClientManager.defaultPrivateKeyFile
    This property is equivalent to --key, if specified on the process command line.

* You can create a password file with this command: echo -n "your-password" | base64 > password.txt.
Be sure to protect the file with appropriate filesystem permissions.

As a convenience, dhconfig attempts to use the default iris private key file if it is readable. Executing with sudo or sudo -u irisadmin automatically authenticates by setting -DAuthenticationClientManager.defaultPrivateKeyFile=/etc/sysconfig/illumon.d/auth/priv-iris.base64.txt.
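
For example, each of the following authenticates an import in a different way (the user name, password file, and key path are placeholders):

/usr/illumon/latest/bin/dhconfig properties import /tmp/my.prop --user some-admin-user
/usr/illumon/latest/bin/dhconfig properties import /tmp/my.prop --user some-admin-user --pwfile ~/pw.b64
/usr/illumon/latest/bin/dhconfig properties import /tmp/my.prop --key /path/to/some-admin-user-priv-key.base64.txt
sudo -u irisadmin /usr/illumon/latest/bin/dhconfig properties import /tmp/my.prop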

Configuration data types

properties

The properties (or property) configuration data type works with properties files stored in the system. properties also handles other types of configuration files, such as status-dashboard-defaults.json. The following additional arguments are supported for some actions:

  • --rename
    When importing from a single file, assign a new name when stored in the system.
  • --update-on-same
    When importing a file, update the system even if the file contents are unchanged.
  • --validate
    Check a properties file for errors without importing. This process parses the file to expose syntax errors and checks for common user errors, such as smart quotes and non-breaking spaces that can be inserted via cut-and-paste from formatted sources.
  • --force
    When a properties file is imported, dhconfig also validates the file as with --validate. The --force option can be used to import a file that has apparent errors. Use with care because this can cause problems later.

Example command lines:

List all properties files in the system:

/usr/illumon/latest/bin/dhconfig properties list

Export all properties files to /tmp:

/usr/illumon/latest/bin/dhconfig properties export --directory /tmp

Export all properties files to /tmp, bypassing configuration service:

/usr/illumon/latest/bin/dhconfig properties export --directory /tmp --etcd

Print iris-environment.prop:

/usr/illumon/latest/bin/dhconfig properties export iris-environment.prop

Print iris-environment.prop to a file with a different name:

/usr/illumon/latest/bin/dhconfig properties export iris-environment.prop > /tmp/my_properties.prop

Export iris-environment.prop and iris-endpoints.prop:

/usr/illumon/latest/bin/dhconfig properties export --directory /tmp iris-environment.prop iris-endpoints.prop
/usr/illumon/latest/bin/dhconfig properties export --directory . --file iris-environment.prop --file iris-endpoints.prop

Note

Import and delete actions require authentication, with sudo, --key, or --user.

Import iris-environment.prop from /tmp:

sudo -u irisadmin /usr/illumon/latest/bin/dhconfig properties import /tmp/iris-environment.prop
/usr/illumon/latest/bin/dhconfig properties import /tmp/iris-environment.prop --user iris

Import my_properties.prop as iris-environment.prop:

sudo -u irisadmin /usr/illumon/latest/bin/dhconfig properties import --file /tmp/my_properties.prop --rename iris-environment.prop

Import several properties files from /tmp:

sudo -u irisadmin /usr/illumon/latest/bin/dhconfig properties import /tmp/p1.prop /tmp/p2.prop /tmp/p3.prop
sudo -u irisadmin /usr/illumon/latest/bin/dhconfig properties import --directory /tmp p1.prop p2.prop p3.prop

Validate a file without importing it:

/usr/illumon/latest/bin/dhconfig properties validate /tmp/my_properties.prop

Delete a properties file:

sudo -u irisadmin /usr/illumon/latest/bin/dhconfig properties delete --file typo.prop

routing

The routing (or datarouting) configuration data type is used to work with the data routing configuration. A common procedure is to export the data routing to a file, edit the file, and then import the changes.

Note

Many Deephaven processes respond dynamically to data routing configuration changes.

Sample commands to edit the data routing configuration:

/usr/illumon/latest/bin/dhconfig routing export --file /tmp/routing.yml
vi /tmp/routing.yml
sudo -u irisadmin /usr/illumon/latest/bin/dhconfig routing import --file /tmp/routing.yml
Example command lines:

Print the data routing configuration:

/usr/illumon/latest/bin/dhconfig routing export

Export data routing configuration to /tmp/routing.yml:

/usr/illumon/latest/bin/dhconfig routing export --file /tmp/routing.yml
/usr/illumon/latest/bin/dhconfig routing export /tmp/routing.yml

Check /tmp/routing.yml for errors:

/usr/illumon/latest/bin/dhconfig routing validate --file /tmp/routing.yml
/usr/illumon/latest/bin/dhconfig routing validate --file /tmp/routing.yml --verbose

Note

Import actions require authentication, with sudo, --key, or --user.

The authenticated user must be in a group authorized for data routing service changes. The superusers group (iris-superusers by default) always has write permission. Additional groups can be given permission with the following property, which must be visible to the configuration server:

DataRoutingService.writers=group1,group2
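
For example, a minimal sketch that grants the group dba write access by adding the property to iris-environment.prop (the group name is a placeholder, and the choice of properties file is an assumption; any file read by the configuration server works):

/usr/illumon/latest/bin/dhconfig properties export iris-environment.prop > /tmp/iris-environment.prop
echo "DataRoutingService.writers=dba" >> /tmp/iris-environment.prop
sudo -u irisadmin /usr/illumon/latest/bin/dhconfig properties import /tmp/iris-environment.prop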

Import data routing configuration from /tmp/routing.yml:

sudo -u irisadmin /usr/illumon/latest/bin/dhconfig routing import --file /tmp/routing.yml

Import data routing configuration from /tmp/routing.yml, bypassing configuration server (requires sudo):

sudo -u irisadmin /usr/illumon/latest/bin/dhconfig routing import --file /tmp/routing.yml --etcd

Note

With dhconfig routing, you cannot validate or import changes to the routing file and additional DIS files simultaneously. Instead, use dhconfig dis with --routing-file.

dis

The dis configuration data type is used to work with Data Import Server (DIS) configurations. The data routing configuration contains DIS configurations in the dataImportServers section (see dhconfig routing and Data routing YAML format). This command manages DIS configurations directly, separately from the main routing configuration. For most additional DIS configurations, this is much more convenient than editing the data routing configuration itself.

Note

Import, add, and delete actions require authentication.

The authenticated user must be in a group authorized to make data routing service changes. The superusers group (iris-superusers by default) always has write permission. Additional groups can be given permission with the following property, which must be visible to the configuration server:

DataRoutingService.writers=group1,group2

Note

Many Deephaven processes respond dynamically to data routing configuration changes.

Note

The names of separately configured DISes have additional constraints that do not apply to those configured in the main file. Avoid using special characters in DIS names.

The following additional arguments are supported:

import loads one or more DIS configurations from a file or files. If the file contains multiple configurations, they must be separated by --- lines.

  • --force
    Import the data import servers even if configurations with the same names already exist.
  • --clobber
    Delete all existing data import server configurations, and replace them with the set given in this command.
  • --ignore-errors
    With --clobber, set the configuration even if the existing or new configuration has errors. Use with extreme caution.
  • --routing-file <arg>
    With --clobber, include the given routing file along with the data import server configurations.
  • --file
    Input file(s) to import. It may be included multiple times in most contexts. Trailing arguments are treated as --file arguments.
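
For example, a sketch that assembles a multi-configuration import file from two single-configuration files (/tmp/dis_a.yml and /tmp/dis_b.yml are placeholders), separated by a --- line, then validates and imports it:

{ cat /tmp/dis_a.yml; echo '---'; cat /tmp/dis_b.yml; } > /tmp/import_servers.yml
dhconfig dis validate /tmp/import_servers.yml
dhconfig dis import /tmp/import_servers.yml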

add defines and adds a new DIS configuration:

  • --name <arg>
    The name of a data import server configuration to add.
  • --claim <arg>
    Claim the specified namespace or namespace.tableName for this import server. It may be specified multiple times.
  • --force
    Add the data import server even if a configuration with this name already exists.

list lists the data import server configurations in the system:

  • --all
    List all data import server configurations, including core configurations.
  • --long
    Include claims made by all listed data import servers.

export prints or exports the data import server configurations in the system:

  • --directory
    Specify the directory for writing files. If specified, data import server configurations are written to individual files in this directory.
  • --file
    Export the specified configurations to this filename. Optional; defaults to stdout.
  • --ignore-errors
    Export the configuration(s) even when there are parsing errors in the stored configurations. Only valid with --etcd.
  • --all
    Include core data import server configurations in the export. WARNING: This creates a configuration that cannot be imported without editing the main data routing file.

delete removes one or more data import server configurations from the system:

  • --force
    Really delete the indicated configurations. Required for safety.
  • --ignore-errors
    Delete the configuration(s) even if the existing or new configuration has errors. Use with extreme caution.

validate checks a file or files for errors without importing them:

  • --file
    Input file(s) to validate. It may be included multiple times in most contexts. Trailing arguments are treated as --file arguments.
  • --routing-file <arg>
    Validate including the given routing file along with the data import server configurations.

show-claims shows existing data import server claims:

  • --namespace
    Include only the given namespace(s). Multiple may be set, separated by spaces.
  • --table
    Include only the given table (requires --namespace).

Example command lines:

List separately configured data import servers in routing configuration:

dhconfig dis list

List all configured data import servers in routing configuration (including those defined in the main routing file):

dhconfig dis list --all

Print separately configured data import servers:

dhconfig dis export

Export all separately configured data import servers to a single file:

dhconfig dis export --file /tmp/import_servers.yml

Export all separately configured data import servers to individual files (the directory must exist):

dhconfig dis export --directory /tmp/dises

Export a data import server configuration that is defined in the routing file. All anchors are resolved, and the file is suitable for import:

dhconfig dis export --name disname --file /tmp/disname.yml --all

Verify that the files parse, and will not cause errors when added to the routing configuration:

dhconfig dis validate /tmp/disa.yml /tmp/disb.yml
dhconfig dis validate /tmp/dises/*.yml

Note

Import, add, and delete actions require authentication, with sudo, --key, or --user.

Import data import server configuration(s) from /tmp/import_servers.yml:

dhconfig dis import --file /tmp/import_servers.yml

Import data import server configuration from files in /tmp/import_servers:

dhconfig dis import /tmp/import_servers/*.yml

Import data routing configuration from /tmp/import_servers.yml, bypassing configuration server (requires sudo):

dhconfig dis import --file /tmp/import_servers.yml --etcd

Replace the set of data import server configurations with those in the file /tmp/import_servers.yml; this can be used to repair an invalid configuration (requires sudo):

dhconfig dis import /tmp/import_servers.yml --clobber --etcd

Replace the entire data routing configuration with the routing file /tmp/routing.yml and the set of data import server configurations from /tmp/import_servers.yml; this can be used to repair an invalid configuration:

dhconfig dis import --clobber --routing-file /tmp/routing.yml --file /tmp/import_servers.yml

When the existing data routing configuration has errors, replace the entire data routing configuration with the routing file /tmp/routing.yml and the set of data import server configurations from /tmp/import_servers.yml (requires sudo). This can be used to repair a damaged configuration.

Warning

Use with extreme caution because this can create an invalid configuration.

dhconfig dis import --clobber --routing-file /tmp/routing.yml --file /tmp/import_servers.yml --etcd --ignore-errors

Define and import a data import server configuration claiming several namespaces:

dhconfig dis add --name dis_for_AAA --claim AAA1 --claim AAA2

Define and import a data import server configuration claiming several tables:

dhconfig dis add --name dis_for_BBB --claim Namespace1.BBB --claim Namespace2.BBB

Delete data import servers from routing configuration:

dhconfig dis delete --name extra_dis --name extra_dis2 --force
dhconfig dis delete extra_dis extra_dis2 --force

Show existing data import server claims:

dhconfig dis show-claims

Show claims for the namespaces AAA, BBB, and CCC:

dhconfig dis show-claims --namespace AAA BBB CCC

Show claims for the table AAA.BBB:

dhconfig dis show-claims --namespace AAA --table BBB

schemas

The schemas configuration data type is used to manage schemas in the system, with actions import, export, list, and delete. The following additional arguments are supported for some actions:

  • --namespace
    Limit processing to this namespace. Can be specified multiple times.
  • --namespaceset
    Limit processing to this namespace set (User or System). Can be specified multiple times.
  • --force
    Overwrite existing schemas when importing.
  • --skip-errors
    Attempt to process all eligible schemas, even when errors are encountered. If omitted, processing stops at the first error.
  • --lenient-validation
    It is possible that classes used in a schema are not available when reading a schema. This is a fatal error unless this option is specified. Validation can be disabled entirely by setting the property SchemaXmlParser.disableValidation=true.
  • --compile
    Listeners in the schema are compiled in the validation step by default, and when this option is specified.
  • --no-compile
    Do not compile listeners while importing a schema. This can be used to import schemas with apparent errors. Use with care: if the Data Import Server cannot compile the listeners, data cannot be ingested.

Example command lines:

List all namespaces:

/usr/illumon/latest/bin/dhconfig schemas list --operate-on-namespace

List all schemas in namespace DbInternal:

/usr/illumon/latest/bin/dhconfig schemas list --namespace DbInternal

List all System schemas, bypassing configuration server (requires sudo):

sudo -u irisadmin /usr/illumon/latest/bin/dhconfig schemas list --namespaceset System --etcd

Export all schemas in namespace DbInternal to /tmp/schemas:

/usr/illumon/latest/bin/dhconfig schemas export --namespace DbInternal --directory /tmp/schemas

Print a schema to stdout:

/usr/illumon/latest/bin/dhconfig schemas export DbInternal.ProcessEventLog

Note

Import and delete actions require authentication, with sudo, --key, or --user.

Deploy a (new) single schema:

sudo -u irisadmin /usr/illumon/latest/bin/dhconfig schemas import --file /tmp/schemas/LearnDeephaven.StockQuotes.schema

Deploy a (new) single schema, skipping all data validation (use with care):

sudo -u irisadmin /usr/illumon/latest/bin/dhconfig schemas import --file /tmp/schemas/LearnDeephaven.StockQuotes.schema --no-compile --lenient

Deploy a (new) single schema directly to etcd, bypassing the configuration server:

sudo -u irisadmin /usr/illumon/latest/bin/dhconfig schemas import --etcd --file /tmp/schemas/LearnDeephaven.StockQuotes.schema
sudo -u irisadmin /usr/illumon/latest/bin/dhconfig schemas import --etcd /tmp/schemas/LearnDeephaven.StockQuotes.schema

Re-deploy the ExampleNamespace schemas:

sudo -u irisadmin /usr/illumon/latest/bin/dhconfig schemas import --directory /tmp/schemas --namespace ExampleNamespace --force

Delete MyTable1 and MyTable2 from namespace MyNamespace:

/usr/illumon/latest/bin/dhconfig schemas delete --file MyNamespace.MyTable1 --file MyNamespace.MyTable2 --force --user some-admin-user --pwfile ~/pw.txt
sudo -u irisadmin /usr/illumon/latest/bin/dhconfig schemas delete MyNamespace.MyTable1 MyNamespace.MyTable2 --force

Delete namespace MyNamespace:

sudo -u irisadmin /usr/illumon/latest/bin/dhconfig schemas delete --operate-on-namespace MyNamespace --force

pq

The pq configuration data type is used to interact with the Persistent Query Controller. You may import, export, delete, stop, restart, and display the status of queries using the following actions. Authentication is required for most pq actions.

All actions except for reload and leader support the following options:

  • -D, --include-non-displayable
    Include non-displayable (helper) queries. Non-displayable queries are excluded if not set.
  • -T, --include-temporary
    Include temporary queries. Temporary queries are excluded if not set.
  • -n, --name <arg>
    Name of the query to filter. Multiple may be set, separated by spaces.
  • -o, --owner <arg>
    Owner of the queries to filter. Multiple may be set, separated by spaces.
  • -s, --serial <arg>
    Serial IDs to operate on. Multiple may be set, separated by spaces. When set, this option overrides all other filtering options.

Import queries

The import action imports a set of Persistent Queries from an XML file. This command requires the --file option to specify the input file, and accepts the following options:

  • -C, --continue-after-error
    Continue after an error. Processing halts if not set.
  • --dry-run <arg>
    Dry run the command. Specify basic for local validation, or full for local and controller verification.
  • -f, --file <arg>
    Name of the file to import from.
  • --new-owner <arg>
    New owner to be used on an import instead of the existing query owner.
  • --override-xml <arg>
    A file with partial configuration overrides that supersede config values in the full XML file.
  • -R, --retain-serial
    Retain the serial of an imported query. Unless --replace-existing is provided, imports fail if the serial already exists. When not set, all imported queries are assigned new serials.
  • --server-override <arg>
    Update existing server names with new ones (oldName:newName). May be set multiple times, separated by spaces.
  • -x, --replace-existing
    When --retain-serial is set, imported queries replace any matching serials.
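
For example (the file path and owner name are placeholders):

sudo -u irisadmin /usr/illumon/latest/bin/dhconfig pq import --file /tmp/queries.xml --dry-run full
sudo -u irisadmin /usr/illumon/latest/bin/dhconfig pq import --file /tmp/queries.xml --new-owner some-user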

Export queries

The export action exports queries from the system to an XML file.

  • -f, --file <arg>
    Name of the file to import from, export to, or write status to.
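
For example (the owner name and file path are placeholders):

sudo -u irisadmin /usr/illumon/latest/bin/dhconfig pq export --file /tmp/queries.xml --owner some-user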

Delete queries

The delete action stops and permanently deletes queries from the system.

  • --allow-nuke
    Allow delete to run without any parameters, deleting all visible queries.
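
For example (the query names are placeholders):

sudo -u irisadmin /usr/illumon/latest/bin/dhconfig pq delete --name OldQuery1 OldQuery2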

Stop queries

The stop action stops the selected queries if they are running.

Note

The stop action is not pertinent to individual replicas or slots, only the entire group.

  • -C, --continue-after-error
    Continue after an error. Processing halts if not set.
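
For example (the query names are placeholders):

sudo -u irisadmin /usr/illumon/latest/bin/dhconfig pq stop --name SomeQuery1 SomeQuery2 --continue-after-error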

Restart queries

The restart action stops the selected queries if they are running, then starts them all.

  • -C, --continue-after-error
    Continue after an error. Processing halts if not set.
  • -r, --replica <arg>
    Replica numbers to restart. Multiple may be set, separated by spaces.
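
For example (the query names and replica numbers are placeholders):

sudo -u irisadmin /usr/illumon/latest/bin/dhconfig pq restart --name SomeQuery
sudo -u irisadmin /usr/illumon/latest/bin/dhconfig pq restart --name SomeReplicatedQuery --replica 1 2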

Display query status

The status action displays the current status of queries either as a table or as JSON, optionally to a file. When a file is specified, JSON is written by default.

  • -f, --file <arg>
    Name of the file to write status to. JSON is the default format.
  • -j, --as-json
    Write status to stdout as a JSON data stream instead of a formatted table.
  • -r, --replica <arg>
    Replica numbers to display. Multiple may be set, separated by spaces.
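
For example (the owner name and file path are placeholders):

sudo -u irisadmin /usr/illumon/latest/bin/dhconfig pq status --owner some-user
sudo -u irisadmin /usr/illumon/latest/bin/dhconfig pq status --file /tmp/pq-status.json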

Reload Configuration

The reload action commands the Controller to reload its configuration from disk.
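
For example:

sudo -u irisadmin /usr/illumon/latest/bin/dhconfig pq reload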

Determine Leader Controller

The leader action prints the IP of the leader controller's host and the port on which it is listening. This is useful, for example, to determine which host is running the currently active controller to examine the logs in /var/log/deephaven/controller.

sudo -u irisadmin /usr/illumon/latest/bin/dhconfig pq leader

Server Selection Provider

The selection-provider action queries the active Server Selection Provider, which provides internal status for debugging in an arbitrary string format. The default Simple Server Selection Provider produces JSON.

sudo -u irisadmin /usr/illumon/latest/bin/dhconfig pq selection-provider

You can also issue commands to the provider. The default provider allows backend servers to be marked up or down. For example:

sudo -u irisadmin /usr/illumon/latest/bin/dhconfig pq selection-provider --command up --node Query_1
sudo -u irisadmin /usr/illumon/latest/bin/dhconfig pq selection-provider --command down --node Query_2

The controller responds with JSON indicating the changes made to each backend query server's state.

You can also issue JSON-formatted commands to the provider using the --command-json argument instead of the --command option. Using a raw JSON argument is necessary for a custom server selection provider that uses different fields than the default simple server selection provider.

serviceregistry

The serviceregistry (or registry) configuration data type is used to work with the service registry configuration.

Example command lines:

Show what configuration exists in the system:

/usr/illumon/latest/bin/dhconfig registry list

Export the service registry configuration in the system to /tmp/registry.txt:

/usr/illumon/latest/bin/dhconfig registry list --file /tmp/registry.txt

Show all service configurations for one process (db_dis):

/usr/illumon/latest/bin/dhconfig registry list --process db_dis
/usr/illumon/latest/bin/dhconfig registry list db_dis

Show all service configurations for multiple processes (db_dis and db_rta):

/usr/illumon/latest/bin/dhconfig registry list --process db_dis --process db_rta
/usr/illumon/latest/bin/dhconfig registry list db_dis db_rta

Export individual or multiple service registry configurations in the system to /tmp/registry.txt:

/usr/illumon/latest/bin/dhconfig registry list --process db_dis --file /tmp/registry.txt
/usr/illumon/latest/bin/dhconfig registry list db_dis --file /tmp/registry.txt
/usr/illumon/latest/bin/dhconfig registry list --process db_dis --process db_rta --file /tmp/registry.txt
/usr/illumon/latest/bin/dhconfig registry list db_dis db_rta --file /tmp/registry.txt

checkpoint

The checkpoint configuration data type is used to examine table checkpoint files on disk. Authentication options do not apply to this command; the executing user must have read access to the files to be examined. Checkpoint files are named table.size by default. File arguments may be the checkpoint file itself, or the directory containing table.size.

The following additional arguments are supported for some actions:

  • -h, --help
    Print help for a checkpoint command.
  • -csv, --csv
    Print checkpoint record summaries in CSV format.
  • -head, --header
    Print a header with the CSV output.
  • -v, --verbose
    Adds some status logging and causes any exceptions to be printed in full.
  • -e, --continue-on-error
    Process all eligible checkpoint files even if there are errors processing some of them. If not specified, the program halts after the first error.
  • -f, --file
    Specifies a checkpoint file to print (table.size).
  • -d, --directory
    Specifies a directory, to be searched recursively for checkpoint files.
  • -l, --list
    Read arguments (files and directories) from a text file and process them as if they were specified on the command line.

Example command lines:

Display a specific checkpoint file:

/usr/illumon/latest/bin/dhconfig checkpoint list --file /db/path.to.table/table.size
/usr/illumon/latest/bin/dhconfig checkpoint list --file /db/Intraday/DbInternal/ProcessEventLog/some_partition/some_date/ProcessEventLog
/usr/illumon/latest/bin/dhconfig checkpoint list /db/Intraday/DbInternal/ProcessEventLog/some_partition/some_date/ProcessEventLog
/usr/illumon/latest/bin/dhconfig checkpoint list --file /tmp/any.file.name

Display all checkpoint files starting at a directory (recursively searching):

/usr/illumon/latest/bin/dhconfig checkpoint list --directory /db/Intraday/DbInternal/ProcessEventLog
/usr/illumon/latest/bin/dhconfig checkpoint list /db/Intraday/DbInternal/ProcessEventLog

Display a summary of all checkpoint files starting at a directory in CSV format:

/usr/illumon/latest/bin/dhconfig checkpoint list --directory /db/Intraday/DbInternal/ProcessEventLog --csv

Display a summary of all checkpoint files in a list file in CSV format:

find /db/Intraday/DbInternal/ProcessEventLog -name table.size > /tmp/list.txt
/usr/illumon/latest/bin/dhconfig checkpoint list --csv --list /tmp/list.txt

Example output:

$ /usr/illumon/latest/bin/dhconfig checkpoint list /db/Intraday/DbInternal/ProcessEventLog/db_query_server_qa-treasureplus-cluster-query-1_int_illumon_com/2022-07-08/ProcessEventLog/
Checkpoint file '/db/Intraday/DbInternal/ProcessEventLog/db_query_server_qa-treasureplus-cluster-query-1_int_illumon_com/2022-07-08/ProcessEventLog/table.size':

CheckpointRecord[
	version=2,
	TableLocationState[size=7015, lastModificationTime=2022-07-08T17:32:28.691-0400],
	DataFileSizeRecords[
		[name=AuthenticatedUser.dat, size=28060],
		[name=AuthenticatedUser.sym, size=8],
		[name=AuthenticatedUser.sym.bytes, size=4],
		[name=EffectiveUser.dat, size=28060],
		[name=EffectiveUser.sym, size=8],
		[name=EffectiveUser.sym.bytes, size=4],
		[name=Host.dat, size=28060],
		[name=Host.sym, size=8],
		[name=Host.sym.bytes, size=46],
		[name=Level.dat, size=28060],
		[name=Level.sym, size=40],
		[name=Level.sym.bytes, size=24],
		[name=LogEntry.bytes, size=1107823],
		[name=LogEntry.dat, size=56120],
		[name=Process.dat, size=28060],
		[name=Process.sym, size=32],
		[name=Process.sym.bytes, size=32],
		[name=Timestamp.dat, size=56120]
	],
	SourceFileSizeRecord[name=DbInternal.ProcessEventLog.System.db_query_server_qa-treasureplus-cluster-query-1_int_illumon_com.2022-07-08.bin.2022-07-08.170000.000-0400, size=80605],
	ImportState[
		class=class com.illumon.iris.db.tables.dataimport.logtailer.ImportStateRowCounter,
		details=[
			[nRows, 7015]
		]
	]
]
$ /usr/illumon/latest/bin/dhconfig checkpoint list /db/Intraday/DbInternal/ProcessEventLog --csv --header

filename, version, tableSize, lastModificationTime, lastModificationTimeStr, columnCount, sourceFile, sourceBytes
"/db/Intraday/DbInternal/ProcessEventLog/db_query_server_qa-treasureplus-cluster-query-1_int_illumon_com/2022-07-08/ProcessEventLog/table.size", 2, 33919, 1657316070141, 2022-07-08T21:34:30.141Z, "DbInternal.ProcessEventLog.System.db_query_server_qa-treasureplus-cluster-query-1_int_illumon_com.2022-07-08.bin.2022-07-08.210000.000+0000", 351464
"/db/Intraday/DbInternal/ProcessEventLog/db_merge_server_qa-treasureplus-cluster-infra-1_int_illumon_com/2022-07-08/ProcessEventLog/table.size", 2, 4575, 1657316016391, 2022-07-08T21:33:36.391Z, "DbInternal.ProcessEventLog.System.db_merge_server_qa-treasureplus-cluster-infra-1_int_illumon_com.2022-07-08.bin.2022-07-08.210000.000+0000", 3355
"/db/Intraday/DbInternal/ProcessEventLog/qa-treasureplus-cluster-infra-1_int_illumon_com/2022-07-08/ProcessEventLog/table.size", 2, 50146, 1657316070557, 2022-07-08T21:34:30.557Z, "DbInternal.ProcessEventLog.System.qa-treasureplus-cluster-infra-1_int_illumon_com.2022-07-08.bin.2022-07-08.210000.000+0000", 475383
"/db/Intraday/DbInternal/ProcessEventLog/qa-treasureplus-cluster-query-1_int_illumon_com/2022-07-08/ProcessEventLog/table.size", 2, 474510, 1657316070432, 2022-07-08T21:34:30.432Z, "DbInternal.ProcessEventLog.System.qa-treasureplus-cluster-query-1_int_illumon_com.2022-07-08.bin.2022-07-08.210000.000+0000", 2731761
"/db/Intraday/DbInternal/ProcessEventLog/db_query_server_qa-treasureplus-cluster-query-2_int_illumon_com/2022-07-08/ProcessEventLog/table.size", 2, 46555, 1657316070029, 2022-07-08T21:34:30.029Z, "DbInternal.ProcessEventLog.System.db_query_server_qa-treasureplus-cluster-query-2_int_illumon_com.2022-07-08.bin.2022-07-08.210000.000+0000", 925046
"/db/Intraday/DbInternal/ProcessEventLog/qa-treasureplus-cluster-query-2_int_illumon_com/2022-07-08/ProcessEventLog/table.size", 2, 51152, 1657316073318, 2022-07-08T21:34:33.318Z, "DbInternal.ProcessEventLog.System.qa-treasureplus-cluster-query-2_int_illumon_com.2022-07-08.bin.2022-07-08.210000.000+0000", 184474

acls

The acls configuration data type is used to add, remove, and modify system access controls such as users, passwords, keys, and row and column ACLs. Actions use the DB ACL write server unless the --direct option is set, which executes actions against the backing store directly. Authentication is required for most acls actions.

  • --direct
    Write modifications directly to the backing store, instead of the ACL write service.

Export the ACL database

The export action exports the entire ACL database as an XML file that can be imported with the import action.

  • -f, --file <arg>
    Name of the file to export to.
  • --type
    Type of ACL to export. Allowed values: passwd, columnacls, groupstrategy, inputtableeditors, strategyaccount, tableacls, usergroup, systemuser, publickeys.

Import the ACL database

The import action imports an ACL database into the backing store. You may use the following options to change how imported ACLs interact with existing ones. When no options are set, the action fails if there are any conflicting ACLs. Public keys for certain special system users such as iris, merge, and tdcp are protected and will not be overwritten unless --include-protected is set. The --protect-keys option can protect additional users' keys.

  • --ignore-existing
    Ignore existing entries and continue.
  • -o, --overwrite
    Overwrite any existing entry.
  • --replace-all
    Delete all existing ACLs before importing.
  • --include-protected
    Include protected keys in the import.
  • --protect-keys <arg>
    Users to protect from public key import. System users specified by the AclImporter.protectedUsers property are also protected unless --include-protected is set.
  • --type
    Type of ACL to import. Allowed values: passwd, columnacls, groupstrategy, inputtableeditors, strategyaccount, tableacls, usergroup, systemuser, publickeys.
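
For example, a sketch that backs up the ACL database and restores it (the file paths are placeholders; the import is shown with --file naming the input file, mirroring export):

sudo -u irisadmin /usr/illumon/latest/bin/dhconfig acls export --file /tmp/acl-backup.xml
sudo -u irisadmin /usr/illumon/latest/bin/dhconfig acls export --file /tmp/publickeys.xml --type publickeys
sudo -u irisadmin /usr/illumon/latest/bin/dhconfig acls import --file /tmp/acl-backup.xml --ignore-existing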

Manage Users

The users action lets you manage users of the system. This action has several subcommands:

  • list
    List users and groups.
  • add
    Add a user.
  • delete
    Delete a user.
  • set-password
    Set a user's password.
  • remove-password
    Remove a user's password.
  • set-runas
    Map a Deephaven user to a system user for running PQs.
  • remove-runas
    Remove a Deephaven user to system user mapping.

list

List users and groups. The output may be a table, CSV, or a file. You may use the --name and --group options to filter the results.

  • -g, --group <arg> (optional)
    Groups to display. Multiple may be set, separated by spaces.
  • --include-run-as (optional)
    Include Deephaven to system user mappings in the output.
  • -n, --name <arg> (optional)
    Users to display. Multiple may be set, separated by spaces.
  • -f, --file <arg> (optional)
    Name of the file to write output to. Output is written as CSV.
  • --csv (optional)
    Write output as CSV.

add

Add a user to Deephaven, optionally with a key and password. You may set the --group option to add the new user to additional groups.

See the Using passwords section for help encoding passwords.

  • -n, --name <arg>
    User name to add.
  • -g, --group <arg> (optional)
    Groups to add the user to. Multiple may be set, separated by spaces.
  • --hashed-password <arg> (optional)
    A hashed new password.
  • --new-key <arg> (optional)
    A public key file containing the user's public key.

delete

Delete a user from the system.

  • -n, --name <arg>
    User name to delete.

set-password

Set the password for a user. If no option is specified, you are prompted to enter one in the shell.

Additionally, users may set their own passwords using the --user option without the need for sudo:

/usr/illumon/latest/bin/dhconfig acl user set-password --user myuser

See the Using passwords section for help encoding passwords.

  • --hashed-password <arg> (optional)
    The hashed new password.
  • -n, --name <arg>
    User name to set the password for.

remove-password

Remove a user's password.

  • -n, --name <arg>
    User name to remove the password for.

set-runas

Map a Deephaven user to a system user for running PQs.

  • -n, --name <arg>
    User name to create a mapping for.
  • --system-user <arg>
    The system user to run-as.

remove-runas

Remove a mapping for a Deephaven user to system user.

  • -n, --name <arg>
    User name to remove a mapping for.
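
For example, a sketch that adds a user, sets a password, maps a run-as system user, and lists the result (user, group, and system user names are placeholders):

sudo -u irisadmin /usr/illumon/latest/bin/dhconfig acls users add --name newuser --group somegroup
sudo -u irisadmin /usr/illumon/latest/bin/dhconfig acls users set-password --name newuser --hashed-password $(openssl passwd -apr1 <your-password>)
sudo -u irisadmin /usr/illumon/latest/bin/dhconfig acls users set-runas --name newuser --system-user some-system-user
sudo -u irisadmin /usr/illumon/latest/bin/dhconfig acls users list --name newuser --include-run-as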

Manage Groups

The groups action lets you manage groups. This action has several subcommands:

  • list
    List groups.
  • add-member
    Add users to groups.
  • remove-member
    Remove users from groups.
  • delete
    Delete a group.

list

List groups and their members. The output can be written as a table, CSV, or to a file. You may use the --group option to filter which groups are displayed.

  • --csv (optional)
    Write the output as CSV.
  • -f, --file <arg> (optional)
    Name of the file to write output to.
  • -g, --group <arg> (optional)
    Groups to display. Multiple may be set, separated by spaces.

add-member

Add users to one or more groups.

  • -g, --group <arg>
    Groups to add to. Multiple may be set, separated by spaces.
  • -n, --name <arg>
    Users to add. Multiple may be set, separated by spaces.

remove-member

Remove users from one or more groups.

  • -g, --group <arg>
    Groups to remove from. Multiple may be set, separated by spaces.
  • -n, --name <arg>
    Users to remove. Multiple may be set, separated by spaces.

delete

Delete groups from the system.

  • -g, --group <arg>
    Groups to delete. Multiple may be set, separated by spaces.
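
For example (group and user names are placeholders):

sudo -u irisadmin /usr/illumon/latest/bin/dhconfig acls groups add-member --group somegroup --name user1 user2
sudo -u irisadmin /usr/illumon/latest/bin/dhconfig acls groups list --group somegroup
sudo -u irisadmin /usr/illumon/latest/bin/dhconfig acls groups remove-member --group somegroup --name user2
sudo -u irisadmin /usr/illumon/latest/bin/dhconfig acls groups delete --group somegroup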

Manage Public Keys

The publickeys action lets you manage public keys for users.

  • list
    List public keys.
  • add
    Add a public key.
  • delete
    Remove a public key.

list

List public keys. The output can be a table, CSV, or a file. When a CSV or file is set, the output contains the entire public key.

You may specify the --include-hash option to include a value you can use with delete to delete a specific key.

  • --csv (optional)
    Write the output as CSV. This prints the full public key.
  • -f, --file <arg> (optional)
    Name of the file to write output to.
  • --include-hash (optional)
    Include an identifying hash for use with acl publickeys delete.
  • -n, --name <arg> (optional)
    User to display keys for. Multiple may be set, separated by spaces.

import

Import a public key. The file set for the --file option may be a Deephaven public key file, a private key file, or a file containing a list of keys to be added as user/public key pairs.

If --ignore-existing is set, any keys that already exist are skipped from the import.

  • -f, --file <arg>
    Name of the file containing the key(s) to import.
  • --ignore-existing (optional)
    Ignore existing entries and continue.

delete

Delete public keys. You may use the --file option to delete the key from a file, or the --hash option to remove a key based on the hash provided by list.

  • -f, --file <arg>
    Name of the file containing the key to delete (exclusive with --hash).
  • --hash <arg>
    The hash of the key to remove, from acl publickeys list --include-hash (exclusive with --file).
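
For example, a sketch using the list, import, and delete subcommands described above (the user name and key file are placeholders, and <hash> comes from list --include-hash):

sudo -u irisadmin /usr/illumon/latest/bin/dhconfig acls publickeys list --name someuser --include-hash
sudo -u irisadmin /usr/illumon/latest/bin/dhconfig acls publickeys import --file /tmp/someuser-public-key.txt
sudo -u irisadmin /usr/illumon/latest/bin/dhconfig acls publickeys delete --hash <hash>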

Manage Row, Column, and Input Table ACLs

The actions rows, columns, and inputtables may be used to manage table-level ACLs. They each support the following subcommands:

  • list
    List ACLs.
  • add
    Add an ACL.
  • delete
    Remove an ACL.

list

List ACLs. The output may be a table, CSV, or a file. You may use the --group, --namespace, and --table options to filter the results.

  • --csv (optional)
    Write the output as CSV.
  • -f, --file <arg> (optional)
    Name of the file to write output to.
  • -g, --group <arg> (optional)
    Groups to filter. Multiple may be set, separated by spaces.
  • -n, --namespace <arg> (optional)
    Namespaces to filter. Multiple may be set, separated by spaces.
  • -t, --table <arg> (optional)
    Table names to filter. Multiple may be set, separated by spaces.

add

Add an ACL.

  • -g, --group <arg>
    Group for the ACL.
  • -n, --namespace <arg>
    Namespace for the ACL.
  • -t, --table <arg>
    Table names for the ACL.
  • -o, --overwrite (optional)
    Overwrite any existing entry.
  • --acl <arg>
    The ACL to create (applies to rows and columns only).
  • -c, --columns <arg>
    Column names of the ACL (applies to columns only).
  • --can-edit <arg>
    If the user can edit the input table. May be true or false (applies to inputtables only).

delete

Delete an ACL by the group, namespace and table name.

  • -g, --group <arg>
    Group of the ACL.
  • -n, --namespace <arg>
    Namespace of the ACL.
  • -t, --table <arg>
    Table names of the ACL.
  • -c, --columns <arg>
    Column names of the ACL (applies to columns only).
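
For example, a sketch that lists row ACLs for a namespace, grants a group edit access to an input table, and deletes a column ACL (group, namespace, table, and column names are placeholders):

sudo -u irisadmin /usr/illumon/latest/bin/dhconfig acls rows list --namespace SomeNamespace
sudo -u irisadmin /usr/illumon/latest/bin/dhconfig acls inputtables add --group somegroup --namespace SomeNamespace --table SomeInputTable --can-edit true
sudo -u irisadmin /usr/illumon/latest/bin/dhconfig acls columns delete --group somegroup --namespace SomeNamespace --table SomeTable --columns SomeColumn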

Using passwords

When you use the --hashed-password option in commands, you must provide a valid apr1-encoded password.

To encode a password for --pwfile, use the base64 tool.

echo -n secret | base64 > /safe/location/pwfile.b64
/usr/illumon/latest/bin/dhconfig acl user set-password --user username --pwfile /safe/location/pwfile.b64

To encode an apr1 password for --hashed-password, use openssl passwd -apr1 <password>.

sudo -u irisadmin /usr/illumon/latest/bin/dhconfig acl user set-password --name username --hashed-password $(openssl passwd -apr1 <your-password>)

Warning

The examples above leave the plaintext password in shell history. Consider using a file containing the password to be encoded instead.