Audit Technical Controls
This guide is a hands-on resource for system administrators and security auditors. It provides practical queries and examples for monitoring key security events in Deephaven, such as user logins, session activity, access violations, and configuration changes. Use this guide for day-to-day security monitoring and incident investigation.
Before using this guide, we recommend reviewing the Security and Auditing Overview for foundational concepts and the Hardening Technical Controls guide to ensure your system is securely configured.
Auditing system configuration compliance
Beyond monitoring real-time security events, a crucial aspect of auditing is verifying that the system's static configurations align with the security recommendations outlined in the Hardening Technical Controls guide. This section details how to manually check some of these key configurations. These checks are typically performed periodically or after system changes.
Verifying file system permissions
Correct file system permissions are fundamental to protecting sensitive key material and configuration files from unauthorized access or modification. The Hardening Technical Controls guide specifies required ownership and permission modes for several critical paths. You can verify these using standard Linux/Unix command-line tools.
Key files in /etc/sysconfig/deephaven/auth
The private keys used by various Deephaven services for internal authentication are stored in /etc/sysconfig/deephaven/auth/. These files must have highly restrictive permissions.
Recommended checks:
The Hardening guide recommends that files such as priv-authreconnect.base64.txt, priv-controllerConsole.base64.txt, priv-iris.base64.txt, and priv-tdcp.base64.txt be owned by irisadmin:irisadmin with 400 permissions (-r--------). The priv-merge.base64.txt file should be owned by dbmerge:dbmergegrp with 400 permissions.
Verification procedures:
ls -l /etc/sysconfig/deephaven/auth/priv-authreconnect.base64.txt
ls -l /etc/sysconfig/deephaven/auth/priv-controllerConsole.base64.txt
ls -l /etc/sysconfig/deephaven/auth/priv-iris.base64.txt
ls -l /etc/sysconfig/deephaven/auth/priv-merge.base64.txt
ls -l /etc/sysconfig/deephaven/auth/priv-tdcp.base64.txt
Expected output example (for an irisadmin-owned key):
-r--------. 1 irisadmin irisadmin 256 Jan 1 10:00 /etc/sysconfig/deephaven/auth/priv-iris.base64.txt
Ensure the permissions string starts with -r-------- and the owner/group match the recommendations.
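These manual ls checks can be scripted. Below is a minimal sketch using stat (GNU coreutils); it demonstrates the pattern on a temporary file, and in practice you would point it at the real key paths, such as /etc/sysconfig/deephaven/auth/priv-iris.base64.txt:

```shell
# Sketch: compare a file's numeric mode and owner against expected values.
# Demonstrated on a temporary file; set `f` to a real key file in practice.
f=$(mktemp)
chmod 400 "$f"

mode=$(stat -c '%a' "$f")      # numeric permission mode, e.g. 400
owner=$(stat -c '%U:%G' "$f")  # owner:group

if [ "$mode" = "400" ]; then
  echo "OK: $f has mode 400 (owner $owner)"
else
  echo "FAIL: $f has mode $mode, expected 400"
fi

rm -f "$f"
```

A loop over the key file names from the Hardening guide, with per-file expected owners, turns this into a repeatable compliance check.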
dsakeys.txt file
The dsakeys.txt file, also typically in /etc/sysconfig/deephaven/auth/, requires similar protection.
Recommended checks:
This file should be owned by irisadmin:irisadmin with 400 permissions.
Verification procedures:
ls -l /etc/sysconfig/deephaven/auth/dsakeys.txt
Expected output example:
-r--------. 1 irisadmin irisadmin 512 Jan 1 10:00 /etc/sysconfig/deephaven/auth/dsakeys.txt
Truststore files in /etc/sysconfig/deephaven/trust
Truststore files, while not containing private keys, are critical for establishing trusted TLS connections. Their integrity should be maintained.
Recommended checks:
Files like truststore-iris.p12, truststore-iris.pem, and truststore_passphrase (if it exists and is a file) in /etc/sysconfig/deephaven/trust/ are typically owned by irisadmin:irisadmin with 644 permissions (-rw-r--r--). This allows the owner to read and write the file and everyone else to read it.
Verification procedures:
ls -l /etc/sysconfig/deephaven/trust/truststore-iris.p12
ls -l /etc/sysconfig/deephaven/trust/truststore-iris.pem
ls -l /etc/sysconfig/deephaven/trust/truststore_passphrase
Expected output example:
-rw-r--r--. 1 irisadmin irisadmin 2048 Jan 1 10:00 /etc/sysconfig/deephaven/trust/truststore-iris.p12
Data path permissions for the dbmerge user
To maintain data integrity and enforce access controls, the dbmerge user and dbmergegrp group should have restricted write access.
Recommended checks:
The Hardening guide recommends that the dbmerge user and dbmergegrp group not have write permissions to any data paths other than /db/Users. Verifying this typically involves checking the permissions of your main data storage locations.
Audit approach:
This check is more environment-specific, as data paths can vary. Use ls -ld <path> on your primary data directories (e.g., /db/Data/, /db/ParquetData/) and check their ownership and group write permissions. Ensure that dbmergegrp does not have write access (w in the group part of the permission string) to these paths.
Example (checking a hypothetical data path):
ls -ld /db/Data/my_table_data_directory
Look for output where the group is dbmergegrp, and ensure the permission string does not grant group write access (e.g., drwxr-xr-x is okay; drwxrwxr-x would be a concern if the group is dbmergegrp).
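Beyond spot-checking individual directories, find can sweep an entire data tree for group-writable paths. This is a sketch assuming the /db layout and the dbmergegrp group name used in this guide; adjust both for your environment. Any output is a finding to investigate:

```shell
# List paths under /db (excluding /db/Users) where the dbmergegrp group
# has write permission; no output means no violations were found.
find /db -path /db/Users -prune -o \
     -group dbmergegrp -perm -g=w -print
```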
Verifying network configuration
Network configurations, including open ports and TLS certificate details, are critical for secure communication. The Hardening Technical Controls guide provides recommendations in these areas.
Checking open ports and Envoy proxy
The Hardening guide recommends enabling the Envoy reverse proxy to limit Deephaven service access to a single port and disabling the Envoy admin page.
Recommended checks:
- Verify that only the intended public-facing ports are open (e.g., the Envoy port, typically 443 for HTTPS or 80 for HTTP if used).
- Confirm that other Deephaven service-specific ports are not directly exposed to untrusted networks and are instead fronted by Envoy.
- Ensure the Envoy admin interface port (if configured, default 9901) is not accessible from untrusted networks or is disabled.
Verification procedures:
Use tools like netstat, ss, or nmap to list listening ports on your Deephaven servers.
To list all TCP listening ports:
sudo netstat -tulnp | grep LISTEN
# or
sudo ss -tulnp | grep LISTEN
Examine the output to ensure only expected ports are listening and accessible according to your network security policies and the Envoy configuration. For example, if Envoy is listening on port 443, you should see that. If the Envoy admin interface is supposed to be disabled, you should not see its default port (e.g., 9901) listening, or it should be restricted to localhost.
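For the Envoy admin interface specifically, a quick check with ss can flag the port if it is listening on a non-loopback address. This is a sketch; 9901 is the default admin port mentioned above, so adjust it to your configuration:

```shell
# Print a warning for any listener on the admin port whose local address
# is not loopback; no output means the port is absent or loopback-only.
PORT=9901
ss -ltnH "sport = :$PORT" \
  | awk '$4 !~ /^(127\.0\.0\.1|\[::1\])/ {print "WARNING: admin port listening on", $4}'
```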
To check a specific port from an external machine (if applicable, replace your_deephaven_server_ip):
nmap -p <port_number> your_deephaven_server_ip
Inspecting TLS certificate details
The Hardening guide recommends ensuring specific names (no wildcards, unless intentionally and securely managed) are used in the Deephaven Web certificate's Subject Alternative Name (SAN) block.
Recommended checks:
- Inspect the TLS certificate served by Envoy (or directly by services if Envoy is not used for a particular endpoint).
- Verify that the Common Name (CN) and Subject Alternative Names (SANs) match the expected hostnames and do not use overly broad wildcards.
Verification procedures:
The openssl s_client command can be used to connect to a TLS-enabled service and display its certificate.
To inspect a certificate served on your_deephaven_hostname at port 443:
openssl s_client -connect your_deephaven_hostname:443 -servername your_deephaven_hostname < /dev/null 2>/dev/null | openssl x509 -noout -text | grep -iA1 'Subject Alternative Name'
Or, for a more complete view of the certificate:
openssl s_client -connect your_deephaven_hostname:443 -servername your_deephaven_hostname < /dev/null 2>/dev/null | openssl x509 -noout -text
Review the "Subject Alternative Name" section of the output. Ensure the listed DNS names are appropriate for your deployment. The Hardening guide specifically mentions checking /etc/deephaven/cus-tls/tls.crt on the server itself, which can also be inspected using:
openssl x509 -in /etc/deephaven/cus-tls/tls.crt -noout -text
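Alongside SAN inspection, it is worth confirming the certificate is not close to expiry. openssl x509 -checkend exits nonzero if the certificate expires within the given number of seconds; the path below is the default from the Hardening guide:

```shell
# Warn if the certificate expires within 30 days (2592000 seconds).
CRT=/etc/deephaven/cus-tls/tls.crt
if openssl x509 -in "$CRT" -noout -checkend 2592000 >/dev/null; then
  echo "OK: certificate valid for at least 30 more days"
else
  echo "WARNING: certificate expires within 30 days (or could not be read)"
fi
```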
Verifying audit log properties
The Hardening Technical Controls guide recommends enabling audit logging for various Deephaven processes to ensure comprehensive event capture. This is typically controlled by specific application properties.
Recommended checks:
Ensure that properties like *.writeDatabaseAuditLogs are set to true for all relevant services. The Hardening guide lists the following key properties:
PersistentQueryController.writeDatabaseAuditLogs=true
AuthenticationServer.writeDatabaseAuditLogs=true
ConfigurationServer.writeDatabaseAuditLogs=true
DbAclWriteServer.writeDatabaseAuditLogs=true
DataImportServer.writeDatabaseAuditLogs=true
TableDataCacheProxy.writeDatabaseAuditLogs=true
RemoteQueryProcessor.writeDatabaseAuditLogs=true
RemoteQueryDispatcher.writeDatabaseAuditLogs=true
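The list above can be turned into a mechanical check. The sketch below greps a properties file for each expected entry; the PROPS_FILE path is a placeholder, so point it at the configuration file your deployment actually uses:

```shell
# Report which services have writeDatabaseAuditLogs explicitly set to true.
PROPS_FILE=/path/to/your.properties   # placeholder; set to your config file
for svc in PersistentQueryController AuthenticationServer ConfigurationServer \
           DbAclWriteServer DataImportServer TableDataCacheProxy \
           RemoteQueryProcessor RemoteQueryDispatcher; do
  if grep -q "^${svc}\.writeDatabaseAuditLogs=true" "$PROPS_FILE" 2>/dev/null; then
    echo "OK:      $svc"
  else
    echo "MISSING: $svc"
  fi
done
```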
Verification procedures:
- Using the Property Inspector: You can use the Deephaven Property Inspector tool to view the effective runtime configuration of services. Consult the Property Inspector documentation for instructions on how to access and query property values for specific services. Check that each of the writeDatabaseAuditLogs properties listed above reports a value of true.
- Checking configuration files: Alternatively, you can inspect the relevant Deephaven configuration files directly on the server(s). These properties are typically set in Java system properties files or service-specific configuration files; the exact file locations vary by deployment. You can use grep to search for these properties within your configuration directory (e.g., /etc/deephaven/, /etc/sysconfig/deephaven/, or custom paths). Example grep command (adjust paths as needed):

grep -ERi 'writeDatabaseAuditLogs' /etc/deephaven/ /etc/sysconfig/deephaven/

Review the output to confirm that each relevant service has its writeDatabaseAuditLogs property set to true. If a property is not explicitly listed, it may default to true (the desired state for audit logging). However, relying on default behavior for critical audit logs is less secure than explicit configuration; the Hardening guide recommends explicit confirmation to ensure audit logging is active.
Ensuring these properties are active is crucial, as the event monitoring queries provided later in this guide (for DbInternal.AuditEventLog) depend on these logs being generated.
Successful login
Source in Deephaven
Authentication server log - /var/log/deephaven/authentication_server/AuthenticationServer.log.current (this is a default path; it may be configurable in your environment).
Comments
Example successful login event set:
[2023-07-24T16:40:01.585599-0400] - INFO - AuthServerServer: new client logged in to service AuthServer on SSLIOJob[job:2042956361/AuthServer_Server/10.128.14.50:9031->10.128.14.50:41860]
[2023-07-24T16:40:01.585624-0400] - INFO - ConnectionMonitor: registering monitored connection SSLIOJob[job:2042956361/AuthServer_Server/10.128.14.50:9031->10.128.14.50:41860]/CommandConnection
[2023-07-24T16:40:01.589169-0400] - INFO - Successfully authenticated local user iris
Failed login with error code
Source in Deephaven
Authentication server log - /var/log/deephaven/authentication_server/AuthenticationServer.log.current (this is a default path; it may be configurable in your environment).
Comments
Example failed login event set:
[2023-07-24T16:41:01.585599-0400] - INFO - AuthServerServer: new client logged in to service AuthServer on SSLIOJob[job:2042956361/AuthServer_Server/10.128.14.50:9031->10.128.14.50:41866]
[2023-07-24T16:41:01.585624-0400] - INFO - ConnectionMonitor: registering monitored connection SSLIOJob[job:2042956361/AuthServer_Server/10.128.14.50:9031->10.128.14.50:41866]/CommandConnection
[2023-07-24T16:41:01.589169-0400] - ERROR - Could not authenticate local user iris
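Repeated failures of this kind can indicate a brute-force attempt. Assuming the log format shown above (username as the final field of the error line), a short pipeline tallies failures per user:

```shell
# Count failed authentications per user; highest counts first.
LOG=/var/log/deephaven/authentication_server/AuthenticationServer.log.current
grep 'ERROR - Could not authenticate' "$LOG" \
  | awk '{print $NF}' | sort | uniq -c | sort -rn
```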
Session activity: login/logout
Source in Deephaven
- DbInternal.AuditEventLog table
- Authentication server log: /var/log/deephaven/authentication_server/AuthenticationServer.log.current (this is a default path; it may be configurable in your environment)
Comments
Session-related activities, such as user logins and logouts, are recorded in the DbInternal.AuditEventLog table. This table provides a structured, queryable history of significant events in the system and is the primary and recommended source for querying these events. Raw log files can be used for supplementary details or deeper troubleshooting if the table does not provide sufficient information.
Querying the AuditEventLog Table
You can query the AuditEventLog table to retrieve session information. The following example shows how to find login and logout events for the past hour:
# Get audit events for the last hour. The duration-literal filter syntax
# below may vary by Deephaven version; adjust for your deployment.
audit_events = db.i("DbInternal", "AuditEventLog").where("Timestamp > now() - 'PT1H'")
# Filter for login/logout events and select relevant columns
session_activity = audit_events.where(
    "Event in `Session-Login`, `Session-Logout`"
).select("Timestamp", "User", "Event", "Details")
Example Output
The query above would produce a table similar to the following:
Timestamp | User | Event | Details |
---|---|---|---|
2023-07-24T16:40:01.589169Z | iris | Session-Login | User 'iris' successfully logged in. |
2023-07-24T18:12:45.231882Z | iris | Session-Logout | User 'iris' logged out or session expired. |
Query Console Activity
Source in Deephaven
DbInternal.AuditEventLog
table
Comments
Events related to the lifecycle of a user's query console, such as starting and stopping a console session, are also logged in the AuditEventLog. These events are useful for tracking interactive usage of the system. The DbInternal.AuditEventLog table is the primary and recommended source for querying these structured events. Raw log files can be used for supplementary details or deeper troubleshooting if the table does not provide sufficient information.
- Login (starting a query console): Event="Client registration" with non-null user information.
- Logoff (ending a query console): Event="Client termination" with non-null user information.
Querying for Console Events
The following query filters the AuditEventLog for console registration and termination events:
# Get audit events for the last 24 hours. The duration-literal filter syntax
# may vary by Deephaven version; adjust for your deployment.
audit_events = db.i("DbInternal", "AuditEventLog").where("Timestamp > now() - 'PT24H'")
# Filter for console lifecycle events
console_activity = audit_events.where(
    "Event in `Client registration`, `Client termination`"
).select("Timestamp", "User", "Event", "Details")
Example Output
Timestamp | User | Event | Details |
---|---|---|---|
2023-07-25T10:05:15.123456Z | iris | Client registration | User 'iris' started a new query console. |
2023-07-25T11:30:00.654321Z | iris | Client termination | User 'iris' closed the query console. |
Access violation
Attempts to perform unauthorized functions are logged as access violations. These events are critical for security monitoring as they indicate when a user or process tries to exceed its permissions.
Source in Deephaven
DbInternal.AuditEventLog
table
Comments
Access violations, such as a user trying to read a table without the necessary permissions, are recorded as TablePermissionCheck events in the AuditEventLog.
Querying for Access Violations
You can filter the AuditEventLog for events where a permission check failed. The following query finds all table access denials within the last day:
# Get audit events for the last 24 hours. The duration-literal filter syntax
# may vary by Deephaven version; adjust for your deployment.
audit_events = db.i("DbInternal", "AuditEventLog").where("Timestamp > now() - 'PT24H'")
# Filter for access violation events
access_violations = audit_events.where(
    "Event == `TablePermissionCheck`", "Details.contains(`denied`)"
).select("Timestamp", "User", "EffectiveUser", "Event", "Details")
Example Output
Timestamp | User | EffectiveUser | Event | Details |
---|---|---|---|---|
2023-06-19T15:28:59.099000Z | iris | carlos | TablePermissionCheck | Permission for table DbInternal.AuditEventLog denied for user {iris operating as carlos} |
Configuration changes
Source in Deephaven
- Log File:
/var/log/deephaven/misc/dhconfig.log.current
- Audit Table:
DbInternal.AuditEventLog
Comments
Changes to the system configuration are critical events to monitor. These are logged in two primary locations: a dedicated log file for the dhconfig tool and the central AuditEventLog table for a queryable history.
Querying for Configuration Changes
The following query retrieves all configuration change events from the AuditEventLog for the past 7 days.
# Get audit events for the last 7 days. The period-literal filter syntax
# may vary by Deephaven version; adjust for your deployment.
audit_events = db.i("DbInternal", "AuditEventLog").where("Timestamp > now() - 'P7D'")
# Filter for configuration change events
config_changes = audit_events.where("Event == `Configuration Change`").select(
"Timestamp", "User", "Process", "Details"
)
Example Output
Timestamp | User | Process | Details |
---|---|---|---|
2023-08-20T11:00:00.123456Z | admin | ConfigurationServer | User 'admin' updated property 'deephaven.log.level' to 'DEBUG' |
2023-08-21T15:30:10.654321Z | root | DhconfigScript | User 'root' set 'start.port=8888' via dhconfig script. |
Password change
Source in Deephaven
- Audit Table:
DbInternal.AuditEventLog
Comments
Monitoring password changes is a fundamental security practice. Deephaven logs these events in the AuditEventLog without exposing any sensitive information, ensuring that the change is recorded while the password itself remains secure.
Querying for Password Changes
The following query can be used to retrieve all password change events.
# Get all audit events
audit_events = db.i("DbInternal", "AuditEventLog")
# Filter for password change events
password_changes = audit_events.where("Event == `Password Change`").select(
"Timestamp", "User", "Details"
)
Example Output
Timestamp | User | Details |
---|---|---|
2023-08-22T09:00:00.123456Z | iris | User 'iris' successfully changed their password. |
Database User Access Changes (DCL)
Source in Deephaven
- Audit Table:
DbInternal.AuditEventLog
Comments
Changes to user, role, or group permissions are made using Data Control Language (DCL) commands like GRANT and REVOKE. All DCL operations are logged in the AuditEventLog to provide a clear audit trail of permission changes.
Querying for DCL Events
You can query the AuditEventLog to find all DCL events.
# Get DCL events from the last 30 days. The period-literal filter syntax
# may vary by Deephaven version; adjust for your deployment.
dcl_events = (
    db.i("DbInternal", "AuditEventLog")
    .where("Timestamp > now() - 'P30D'", "Event == `DCL`")
    .select("Timestamp", "User", "Details")
)
Example Output
Timestamp | User | Details |
---|---|---|
2023-08-23T14:00:00.123456Z | admin | User 'admin' executed DCL: GRANT READ ON table to user 'analyst' |
Analyzing table-based access to specific data
The Deephaven API provides a set of ACL provider classes that can be used to inspect groups, users, and table permissions. Building a report of specific data values to which each user or group has access can be done by iterating the group, user, and ACL data and querying the associated tables.
This example sketches the basic structure of such a query:
import com.illumon.iris.db.v2.permissions.DbAclProvider;
import com.illumon.iris.db.v2.permissions.DbAclProviderFactory;
import com.fishlib.auth.SimpleUserContext;
import com.fishlib.auth.UserContext;
import com.illumon.iris.db.v2.permissions.PermissionFilterProvider.FilterDetails;
ACLs = DbAclProviderFactory.getDbAclProvider(log);
users = ACLs.getUsersForGroup("iris-superusers"); // This is a built-in Deephaven group name, but for an audit query this would more likely use a custom group for the customer application
for (String user : users) {
println "Username: " + user;
UserContext ctx = new SimpleUserContext(user,user);
filters = ACLs.getFilterDetailsForUser(ctx);
for (FilterDetails filter : filters) {
// For an actual report, we would probably filter by namespace here and then, if needed, expand table wildcards and then merge unique instrument IDs across tables that match
println filter.getNamespace() + "/" + filter.getTableName();
}
}
users, in this case, is a list of users in a group. In a multi-tenant customer application, rather than using the single group name hard-coded in this example, the query might use a list or a table of groups and iterate through those, either to aggregate permissions for users that are members of multiple groups or to show multiple sections, one per group.
For each user, this code gets a list of filters. Each filter includes a namespace, table name, and row-level ACL filter. This example code then simply prints the results out:
2019-12-04 08:48:53.653 STDOUT Username: g
2019-12-04 08:48:53.654 STDOUT DbInternal/AuditEventLog
2019-12-04 08:48:53.654 STDOUT DbInternal/PersistentQueryConfigurationLog
2019-12-04 08:48:53.655 STDOUT DbInternal/PersistentQueryConfigurationLogV2
2019-12-04 08:48:53.655 STDOUT DbInternal/PersistentQueryStateLog
2019-12-04 08:48:53.655 STDOUT DbInternal/ProcessEventLog
2019-12-04 08:48:53.655 STDOUT DbInternal/QueryOperationPerformanceLog
2019-12-04 08:48:53.655 STDOUT DbInternal/QueryPerformanceLog
2019-12-04 08:48:53.655 STDOUT DbInternal/UpdatePerformanceLog
2019-12-04 08:48:53.655 STDOUT DbInternal/WorkspaceData
2019-12-04 08:48:53.655 STDOUT LearnIris/*
2019-12-04 08:48:53.655 STDOUT Username: newUser
2019-12-04 08:48:53.656 STDOUT DbInternal/AuditEventLog
2019-12-04 08:48:53.657 STDOUT DbInternal/PersistentQueryConfigurationLog
2019-12-04 08:48:53.657 STDOUT DbInternal/PersistentQueryConfigurationLogV2
2019-12-04 08:48:53.657 STDOUT DbInternal/PersistentQueryStateLog
2019-12-04 08:48:53.657 STDOUT DbInternal/ProcessEventLog
2019-12-04 08:48:53.657 STDOUT DbInternal/QueryOperationPerformanceLog
2019-12-04 08:48:53.657 STDOUT DbInternal/QueryPerformanceLog
2019-12-04 08:48:53.657 STDOUT DbInternal/UpdatePerformanceLog
2019-12-04 08:48:53.657 STDOUT DbInternal/WorkspaceData
2019-12-04 08:48:53.658 STDOUT LearnIris/*
...
In an audit query, rather than printing the results, API methods such as db.getTableNames() could be used to expand wildcards and then iterate across tables to get unique lists of accessible values.
// looking at tables in the LearnIris namespace
tableNames = db.getTableNames("LearnIris");
result = null;
for (String tn : tableNames) {
// get the list of columns for the table
columns = db.t("LearnIris",tn).getMeta().getColumn("Name").getDirect();
// check that the table contains expected columns
if (columns.contains("Date") && columns.contains("Sym")) {
if (result == null) {
result = db.t("LearnIris",tn).where("Date=`2017-08-25`").selectDistinct("Sym");
} else {
result = merge(result, db.t("LearnIris",tn).where("Date=`2017-08-25`").selectDistinct("Sym"));
}
}
}
if (result != null) {
// previous selectDistincts retrieved distinct Syms for each table; this one will get distinct Syms across all the tables
result = result.selectDistinct("Sym");
}