Internal tables

All Deephaven installations include several tables for the system's internal use. These internal tables are updated by various Deephaven processes, and contain details about the processes and workers. The schemas for these tables should never be changed by the customer as this will result in Deephaven failures. All tables are stored in the DbInternal namespace.

Just like other tables, authorized users can run queries against these internal tables. In a standard installation, a user can query entries related to their username, while a superuser can query all entries in a table.
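As an illustration, a superuser could pull up today's audit events from a Groovy console session. This is a sketch only; it assumes a legacy Deephaven Groovy console with the standard `db` session variable:

```groovy
// Fetch today's partition of the audit event log from the DbInternal namespace.
// db.i() reads intraday data; db.t() would read historical (merged) data instead.
auditToday = db.i("DbInternal", "AuditEventLog")
    .where("Date=currentDateNy()")
```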

Audit Event Log

The Audit Event Log (AuditEventLog) contains specific audit events from Deephaven processes. Each row contains details on one specific audit event.

Configuration

  • Any process that can write to the Audit Event Log can override several configuration items. All configuration overrides should be based on the process name or main class name.
    • <process name>.writeDatabaseAuditLogs - if true, then audit event logs will be created; if false, then audit event logs will not be created.
    • <process name>.useLas - if true, then the audit events will be written through the LAS; if false, then audit events will be written directly to binary log files.
    • <process name>.useMainClassNameForLogs - whether to use the class name for log entries; if false, then the retrieved value from the process.name property will be used instead of the class name.
  • RemoteQueryProcessor.logCommands - if workers are writing audit events, defines whether or not all received commands are logged. The default value is false.
  • LocalInternalPartition - if specified, defines the internal partition to which data will be written; if not defined, then a local partition based on the host name will be used.
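Put together, a property-file fragment overriding these settings for two processes might look like the following. The values shown are illustrative only:

```properties
# Enable audit logging for the dispatcher and workers.
RemoteQueryDispatcher.writeDatabaseAuditLogs=true
RemoteQueryProcessor.writeDatabaseAuditLogs=true
# Route worker audit events through the LAS rather than binary log files.
RemoteQueryProcessor.useLas=true
# Also record every command received by workers.
RemoteQueryProcessor.logCommands=true
```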

The following processes have the option of writing audit logs.

Process | Main Class Name
ACL Write Server | DbAclWriteServer
Authentication Server | AuthenticationServer
Data Import Server | DataImportServer
Local Table Data Server | LocalTableDataServer
Log Aggregator Service | LogAggregatorService
Persistent Query Controller | PersistentQueryController
Query Workers | RemoteQueryProcessor
Remote Query Dispatcher (includes query and merge servers) | RemoteQueryDispatcher
Remote User Table Server | RemoteUserTableServer
Table Data Cache Proxy | TableDataCacheProxy
Tailer | LogtailerMain

Columns

Not all columns will apply to all events; only those applicable for a given event will be filled in and the rest will contain null values. For example, client hostnames and ports are only applicable to events which apply to client requests.

Column Name | Column Type | Description
Date | String | The date on which the audit event was generated. This is the partitioning column.
Timestamp | DateTime | The timestamp for the event.
ClientHost | String | The client's host name.
ClientPort | int | The client's port ID.
ServerHost | String | The server's host name.
ServerPort | int | The server's port ID.
Process | String | The process name generating the event. This will be either the value retrieved from the process.name property or the main class name.
AuthenticatedUser | String | If available, the authenticated user for the logged event.
EffectiveUser | String | If available, the effective user for the logged event.
Namespace | String | If applicable, the namespace for the logged event.
Table | String | If applicable, the table name for the logged event.
Id | int | If applicable, the ID for the logged event.
Event | String | The name of the event. See Auditable events by process for information on each event type.
Details | String | Further details on the logged event.

Auditable events by process

Each process logs specific events by name; this section defines the names in the Event column and what each name means.

All processes writing audit events will write the following events. Some processes will write further events as described below.

  • INITIALIZING - the process is initializing.
  • RUNNING - the process is running and starting to process normally.
  • SHUTTING_DOWN - the process is shutting down.

ACL Write Server

  • Add ACL - add an ACL
  • Add group strategy - add a group to a strategy
  • Add input table editor - add an input table editor group
  • Add member - add a member to one or more groups
  • Add strategy account - add an account to a strategy
  • Add user - add a new user
  • Change password - change a user's password
  • Delete ACL - delete an ACL
  • Delete group - delete a group
  • Delete group strategy - remove a group from a strategy
  • Delete strategy account - delete an account from a strategy
  • Delete input table editor - delete an input table editor group
  • Delete user - delete a user
  • Remove member - remove a member from one or more groups
  • Starting server - a server is starting to listen for ACL requests
  • Update ACL - update an ACL
  • Update input table editor - update an input table editor group

Authentication Server

  • Client registration - a client registered with the authentication server
  • Client termination - a client terminated
  • Starting server - a server is starting to listen for authentication requests

Persistent Query Controller

  • Client registration - a client registered with the persistent query controller
  • Client termination - a client terminated
  • Send script - a persistent query script is being sent to a client

Remote Query Dispatcher

  • Classpath additions - the classpath additions used for a worker start
  • Extra JVM arguments - any extra JVM arguments being used to start a worker
  • Pushed classes - the classes being pushed to a starting worker
  • Starting worker - a worker is being started

Worker audit events (RemoteQueryProcessor)

  • ACL details - full details on the ACLs for a user connected to the worker
  • Client async query - a client sent an asynchronous query request
  • Client command - a command was received from a client; only logged if the RemoteQueryProcessor.logCommands property is true.
  • Client disconnection - a client disconnected from the worker
  • Client sync query - a client sent a synchronous query request
  • Command cancel - a worker command was cancelled
  • Disconnect - a client disconnected from the worker
  • Primary client - the worker's primary client connected
  • Secondary client - a secondary client requested an action from the worker; the details contain further information
  • Script source (controller) - a script was loaded by requesting it from the persistent query controller; if an exception occurred, it will be shown in the details
  • Script source (remote) - a script was loaded from a remote source, which may be a console
  • Table Access - a user attempted to access a table

Worker audit events (WorkspaceData)

Workers also write audit events for WorkspaceData table updates. Successful writes are not audited, since they are already recorded in the WorkspaceData table itself.

  • WorkspaceData Authorization Failure - an unauthorized user tried to publish a change to the WorkspaceData table
  • WorkspaceData Write Failure - an unexpected error occurred writing a record to the WorkspaceData table

Persistent Query Configuration Log

Every time a persistent query is created or modified, details on the query are stored in the Persistent Query Configuration Log (PersistentQueryConfigurationLogV2) table. This log is used by Deephaven when a query is reverted to a previous version.
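For example, the most recent configuration recorded for each persistent query can be found by taking the last row per serial number. A sketch for a Groovy console session, assuming access to the DbInternal namespace:

```groovy
// Last configuration row per query serial number in today's partition.
latestConfigs = db.i("DbInternal", "PersistentQueryConfigurationLogV2")
    .where("Date=currentDateNy()")
    .lastBy("SerialNumber")
```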

Columns

Column Name | Column Type | Description
Date | String | The date on which the state change occurred. This is the partitioning column.
Timestamp | DateTime | The timestamp for the state change.
SerialNumber | long | The query's serial number (a system-generated identifier unique to each persistent query).
VersionNumber | long | The query's version number (a number that starts at 1 for each query and is incremented each time the query is updated).
Owner | String | The owner of the persistent query.
Name | String | The name of the persistent query.
EventType | long | The query configuration event that triggered the log entry:
  • ADDED - a persistent query was added or modified
  • INITIAL - a controller logs all queries it is managing with an EventType of INITIAL when it first starts
  • REMOVED - a persistent query was deleted
Enabled | boolean | Whether the query is enabled:
  • true - the query is enabled
  • false - the query is disabled (will not be started by the controller)
HeapSizeInGB | double | The heap size for the query in GB (the memory the query has available).
DataBufferPoolToHeapSizeRatio | double | The data buffer to heap size ratio - the fraction of the query's heap that will be dedicated to caching binary data from the underlying Deephaven data sources.
DetailedGCLoggingEnabled | boolean | If true, Java garbage collection details are logged for the query.
OmitDefaultGCParameters | boolean | This parameter is deprecated.
DbServerName | String | The server on which the query will be run; this is shown as DB_SERVER_<number>, and these names are translated to physical or virtual servers by the controller.
RestartUsers | String | Specifies which group of users is allowed to restart the query:
  • Admin - only administrators of the query can restart it
  • AdminAndViewers - administrators and viewers of the query can restart it
  • ViewersWhenDown - administrators can always restart the query; viewers can restart it only when it is not running
ScriptCode | String | The code for the script if it is not stored in git.
ScriptPath | String | The script path if it is stored in git.
ExtraJvmArguments | String[] | Extra JVM arguments to be passed to Java when the query is started.
ExtraEnvironmentVariables | String[] | Environment variables to be set before the query is started.
ClassPathAdditions | String[] | Additional elements to be added to the class path for the JVM.
AdminGroups | String[] | Additional administrators for the query (i.e., users or groups that can edit, start, stop, or delete the query).
ViewerGroups | String[] | Additional viewers for the query (i.e., users or groups that can view the query and its resulting tables).
ConfigurationType | String | The type of the configuration:
  • Live Query Replay (ReplayScript) - a configuration that replays data.
  • Revert Helper - a Deephaven internal query used to help revert persistent queries to previous versions; this should not be modified except by a system administrator.
  • Batch Query (RunAndDone) - a query that runs and terminates once the queries are complete.
  • Live Query (Script) - a query that runs until it is terminated.
Scheduling | String[] | Specifies the scheduling details for the query.
Timeout | long | Timeout value in milliseconds.
TypeSpecificFields | java.util.Map | Map for fields specific to configuration types.
JVMProfile | String | The JVM profile to be used for the query.
LastModifiedByAuthenticated | String | The authenticated user who last modified this query.
LastModifiedByEffective | String | The effective user who last modified this query.
LastModifiedTime | DateTime | The last time this query was modified.

Persistent Query State Log

When a query worker is started or stopped, it goes through a series of state changes representing the worker's status. Each state represents a specific condition, and every state change for every persistent query is stored in the Persistent Query State Log (PersistentQueryStateLog). Current states are also visible in the Status column of the Query Configuration panel in the Deephaven console. States include:

  • Uninitialized - the query has no status as it has not been run
  • Connecting - the controller is connecting
  • Authenticating - the query is authenticating with the authentication server
  • AcquiringWorker - the dispatcher is creating a worker process for the query
  • Initializing - the worker process for the query is initializing
  • Running - the query worker is running (script query types only)
  • Failed - the query worker failed to start correctly; an exception should be visible
  • Error - the query generated an error
  • Disconnected - the query worker disconnected unexpectedly. This may occur, for example, if the JVM experiences an OutOfMemoryError, a Hotspot error (possibly caused by a buggy native module), or if the garbage collector pauses the JVM for longer than the heartbeat interval between the worker and dispatcher.
  • Stopped - the query stopped normally (Live Query (Script) query types only)
  • Completed - the query completed successfully [Batch Query (RunAndDone) query types only]
  • Executing - the query is executing [Batch Query (RunAndDone) query types only]

The Persistent Query State Log (PersistentQueryStateLog) contains every state change that has occurred for all persistent queries.
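For example, today's failed or errored queries and their exception messages could be isolated with a query like the following. This is a Groovy console sketch; the column selection is illustrative:

```groovy
// All state transitions into Failed or Error today, with exception details.
failures = db.i("DbInternal", "PersistentQueryStateLog")
    .where("Date=currentDateNy()", "Status in `Failed`, `Error`")
    .view("Timestamp", "Owner", "Name", "Status", "ExceptionMessage")
```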

Columns

Column Name | Column Type | Description
Date | String | The date on which the state change occurred. This is the partitioning column.
Owner | String | The owner of the persistent query.
Name | String | The name of the persistent query.
Timestamp | DateTime | The timestamp for the state change.
Status | String | The new query status.
ServerHost | String | The host on which the query ran.
WorkerName | String | The worker's name.
WorkerPort | int | The worker's port for connections.
LastModifiedByAuthenticated | String | The authenticated user who last modified this query.
LastModifiedByEffective | String | The effective user who last modified this query.
SerialNumber | long | The query's serial number (a system-generated identifier unique to each persistent query).
VersionNumber | long | The query's version number (a number that starts at 1 for each query and is incremented each time the query is updated).
TypeSpecificState | String | A type-specific state value; this will not apply to most queries.
ExceptionMessage | String | If applicable, the exception message for a failed query.
ExceptionStackTrace | String | If applicable, the full stack trace for a failed query.

Process Event Log

The Process Event Log (ProcessEventLog) contains all log messages from processes. Currently, only query workers and query servers can write their logs to the Process Event Log.

Configuration

The following configuration parameters define the process event log configuration.

RemoteQueryProcessor.sendLogsToSystemOut - if defined and set to true, tells the query workers to send their logs to standard system output. This cannot be used when writing to the process event log.

Any process that can write to the process event log can override several configuration items. All configuration overrides should be based on the process name or main class name.

  • <process name>.writeDatabaseProcessLogs - if true, then process event logs will be created; if false, then process event logs will not be created.
  • <process name>.useLas - if true, then log entries will be written through the LAS; if false, then log entries will be written directly to binary log files.
  • <process name>.useMainClassNameForLogs - whether to use the class name for log entries; if false, then the retrieved value from the process.name property will be used instead of the class name.
  • <process name>.logLevel - the minimum log level event which will be written. The default value is INFO. Allowed values are:
    • FATAL
    • EMAIL
    • STDERR
    • ERROR
    • WARN
    • STDOUT
    • INFO
    • DEBUG
    • TRACE
  • <process name>.captureLog4j - if true, any output sent to the Log4J logger is written into the process event log.
  • <process name>.captureSysout - if true, any system output is written into the process event log.
  • <process name>.captureSyserr - if true, any error output is written into the process event log.
  • <process name>.aliveMessageSeconds - if non-zero, a message is periodically written to the process event log indicating that the process is still alive.
  • LocalInternalPartition - if specified, defines the internal partition to which data will be written; if not defined, then a local partition based on the host name will be used.
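As an illustration, overrides for a single process might be set like this. The process name db_dis and all values here are hypothetical:

```properties
# Write this process's log output to the ProcessEventLog table.
db_dis.writeDatabaseProcessLogs=true
# Only record WARN and more severe events.
db_dis.logLevel=WARN
# Capture Log4J output and emit a liveness message every 60 seconds.
db_dis.captureLog4j=true
db_dis.aliveMessageSeconds=60
```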

The same set of processes that can write to the audit event log can write to the process event log.

Columns

Column Name | Column Type | Description
Date | String | The date on which the log event was generated. This is the partitioning column.
Timestamp | DateTime | The timestamp for the logged event.
Host | String | The host name for the logged event.
Level | String | The level for the event. This is usually one of the standard log levels (INFO, WARN, etc.), but for worker output logged by the query server, it will instead indicate the level of captured output (STDOUT or STDERR).
Process | String | The name of the process that generated the event (e.g., RemoteQueryDispatcher, worker_1).
LastModifiedByAuthenticated | String | The authenticated user who last modified this query.
LastModifiedByEffective | String | The effective user who last modified this query.
LogEntry | String | The logged event.

Write entries to CSV files

It is possible to write ProcessEventLog entries to CSV files. To turn this on, specify the following property for the dispatchers and workers:

ProcessEventLog.interceptor=com.illumon.iris.db.util.logging.ProcessEventLogInterceptorCsv

Also specify the full path of the directory where the CSV files will be written, using the following property. This directory must be writable by all the processes that will generate these files; typically, making it group-writable by dbmergegrp is adequate:

ProcessEventLog.interceptor.csv.directory=/path/to/directory

CSV file names will consist of the following pattern:

<PQ name if available>-<process name>-<host name>-<optional GUID>.date/timestamp

Some messages (during initial worker startup and shutdown) will be logged in the dispatcher’s log instead of the workers' logs.

The following properties define the behavior:

  • ProcessEventLog.interceptor.csv.format - an optional CSV format, from org.apache.commons.csv.CSVFormat#Predefined. If none is specified, the default is Excel.
  • ProcessEventLog.interceptor.csv.delimiter - an optional delimiter. If none is specified (the property is non-existent or commented out), the default is taken from the CSV format. Delimiters must be one character.
  • ProcessEventLog.interceptor.csv.queueCapacity - to ensure that CSV writes do not affect performance, all CSV operations are submitted to a queue and performed off-thread. This specifies the queue’s capacity. If the capacity is exceeded (because the writer thread can’t keep up), further writes will hold up the process until there is available queue capacity. The default queue capacity is 1,000.
  • ProcessEventLog.interceptor.csv.rolloverDaily - if this is specified, the CSV files will roll over daily. The default is true. If the files are rolling over daily (or not at all), the date/timestamp will be in the format yyyy-MM-dd, such as 2021-04-03.
  • ProcessEventLog.interceptor.csv.rolloverHourly - if this is specified, the CSV files will roll over hourly. This takes precedence over daily rollover. The default is false. If the files are rolling over hourly, the date/timestamp will include a time and offset, such as 2021-04-29.150000.000-0400.
  • ProcessEventLog.interceptor.csv.timeZone - the time zone to be used for filenames and timestamps in the CSV files. The default is the system default time zone. This is the text name of the time zone, such as America/New_York.
  • ProcessEventLog.interceptor.csv.flushMessages - how frequently to flush the queue to disk (it will always flush when the queue is emptied). The default value is 100.
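Taken together, a complete interceptor configuration might look like the following. The directory path and rollover choices are illustrative:

```properties
ProcessEventLog.interceptor=com.illumon.iris.db.util.logging.ProcessEventLogInterceptorCsv
# Directory must be writable by all logging processes (e.g., group dbmergegrp).
ProcessEventLog.interceptor.csv.directory=/var/log/deephaven/pel-csv
# Roll files hourly (takes precedence over daily rollover).
ProcessEventLog.interceptor.csv.rolloverHourly=true
# Use a fixed zone for filenames and timestamps.
ProcessEventLog.interceptor.csv.timeZone=America/New_York
```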

If you are not seeing CSV files being created, we recommend the following steps:

  1. Check the most recent startup log or the process event log for the worker. If there is a configuration error in the interceptor properties, this is where it will most likely show up. It is designed to not prevent process and worker startup if it is misconfigured.
  2. Check the permissions on the directory to which CSV files are being written. It will need to be writable by all the processes, typically dbmergegrp.
  3. Since the PQ name is part of the filename, special Linux file path characters can cause issues. For example, a forward-slash / will be interpreted as a directory separator. For this case, appropriate subdirectories will need to be created to hold the CSV files.

Process Info

The Process Info table (ProcessInfo) captures the system properties, JVM arguments, memory and memory pool info, and other initial conditions of processes on startup. Its information is intended primarily for debugging purposes.

To disable the ProcessInfo table, set the following property to false:

IrisLogDefaults.writeDatabaseProcessInfo=false

Columns

Column Name | Column Type | Description
Date | String | The date on which the process was started. This is the partitioning column.
ID | String | The randomly generated process info ID. This will be globally unique.
Type | String | The generic type.
Key | String | The generic key.
Value | String | The generic value.

Process Metrics

The Process Metrics table (ProcessMetrics) captures internal metrics that were previously written to the stats.log CSV.

To disable the ProcessMetrics table, set the following property to false:

IrisLogDefaults.writeDatabaseProcessMetrics=false

Columns

Column Name | Column Type | Description
Date | String | The date on which the information was generated. This is the partitioning column.
Timestamp | DateTime | The timestamp for this event.
ProcessID | String | The ProcessInfo ID that generated this event.
Name | String | The name of the metric.
Interval | String | The timing interval for the metric.
Type | String | The type of the metric.
N | long |
Sum | long |
Last | long |
Min | long |
Max | long |
Avg | long |
Sum2 | long |
Stdev | long |

Query Operation Performance Log

The Query Operation Performance Log (QueryOperationPerformanceLog) contains performance details on Deephaven query operations. Each query is broken up into its component parts for this log, allowing in-depth understanding of the performance impacts of each individual operation for a query.

Columns

Column Name | Column Type | Description
Date | String | The date of the event.
QueryId | long | The ID of the query that logged the event. This is a value assigned by the system.
DispatcherName | String | The name of the dispatcher that started the query.
ServerHost | String | The host on which the event was generated.
ClientHost | String | The client's host name.
PrimaryAuthenticatedUser | String | The authenticated user that is running the query.
PrimaryEffectiveUser | String | The effective user that is running the query.
OperationAuthenticatedUser | String | The authenticated user for this query operation.
OperationEffectiveUser | String | The effective user for this query operation.
RequestId | String | The query operation's request ID.
WorkerName | String | The name of the worker running the query.
ProcessInfoId | String | Key for joining with DbInternal/ProcessInfo on Id or DbInternal/ProcessMetrics on ProcessId.
OperationNumber | int | An increasing number that indicates the order of operations.
Description | String | Information on the specific operation.
CallerLine | String | An automatically determined "caller line" of code - the first element in the stack that does not begin with com.illumon.iris.db.
IsTopLevel | boolean | Whether this operation is at the highest level of instrumentation, or whether it is enclosed by another instrumented operation.
IsCompilation | boolean | true if this operation appears to be a formula or column compilation.
StartTime | DateTime | The start time of the operation.
EndTime | DateTime | The end time of the operation.
Duration | long | The duration of the operation in nanoseconds.
CpuNanos | long | CPU time in nanoseconds used by threads while processing for this query/operation/update.
UserCpuNanos | long | User mode CPU time in nanoseconds used by threads while processing for this query/operation/update.
FreeMemoryChange | long | The difference in free memory in bytes between the beginning and end of the operation.
TotalMemoryChange | long | The difference in the JVM's total memory in bytes between the beginning and end of the operation.
AllocatedBytes | long | Memory in bytes allocated by threads while processing for this query/operation/update.
PoolAllocatedBytes | long | Reusable pool memory in bytes allocated by threads while processing for this query/operation/update.
InputSize | int | The size of the table being worked on, as an int.
InputSizeLong | long | The size of the table being worked on, as a long.
FirstTimeDataReads | long | Count of data block reads incurred by this operation, for blocks not previously read by this worker.
RepeatedDataReads | long | Count of data block reads incurred by this operation, for blocks previously read by this worker. These are blocks that have grown, or were otherwise no longer cached when needed.
AverageFirstTimeDataReadTime | double | Average read duration in nanoseconds for first-time data reads.
AverageRepeatedDataReadTime | double | Average read duration in nanoseconds for repeated data reads.
WasInterrupted | boolean | true if this operation was interrupted due to an error or cancellation.
QueryNumber | int | A number that starts at 0 and is incremented within each worker for each new query.
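A common use of this table is finding the operations that dominate a query's runtime. A Groovy console sketch; the column selection is illustrative:

```groovy
// Today's ten longest-running operations, in descending order of duration.
slowestOps = db.i("DbInternal", "QueryOperationPerformanceLog")
    .where("Date=currentDateNy()")
    .sortDescending("Duration")
    .head(10)
    .view("WorkerName", "Description", "Duration", "InputSizeLong")
```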

Query Performance Log

The Query Performance Log (QueryPerformanceLog) contains details on query-level performance for each worker. A given worker may be running multiple queries; each will have its own set of query performance log entries.

Configuration

RemoteQueryDispatcher.logQueryPerformance - specifies whether or not to log query performance.

Columns

Column Name | Column Type | Description
Date | String | The date of the event.
QueryId | long | An identifier that is incremented for each query started by a remote query dispatcher; the initial value is based on the time the dispatcher started.
DispatcherName | String | The name of the dispatcher that started the query.
ServerHost | String | The host on which the event was generated.
ClientHost | String | The client's host name.
PrimaryAuthenticatedUser | String | The authenticated user that is running the query.
PrimaryEffectiveUser | String | The effective user that is running the query.
OperationAuthenticatedUser | String | The authenticated user for this query operation.
OperationEffectiveUser | String | The effective user for this query operation.
RequestId | String | The query operation's request ID.
WorkerName | String | The name of the worker that is running the query.
ProcessInfoId | String | Key for joining with DbInternal/ProcessInfo on Id or DbInternal/ProcessMetrics on ProcessId.
RequestDescription | String | Indication of the request type (e.g., Console-<name>, PersistentQuery-<name>).
StartTime | DateTime | The start time of the operation.
EndTime | DateTime | The end time of the operation.
QueryClassName | String | The name of the class reporting the details.
ClientName | String | An identifier for the client which started this job.
JobName | String | An identifier for the job running this query within the remote query dispatcher.
WorkerProcessId | int | The host's process ID for the worker.
QueryNumber | int | A number that starts at 0 and is incremented within each worker for each new query.
Timeout | long | The maximum duration for the query.
Duration | long | The duration of this query performance log entry.
CpuNanos | long | CPU time in nanoseconds used by threads while processing for this query/operation/update.
UserCpuNanos | long | User mode CPU time in nanoseconds used by threads while processing for this query/operation/update.
RequestedHeapSize | long | The requested heap size in MB for this worker, usually based on the persistent query configuration or console parameters.
WorkerHeapSize | long | The actual heap size in MB for this worker.
TotalMemoryFree | long | The amount of free heap at the end of the operation. See https://docs.oracle.com/javase/8/docs/api/java/lang/Runtime.html#freeMemory--.
TotalMemoryUsed | long | The total amount of heap allocated by the JVM. See https://docs.oracle.com/javase/8/docs/api/java/lang/Runtime.html#totalMemory--.
FreeMemoryChange | long | The difference in free memory in bytes between the beginning and end of the operation.
TotalMemoryChange | long | The difference in the JVM's total memory in bytes between the beginning and end of the operation.
AllocatedBytes | long | Memory in bytes allocated by threads while processing for this query/operation/update.
PoolAllocatedBytes | long | Reusable pool memory in bytes allocated by threads while processing for this query/operation/update.
FirstTimeDataReads | long | Count of data block reads incurred by this query, for blocks not previously read by this worker.
RepeatedDataReads | long | Count of data block reads incurred by this query, for blocks previously read by this worker. These are blocks that have grown, or were otherwise no longer cached when needed.
AverageFirstTimeDataReadTime | double | Average read duration in nanoseconds for first-time data reads.
AverageRepeatedDataReadTime | double | Average read duration in nanoseconds for repeated data reads.
WasInterrupted | boolean | true if this query was interrupted due to an error or cancellation.
ResultClassName | String | The resulting class for the query (usually the resulting class sent to a client).
ResultSize | long | The size in bytes of the query's serialized result.
IsReplayer | boolean | This column is no longer used.
Exception | String | The exception details if one was generated.
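Since ProcessInfoId keys into the ProcessInfo table, the two can be correlated to recover a worker's startup conditions. A Groovy console sketch; the worker name worker_3 is hypothetical:

```groovy
// Query-level performance entries for today, plus one worker's startup info.
qpl = db.i("DbInternal", "QueryPerformanceLog").where("Date=currentDateNy()")
pinfo = db.i("DbInternal", "ProcessInfo").where("Date=currentDateNy()")
// Keep only ProcessInfo rows whose Id appears as a ProcessInfoId for worker_3.
workerInfo = pinfo.whereIn(qpl.where("WorkerName=`worker_3`"), "Id=ProcessInfoId")
```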

Resource Utilization

When a server selection provider is being used, the persistent query controller logs resource utilization events in the Resource Utilization (ResourceUtilization) table.

Columns

Column Name | Column Type | Description
Date | String | The date on which the state change occurred. This is the partitioning column.
Timestamp | DateTime | The timestamp of the event.
LoggingProcessName | String | The process logging the event (usually PersistentQueryController).
ResourceProcessName | String | The process name of the resource being tracked; this will be the dispatcher name.
HeapUsageMB | int | The resource's current heap utilization in MB.
WorkerCount | int | The resource's current worker count.
Comment | String | A comment explaining the reason for the update, including a description of the worker where available. Valid reasons include:
  • Dispatcher connections or loss of connection
  • Internal notifications of starting persistent queries
  • Internal notifications of stopping persistent queries

Update Performance Log

The Update Performance Log (UpdatePerformanceLog) contains aggregated performance details on incremental update operations performed in the LiveTableMonitor loop.

Update performance logging allows three types of logging.

  1. Database - for workers only, this writes update performance data to the UpdatePerformanceLog table's intraday data
  2. Log - the update performance data is printed to standard out
  3. Listener - the update performance data is supplied to registered UpdatePerformanceTracker.Listeners; this can be used to programmatically handle the performance logs in-process.

The mode is driven by the UpdatePerformanceTracker.reportingMode property, which can be set to the following values:

  • NONE
  • LOG_ONLY
  • DB_ONLY
  • LISTENER_ONLY
  • LOG_AND_LISTENER
  • DB_AND_LISTENER
  • LOG_AND_DB
  • ALL
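For example, to record update performance data in the database and also print it to standard out, a property file might contain the following (an illustrative setting):

```properties
UpdatePerformanceTracker.reportingMode=LOG_AND_DB
```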

Columns

Column Name | Column Type | Description
Date | String | The date of the event.
ServerHost | String | The host on which the event was generated.
DispatcherName | String | The name of the dispatcher that started the query.
WorkerName | String | The name of the worker that is running the query.
ProcessInfoId | String | Key for joining with DbInternal/ProcessInfo on Id or DbInternal/ProcessMetrics on ProcessId.
WorkerStartTime | DateTime | The time the worker started.
ClientHost | String | The client's host name.
PrimaryAuthenticatedUser | String | The authenticated user that is running the query.
PrimaryEffectiveUser | String | The effective user that is running the query.
QueryName | String | The name of the query (for persistent queries, the name assigned to the query).
EntryId | long | A unique identifier for this operation, which can be used to identify a single operation across intervals.
EntryDescription | String | A human-readable description of the operation.
EntryCallerLine | String | The class and line number that caused the generation of this log entry. A negative number indicates that the line number is not available.
IntervalStartTime | DateTime | The start time for this performance interval.
IntervalEndTime | DateTime | The end time for this performance interval.
IntervalDuration | long | How long, in nanoseconds, this performance interval took.
EntryIntervalUsage | long | How many nanoseconds this operation used within the given interval.
EntryIntervalCpuNanos | long | CPU time in nanoseconds used by threads while processing for this query/operation/update.
EntryIntervalUserCpuNanos | long | User mode CPU time in nanoseconds used by threads while processing for this query/operation/update.
EntryIntervalAdded | long | How many rows were added for this operation during the interval. Together, EntryIntervalAdded, EntryIntervalRemoved, EntryIntervalModified, and EntryIntervalShifted indicate how much data was processed by this operation during the interval.
EntryIntervalRemoved | long | How many rows were removed for this operation during the interval.
EntryIntervalModified | long | How many rows were modified for this operation during the interval.
EntryIntervalShifted | long | How many rows were affected by shifts for this operation during the interval.
EntryIntervalInitialDataReads | long | Count of data block reads incurred by this operation during the interval, for blocks not previously read by this worker.
EntryIntervalRepeatDataReads | long | Count of data block reads incurred by this operation during the interval, for blocks previously read by this worker. These are blocks that have grown, or were otherwise no longer cached when needed.
EntryIntervalAverageInitialDataReadTime | double | Average read duration in nanoseconds for first-time data reads incurred by this operation during the interval.
EntryIntervalAverageRepeatDataReadTime | double | Average read duration in nanoseconds for repeated data reads incurred by this operation during the interval.
TotalMemoryFree | long | The amount of free heap at the end of the last operation in this interval.
TotalMemoryUsed | long | The amount of total JVM heap at the end of the last operation in this interval.
EntryIntervalAllocatedBytes | long | Memory in bytes allocated by threads while processing for this query/operation/update.
EntryIntervalPoolAllocatedBytes | long | Reusable pool memory in bytes allocated by threads while processing for this query/operation/update.

WorkspaceData

The WorkspaceData table contains saved details from the web interface user workspaces. This table is updated automatically when a user's workspace is saved, and normally will not need to be queried directly. Each time a workspace is saved or deleted a row is added to this table, and the latest row for a given workspace indicates its current state.
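Because each save or delete appends a row, the current state of every workspace is the last row per workspace identifier. A Groovy console sketch, limited to today's intraday partition (a full history would also need the historical partitions):

```groovy
// Latest row per workspace Id gives each workspace's current state.
currentWorkspaces = db.i("DbInternal", "WorkspaceData")
    .where("Date=currentDateNy()")
    .lastBy("Id")
```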

Columns

Column Name | Column Type | Description
Date | String | The date on which the row was generated (as defined by the LastModifiedTime column value). This is the partitioning column.
Owner | String | The workspace owner.
Name | String | The workspace name.
Id | String | A system-assigned identifier that uniquely identifies this workspace.
Version | int | The workspace version.
DataType | String | The system-assigned data type for the saved data.
Data | String | The saved workspace data.
Status | String | The status, which indicates whether the row is for an active or deleted workspace.
AdminGroups | String[] | The groups which are allowed to administer this entry.
ViewerGroups | String[] | The groups which are allowed to view this entry.
LastModifiedByAuthenticated | String | The authenticated user who created this row.
LastModifiedByEffective | String | The effective user who created this row.
LastModifiedTime | DateTime | The date and time when this row was created.