Internal tables
All Deephaven installations include several tables for the system's internal use. These internal tables are updated by various Deephaven processes, and contain details about the processes and workers. The schemas for these tables should never be changed by the customer as this will result in Deephaven failures. All tables are stored in the DbInternal namespace.
Just like other tables, authorized users can run queries against these internal tables. In a standard installation, a user can query entries related to their username, while a superuser can query all entries in a table.
Audit Event Log
The Audit Event Log (`AuditEventLog`) contains specific audit events from Deephaven processes. Each row contains details on one audit event.
Configuration
Any process that can write to the Audit Event Log can override several configuration items. All configuration overrides are based on the process name or main class name.

- `<process name>.writeDatabaseAuditLogs` - if `true`, audit event logs will be created; if `false`, they will not.
- `<process name>.useLas` - if `true`, audit events will be written through the Log Aggregator Service (LAS); if `false`, they will be written directly to binary log files.
- `<process name>.useMainClassNameForLogs` - whether to use the main class name for log entries; if `false`, the value retrieved from the `process.name` property is used instead of the class name.
- `RemoteQueryProcessor.logCommands` - if workers are writing audit events, defines whether all received commands are logged. The default value is `false`.
- `LocalInternalPartition` - if specified, defines the internal partition to which data is written; if not defined, a local partition based on the host name is used.
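As an illustration, the overrides above could be combined in a property-file fragment like the following. This is a hypothetical example for the Data Import Server (its main class name, `DataImportServer`, appears in the table below); exact property-file locations and defaults depend on your installation.

```properties
# Write audit event logs for the Data Import Server
DataImportServer.writeDatabaseAuditLogs=true
# Bypass the Log Aggregator Service and write binary logs directly
DataImportServer.useLas=false
# Log entries under the main class name rather than the process.name property
DataImportServer.useMainClassNameForLogs=true
```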
The following processes have the option of writing audit logs.
Process | Main Class Name |
---|---|
ACL Write Server | DbAclWriteServer |
Authentication Server | AuthenticationServer |
Data Import Server | DataImportServer |
Local Table Data Server | LocalTableDataServer |
Log Aggregator Service | LogAggregatorService |
Persistent Query Controller | PersistentQueryController |
Query Workers | RemoteQueryProcessor |
Remote Query Dispatcher (includes query and merge servers) | RemoteQueryDispatcher |
Remote User Table Server | RemoteUserTableServer |
Table Data Cache Proxy | TableDataCacheProxy |
Tailer | LogtailerMain |
Columns
Not all columns will apply to all events; only those applicable for a given event will be filled in and the rest will contain null values. For example, client hostnames and ports are only applicable to events which apply to client requests.
Column Name | Column Type | Description |
---|---|---|
Date | String | The date on which the audit event was generated. This is the partitioning column. |
Timestamp | DateTime | The timestamp for the event. |
ClientHost | String | The client's host name. |
ClientPort | int | The client's port number. |
ServerHost | String | The server's host name. |
ServerPort | int | The server's port number. |
Process | String | The process name generating the event. This will be either the value retrieved from the process.name property or the main class name. |
AuthenticatedUser | String | If available, the authenticated user for the logged event. |
EffectiveUser | String | If available, the effective user for the logged event. |
Namespace | String | If applicable, the namespace for the logged event. |
Table | String | If applicable, the table name for the logged event. |
Id | int | If applicable, the ID for the logged event. |
Event | String | The name of the event. See Auditable events by process for information on each event type. |
Details | String | Further details on the logged event. |
Auditable events by process
Each process logs specific events by name; this section defines the names that appear in the `Event` column and what each means.

All processes writing audit events write the following events. Some processes write additional events, as described below.

- `INITIALIZING` - the process is initializing.
- `RUNNING` - the process is running and starting to process normally.
- `SHUTTING_DOWN` - the process is shutting down.
ACL Write Server
- Add ACL - add an ACL
- Add group strategy - add a group to a strategy
- Add input table editor - add an input table editor group
- Add member - add a member to one or more groups
- Add strategy account - add an account to a strategy
- Add user - add a new user
- Change password - change a user's password
- Delete ACL - delete an ACL
- Delete group - delete a group
- Delete group strategy - remove a group from a strategy
- Delete strategy account - delete an account from a strategy
- Delete input table editor - delete an input table editor group
- Delete user - delete a user
- Remove member - remove a member from one or more groups
- Starting server - a server is starting to listen for ACL requests
- Update ACL - update an ACL
- Update input table editor - update an input table editor group
Authentication Server
- Client registration - a client registered with the authentication server
- Client termination - a client terminated
- Starting server - a server is starting to listen for authentication requests
Persistent Query Controller
- Client registration - a client registered with the persistent query controller
- Client termination - a client terminated
- Send script - a persistent query script is being sent to a client
Remote Query Dispatcher
- Classpath additions - the classpath additions used for a worker start
- Extra JVM arguments - any extra JVM arguments being used to start a worker
- Pushed classes - the classes being pushed to a starting worker
- Starting worker - a worker is being started
Worker audit events (RemoteQueryProcessor)
- ACL details - full details on the ACLs for a user connected to the worker
- Client async query - a client sent an asynchronous query request
- Client command - a command was received from a client; only logged if the `RemoteQueryProcessor.logCommands` property is `true`.
- Client disconnection - a client disconnected from the worker
- Client sync query - a client sent a synchronous query request
- Command cancel - a worker command was cancelled
- Disconnect - a client disconnected from the worker
- Primary client - the worker's primary client connected
- Secondary client - a secondary client requested an action from the worker; the details contain further information
- Script source (controller) - a script was loaded by requesting it from the persistent query controller; if an exception occurred, it will be shown in the details
- Script source (remote) - a script was loaded from a remote source, which may be a console
- Table Access - a user attempted to access a table
Worker audit events from WorkspaceData table updates

These events come from `WorkspaceData` table updates. Successful writes are not audited, as they have already been written to the `WorkspaceData` table.

- WorkspaceData Authorization Failure - an unauthorized user tried to publish a change to the `WorkspaceData` table
- WorkspaceData Write Failure - an unexpected error occurred writing a record to the `WorkspaceData` table
Persistent Query Configuration Log
Every time a persistent query is created or modified, details on the query are stored in the Persistent Query Configuration Log (`PersistentQueryConfigurationLogV2`) table. This log is used by Deephaven when a query is reverted to a previous version.
Columns
Column Name | Column Type | Description |
---|---|---|
Date | String | The date on which the state change occurred. This is the partitioning column. |
Timestamp | DateTime | The timestamp for the state change. |
SerialNumber | long | The query's serial number (a system-generated identifier unique to each persistent query). |
VersionNumber | long | The query's version number (a number that starts at 1 for each query and is incremented each time the query is updated). |
Owner | String | The owner of the persistent query. |
Name | String | The name of the persistent query. |
EventType | long | The query configuration event that triggered the log entry. |
Enabled | boolean | Whether the persistent query is enabled. |
HeapSizeInGB | double | The heap size for the query in GB (the memory the query has available) |
DataBufferPoolToHeapSizeRatio | double | The data buffer to heap size ratio - this is the fraction of the query's heap that will be dedicated to caching binary data from the underlying Deephaven data sources. |
DetailedGCLoggingEnabled | boolean | If `true`, Java garbage collection details are logged for the query. |
OmitDefaultGCParameters | boolean | This parameter is deprecated. |
DbServerName | String | The server on which the query will be run; this is shown as `DB_SERVER_<number>`, and these names are translated to physical or virtual servers by the controller. |
RestartUsers | String | Specifies which group of users is allowed to restart the query. |
ScriptCode | String | The code for the script if it is not stored in git. |
ScriptPath | String | The script path if it is stored in git. |
ExtraJvmArguments | String[] | Extra JVM arguments to be passed to Java when the query is started. |
ExtraEnvironmentVariables | String[] | Environment variables to be set before the query is started. |
ClassPathAdditions | String[] | Additional elements to be added to the class path for the JVM. |
AdminGroups | String[] | Additional administrators for the query (i.e., users or groups that can edit, start, stop, or delete the query). |
ViewerGroups | String[] | Additional viewers for the query (i.e., users or groups that can view the query and its resulting tables). |
ConfigurationType | String | The type of the configuration. |
Scheduling | String[] | Specifies the scheduling details for the query. |
Timeout | long | Timeout value in milliseconds. |
TypeSpecificFields | java.util.Map | Map for fields specific to configuration types. |
JVMProfile | String | The JVM profile to be used for the query. |
LastModifiedByAuthenticated | String | The authenticated user who last modified this query. |
LastModifiedByEffective | String | The effective user who last modified this query. |
LastModifiedTime | DateTime | The last time this query was modified. |
Persistent Query State Log
When a query worker is started or stopped, it goes through a series of state changes, representing the worker's status. Each state represents a specific condition, and every state change for every persistent query is stored in the Persistent Query State Log (PersistentQueryStateLog). Current states are also visible in the status column in the Query Configuration Deephaven console panel. States include:
- Uninitialized - the query has no status as it has not been run
- Connecting - the controller is connecting
- Authenticating - the query is authenticating with the authentication server
- AcquiringWorker - the dispatcher is creating a worker process for the query
- Initializing - the worker process for the query is initializing
- Running - the query worker is running (script query types only)
- Failed - the query worker failed to start correctly; an exception should be visible
- Error - the query generated an error
- Disconnected - the query worker disconnected unexpectedly. This may occur, for example, if the JVM experiences an OutOfMemoryError, a Hotspot error (possibly caused by a buggy native module), or if the garbage collector pauses the JVM for longer than the heartbeat interval between the worker and dispatcher.
- Stopped - the query stopped normally (Live Query (Script) query types only)
- Completed - the query completed successfully [Batch Query (RunAndDone) query types only]
- Executing - the query is executing [Batch Query (RunAndDone) query types only]
The Persistent Query State Log (`PersistentQueryStateLog`) contains every state change that has occurred for all persistent queries.
Columns
Column Name | Column Type | Description |
---|---|---|
Date | String | The date on which the state change occurred. This is the partitioning column. |
Owner | String | The owner of the persistent query. |
Name | String | The name of the persistent query. |
Timestamp | DateTime | The timestamp for the state change. |
Status | String | The new query status. |
ServerHost | String | The host on which the query ran. |
WorkerName | String | The worker's name. |
WorkerPort | int | The worker's port for connections. |
LastModifiedByAuthenticated | String | The authenticated user who last modified this query. |
LastModifiedByEffective | String | The effective user who last modified this query. |
SerialNumber | long | The query's serial number (a system-generated identifier unique to each persistent query). |
VersionNumber | long | The query's version number (a number that starts at 1 for each query and is incremented each time the query is updated). |
TypeSpecificState | String | A type-specific state value; this won't apply to most queries. |
ExceptionMessage | String | If applicable, the exception message for a failed query. |
ExceptionStackTrace | String | If applicable, the full stack trace for a failed query. |
Process Event Log
The Process Event Log (`ProcessEventLog`) contains all log messages from processes. Currently, only query workers and query servers can write their logs to the Process Event Log.
Configuration
The following configuration parameters define the process event log configuration.
- `RemoteQueryProcessor.sendLogsToSystemOut` - if defined and set to `true`, tells query workers to send their logs to standard output. This cannot be used when writing to the process event log.
Any process that can write to the process event log can override several configuration items. All configuration overrides should be based on the process name or main class name.
- `<process name>.writeDatabaseProcessLogs` - if `true`, process event logs will be created; if `false`, they will not.
- `<process name>.useLas` - if `true`, log events will be written through the LAS; if `false`, they will be written directly to binary log files.
- `<process name>.useMainClassNameForLogs` - whether to use the class name for log entries; if `false`, the value retrieved from the `process.name` property is used instead of the class name.
- `<process name>.logLevel` - the minimum log level that will be written. The default value is `INFO`. Allowed values are: `FATAL`, `EMAIL`, `STDERR`, `ERROR`, `WARN`, `STDOUT`, `INFO`, `DEBUG`, `TRACE`.
- `<process name>.captureLog4j` - if `true`, any output sent to the Log4j logger is written into the process event log.
- `<process name>.captureSysout` - if `true`, any system output is written into the process event log.
- `<process name>.captureSyserr` - if `true`, any error output is written into the process event log.
- `<process name>.aliveMessageSeconds` - if non-zero, a message is periodically written to the process event log indicating that the process is still alive.
- `LocalInternalPartition` - if specified, defines the internal partition to which data is written; if not defined, a local partition based on the host name is used.
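The `logLevel` property behaves as an ordered severity threshold. As a rough sketch (not Deephaven code; the ordering below simply mirrors the documented list of allowed values, most to least severe):

```python
# Severity order mirroring the documented allowed values, most to least severe.
LEVELS = ["FATAL", "EMAIL", "STDERR", "ERROR", "WARN",
          "STDOUT", "INFO", "DEBUG", "TRACE"]
RANK = {name: i for i, name in enumerate(LEVELS)}

def passes(event_level: str, minimum: str = "INFO") -> bool:
    """Return True if an event at event_level meets the minimum threshold."""
    return RANK[event_level] <= RANK[minimum]

# With the default minimum of INFO, DEBUG and TRACE events are dropped.
print(passes("WARN"))   # True
print(passes("DEBUG"))  # False
```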
The same set of processes that can write to the audit event log can write to the process event log.
Columns
Column Name | Column Type | Description |
---|---|---|
Date | String | The date on which the log event was generated. This is the partitioning column. |
Timestamp | DateTime | The timestamp for the logged event. |
Host | String | The host name for the logged event. |
Level | String | The level for the event. This is usually one of the standard log levels (`INFO`, `WARN`, etc.), but in the case of worker output logged by the query server, it will instead indicate the level of captured output (`STDOUT` or `STDERR`). |
Process | String | The name of the process that generated the event (e.g., `RemoteQueryDispatcher`, `worker_1`). |
LastModifiedByAuthenticated | String | The authenticated user who last modified this query. |
LastModifiedByEffective | String | The effective user who last modified this query. |
LogEntry | String | The logged event. |
Write entries to CSV files
It is possible to write ProcessEventLog entries to CSV files. To turn this on, specify the following property for the dispatchers and workers:
ProcessEventLog.interceptor=com.illumon.iris.db.util.logging.ProcessEventLogInterceptorCsv
Also specify the full path of a directory where the CSV files will be written, using the property below. This directory must be writable by all the processes that generate these files; typically, making it group-writable by `dbmergegrp` is adequate:
ProcessEventLog.interceptor.csv.directory=/path/to/directory
CSV file names will consist of the following pattern:
<PQ name if available>-<process name>-<host name>-<optional GUID>.date/timestamp
Some messages (during initial worker startup and shutdown) will be logged in the dispatcher’s log instead of the workers' logs.
The following properties define the behavior:
- `ProcessEventLog.interceptor.csv.format` - an optional CSV format, from `org.apache.commons.csv.CSVFormat#Predefined`. If none is specified, the default is Excel.
- `ProcessEventLog.interceptor.csv.delimiter` - an optional delimiter. If none is specified (the property is missing or commented out), the default is taken from the CSV format. Delimiters must be a single character.
- `ProcessEventLog.interceptor.csv.queueCapacity` - to ensure that CSV writes do not affect performance, all CSV operations are submitted to a queue and performed off-thread. This property specifies the queue's capacity. If the capacity is exceeded (because the writer thread can't keep up), further writes hold up the process until queue capacity is available. The default queue capacity is 1,000.
- `ProcessEventLog.interceptor.csv.rolloverDaily` - if `true`, the CSV files roll over daily. The default is `true`. If the files roll over daily (or not at all), the date/timestamp is in the format `yyyy-MM-dd`, such as 2021-04-03.
- `ProcessEventLog.interceptor.csv.rolloverHourly` - if `true`, the CSV files roll over hourly. This takes precedence over daily rollover. The default is `false`. If the files roll over hourly, the date/timestamp includes a time and offset, such as 2021-04-29.150000.000-0400.
- `ProcessEventLog.interceptor.csv.timeZone` - the time zone used for filenames and timestamps in the CSV files. The default is the system default time zone. This is the text name of the time zone, such as `America/New_York`.
- `ProcessEventLog.interceptor.csv.flushMessages` - how frequently (in messages) to flush the queue to disk; the queue is always flushed when it is emptied. The default value is 100.
If you are not seeing CSV files being created, we recommend the following steps:

- Check the most recent startup log or the process event log for the worker. If there is a configuration error in the interceptor properties, it will most likely show up there. The interceptor is designed not to prevent process and worker startup if it is misconfigured.
- Check the permissions on the directory to which CSV files are written. It must be writable by all the processes, typically via the `dbmergegrp` group.
- Since the PQ name is part of the filename, characters that are special in Linux file paths can cause issues. For example, a forward slash (`/`) is interpreted as a directory separator; in this case, appropriate subdirectories must be created to hold the CSV files.
Process Info
The Process Info table (`ProcessInfo`) captures the system properties, JVM arguments, memory and memory pool information, and other initial conditions of processes on startup. Its information is intended primarily for debugging.

To disable the `ProcessInfo` table, set the following property to `false`:

IrisLogDefaults.writeDatabaseProcessInfo=false
Columns
Column Name | Column Type | Description |
---|---|---|
Date | String | The date on which the process was started. This is the partitioning column. |
ID | String | The randomly generated process info ID. This will be globally unique. |
Type | String | The generic type. |
Key | String | The generic key. |
Value | String | The generic value. |
Process Metrics
The Process Metrics table (`ProcessMetrics`) captures internal metrics that were previously written to the `stats.log` CSV.

To disable the `ProcessMetrics` table, set the following property to `false`:

IrisLogDefaults.writeDatabaseProcessMetrics=false
Columns
Column Name | Column Type | Description |
---|---|---|
Date | String | The date on which the information was generated. This is the partitioning column. |
Timestamp | DateTime | The timestamp for this event. |
ProcessID | String | The ProcessInfo ID that generated this event. |
Name | String | The name of the metric. |
Interval | String | The timing interval for the metric. |
Type | String | The type of the metric. |
N | long | The number of samples recorded in the interval. |
Sum | long | The sum of the recorded values. |
Last | long | The last recorded value. |
Min | long | The minimum recorded value. |
Max | long | The maximum recorded value. |
Avg | long | The average of the recorded values. |
Sum2 | long | The sum of the squares of the recorded values. |
Stdev | long | The standard deviation of the recorded values. |
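These columns follow a standard statistics-accumulator pattern. As an illustration only (the semantics are assumed from the column names; the table itself does not spell them out, and whether `Stdev` is a population or sample deviation is not documented), the values for one interval could be derived from raw samples like this:

```python
import math

def summarize(samples: list) -> dict:
    """Compute interval statistics matching the assumed ProcessMetrics
    column semantics: N, Sum, Last, Min, Max, Avg, Sum2, Stdev."""
    n = len(samples)
    total = sum(samples)
    sum2 = sum(x * x for x in samples)
    avg = total / n
    # Population variance via the sum-of-squares identity (assumption).
    variance = sum2 / n - avg * avg
    return {
        "N": n, "Sum": total, "Last": samples[-1],
        "Min": min(samples), "Max": max(samples),
        "Avg": avg, "Sum2": sum2,
        "Stdev": math.sqrt(max(variance, 0.0)),
    }

print(summarize([2, 4, 6]))
```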
Query Operation Performance Log
The Query Operation Performance Log (`QueryOperationPerformanceLog`) contains performance details on Deephaven query operations. Each query is broken up into its component parts for this log, allowing an in-depth understanding of the performance impact of each individual operation within a query.
Columns
Column Name | Column Type | Description |
---|---|---|
Date | String | The date of the event |
QueryId | long | The ID of the query that logged the event. This is a value assigned by the system. |
DispatcherName | String | The name of the dispatcher that started the query. |
ServerHost | String | The host on which the event was generated. |
ClientHost | String | The client's host name. |
PrimaryAuthenticatedUser | String | The authenticated user that is running the query. |
PrimaryEffectiveUser | String | The effective user that is running the query. |
OperationAuthenticatedUser | String | The authenticated user for this query operation. |
OperationEffectiveUser | String | The effective user for this query operation. |
RequestId | String | The query operation's request ID. |
WorkerName | String | The name of the worker running the query. |
ProcessInfoId | String | Key for joining with DbInternal/ProcessInfo on Id or DbInternal/ProcessMetrics on ProcessId. |
OperationNumber | int | An increasing number that indicates the order of operations. |
Description | String | Information on the specific operation. |
CallerLine | String | An automatically-determined "caller line" of code - the first element in the stack that does not begin with com.illumon.iris.db . |
IsTopLevel | boolean | Whether this operation is at the highest level of instrumentation, or whether it is enclosed by another instrumented operation. |
IsCompilation | boolean | true if this operation appears to be a formula or column compilation. |
StartTime | DateTime | The start time of the operation. |
EndTime | DateTime | The end time of the operation. |
Duration | long | The duration of the operation in nanoseconds. |
CpuNanos | long | CPU time in nanoseconds used by threads while processing for this query/operation/update. |
UserCpuNanos | long | User mode CPU time in nanoseconds used by threads while processing for this query/operation/update. |
FreeMemoryChange | long | The difference in free memory in bytes between the beginning and end of the operation. |
TotalMemoryChange | long | The difference in the JVM's total memory in bytes between the beginning and end of the operation. |
AllocatedBytes | long | Memory in bytes allocated by threads while processing for this query/operation/update. |
PoolAllocatedBytes | long | Reusable pool memory in bytes allocated by threads while processing for this query/operation/update. |
InputSize | int | The size of the table being worked on as an int. |
InputSizeLong | long | The size of the table being worked on as a long. |
FirstTimeDataReads | long | Count of data block reads incurred by this operation, for blocks not previously read by this worker. |
RepeatedDataReads | long | Count of data block reads incurred by this operation, for blocks previously read by this worker. These are blocks that have grown, or were otherwise no longer cached when needed. |
AverageFirstTimeDataReadTime | double | Average read duration in nanoseconds for first time data reads. |
AverageRepeatedDataReadTime | double | Average read duration in nanoseconds for repeated data reads. |
WasInterrupted | boolean | true if this operation was interrupted due to an error or cancellation. |
QueryNumber | int | A number which starts at 0 and is incremented within each worker for each new query. |
Query Performance Log
The Query Performance Log (`QueryPerformanceLog`) contains details on query-level performance for each worker. A given worker may be running multiple queries; each will have its own set of query performance log entries.
Configuration
- `RemoteQueryDispatcher.logQueryPerformance` - specifies whether to log query performance.
Columns
Column Name | Column Type | Description |
---|---|---|
Date | String | The date of the event |
QueryId | long | An identifier that is incremented for each query started by a remote query dispatcher; the initial value is based on the time a dispatcher started. |
DispatcherName | String | The name of the dispatcher that started the query. |
ServerHost | String | The host on which the event was generated. |
ClientHost | String | The client's host name. |
PrimaryAuthenticatedUser | String | The authenticated user that is running the query. |
PrimaryEffectiveUser | String | The effective user that is running the query. |
OperationAuthenticatedUser | String | The authenticated user for this query operation. |
OperationEffectiveUser | String | The effective user for this query operation. |
RequestId | String | The query operation's request ID. |
WorkerName | String | The name of the worker that is running the query. |
ProcessInfoId | String | Key for joining with DbInternal/ProcessInfo on Id or DbInternal/ProcessMetrics on ProcessId. |
RequestDescription | String | An indication of the request type (e.g., `Console-<name>`, `PersistentQuery-<name>`). |
StartTime | DateTime | The start time of the operation. |
EndTime | DateTime | The end time of the operation. |
QueryClassName | String | The name of the class reporting the details. |
ClientName | String | An identifier for the client which started this job. |
JobName | String | An identifier for the job running this query within the remote query dispatcher. |
WorkerProcessId | int | The host's process ID for the worker. |
QueryNumber | int | A number which starts at 0 and is incremented within each worker for each new query. |
Timeout | long | The maximum duration for the query. |
Duration | long | The duration of this query performance log entry. |
CpuNanos | long | CPU time in nanoseconds used by threads while processing for this query/operation/update. |
UserCpuNanos | long | User mode CPU time in nanoseconds used by threads while processing for this query/operation/update. |
RequestedHeapSize | long | The requested heap size in MB for this worker, usually based on the persistent query configuration or console parameters. |
WorkerHeapSize | long | The actual heap size in MB for this worker. |
TotalMemoryFree | long | The amount of free heap at the end of the operation. See https://docs.oracle.com/javase/8/docs/api/java/lang/Runtime.html#freeMemory--. |
TotalMemoryUsed | long | The total amount of heap allocated by the JVM. See https://docs.oracle.com/javase/8/docs/api/java/lang/Runtime.html#totalMemory--. |
FreeMemoryChange | long | The difference in free memory in bytes between the beginning and end of the operation. |
TotalMemoryChange | long | The difference in the JVM's total memory in bytes between the beginning and end of the operation. |
AllocatedBytes | long | Memory in bytes allocated by threads while processing for this query/operation/update. |
PoolAllocatedBytes | long | Reusable pool memory in bytes allocated by threads while processing for this query/operation/update. |
FirstTimeDataReads | long | Count of data block reads incurred by this query, for blocks not previously read by this worker. |
RepeatedDataReads | long | Count of data block reads incurred by this query, for blocks previously read by this worker. These are blocks that have grown, or were otherwise no longer cached when needed. |
AverageFirstTimeDataReadTime | double | Average read duration in nanoseconds for first time data reads. |
AverageRepeatedDataReadTime | double | Average read duration in nanoseconds for repeated data reads. |
WasInterrupted | boolean | true if this query was interrupted due to an error or cancellation. |
ResultClassName | String | The resulting class for the query (usually the resulting class sent to a client). |
ResultSize | long | The size in bytes of the query's serialized result. |
IsReplayer | boolean | This column is no longer used. |
Exception | String | The exception details if one was generated. |
Resource Utilization
When a server selection provider is in use, the persistent query controller logs resource utilization events in the Resource Utilization (`ResourceUtilization`) table.
Columns
Column Name | Column Type | Description |
---|---|---|
Date | String | The date on which the state change occurred. This is the partitioning column. |
Timestamp | DateTime | The timestamp of the event. |
LoggingProcessName | String | The process logging the event (usually PersistentQueryController ). |
ResourceProcessName | String | The process name of the resource being tracked; this will be the dispatcher name. |
HeapUsageMB | int | The resource's current heap utilization in MB. |
WorkerCount | int | The resource's current worker count. |
Comment | String | A comment explaining the reason for the update, including a description of the worker where available. |
Update Performance Log
The Update Performance Log (`UpdatePerformanceLog`) contains aggregated performance details on incremental update operations performed in the `LiveTableMonitor` loop.
Update performance logging supports three destinations:

- Database - for workers only, this writes update performance data to the `UpdatePerformanceLog` table's intraday data.
- Log - the update performance data is printed to standard out.
- Listener - the update performance data is supplied to registered `UpdatePerformanceTracker.Listener`s; this can be used to programmatically handle the performance logs in-process.
The mode is driven by the `UpdatePerformanceTracker.reportingMode` property, which can be set to the following values: `NONE`, `LOG_ONLY`, `DB_ONLY`, `LISTENER_ONLY`, `LOG_AND_LISTENER`, `DB_AND_LISTENER`, `LOG_AND_DB`, or `ALL`.
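The mode names combine the three destinations above. A small sketch of how each documented value maps onto the set of enabled destinations (an illustration of the naming scheme, not Deephaven's actual parser):

```python
# Map each documented reportingMode value onto the destinations it enables.
MODES = {
    "NONE": set(),
    "LOG_ONLY": {"LOG"},
    "DB_ONLY": {"DB"},
    "LISTENER_ONLY": {"LISTENER"},
    "LOG_AND_LISTENER": {"LOG", "LISTENER"},
    "DB_AND_LISTENER": {"DB", "LISTENER"},
    "LOG_AND_DB": {"LOG", "DB"},
    "ALL": {"LOG", "DB", "LISTENER"},
}

def destinations(reporting_mode: str) -> set:
    """Return the set of destinations enabled by a reportingMode value."""
    return MODES[reporting_mode]

print(destinations("LOG_AND_DB"))
```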
Columns
Column Name | Column Type | Description |
---|---|---|
Date | String | The date of the event |
ServerHost | String | The host on which the event was generated. |
DispatcherName | String | The name of the dispatcher that started the query. |
WorkerName | String | The name of the worker that is running the query. |
ProcessInfoId | String | Key for joining with DbInternal/ProcessInfo on Id or DbInternal/ProcessMetrics on ProcessId. |
WorkerStartTime | DateTime | The time the worker started. |
ClientHost | String | The client's host name. |
PrimaryAuthenticatedUser | String | The authenticated user that is running the query. |
PrimaryEffectiveUser | String | The effective user that is running the query. |
QueryName | String | The name of the query (for persistent queries, the name assigned to the query). |
EntryId | long | A unique identifier for this operation, which can be used to identify a single operation across intervals. |
EntryDescription | String | A human-readable description of the operation. |
EntryCallerLine | String | The class and line number that caused the generation of this log entry. A negative number indicates that the line number is not available. |
IntervalStartTime | DateTime | The start time for this performance interval. |
IntervalEndTime | DateTime | The end time for this performance interval. |
IntervalDuration | long | How long, in nanoseconds, this performance interval took. |
EntryIntervalUsage | long | How many nanoseconds this operation used within the given interval. |
EntryIntervalCpuNanos | long | CPU time in nanoseconds used by threads while processing for this query/operation/update. |
EntryIntervalUserCpuNanos | long | User mode CPU time in nanoseconds used by threads while processing for this query/operation/update. |
EntryIntervalAdded | long | How many rows were added for this operation during the interval. Together, `EntryIntervalAdded`, `EntryIntervalRemoved`, `EntryIntervalModified`, and `EntryIntervalShifted` indicate how much data was processed by this operation during the interval. |
EntryIntervalRemoved | long | How many rows were removed for this operation during the interval. |
EntryIntervalModified | long | How many rows were modified for this operation during the interval. |
EntryIntervalShifted | long | How many rows were affected by shifts for this operation during the interval. |
EntryIntervalInitialDataReads | long | Count of data block reads incurred by this operation during the interval, for blocks not previously read by this worker. |
EntryIntervalRepeatDataReads | long | Count of data block reads incurred by this operation during the interval, for blocks previously read by this worker. These are blocks that have grown, or were otherwise no longer cached when needed. |
EntryIntervalAverageInitialDataReadTime | double | Average read duration in nanoseconds for first time data reads incurred by this operation during the interval. |
EntryIntervalAverageRepeatDataReadTime | double | Average read duration in nanoseconds for repeated data reads incurred by this operation during the interval. |
TotalMemoryFree | long | The amount of free heap at the end of the last operation in this interval. |
TotalMemoryUsed | long | The amount of total JVM heap at the end of the last operation in this interval. |
EntryIntervalAllocatedBytes | long | Memory in bytes allocated by threads while processing for this query/operation/update. |
EntryIntervalPoolAllocatedBytes | long | Reusable pool memory in bytes allocated by threads while processing for this query/operation/update. |
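Like other internal tables, authorized users can query this table directly. The following is a minimal sketch for a Groovy console session on a running Deephaven installation; it assumes the standard `db.i` intraday accessor, and the worker name shown is illustrative:

```groovy
// Fetch today's update performance data for one worker (worker name is illustrative)
upl = db.i("DbInternal", "UpdatePerformanceLog")
        .where("Date=currentDateNy()", "WorkerName=`worker_1`")

// Rank operations by the fraction of each interval they consumed
busiest = upl.updateView("Ratio=EntryIntervalUsage/(double)IntervalDuration")
             .sortDescending("Ratio")
```

Operations with a ratio near 1.0 dominate their interval and are the usual starting point for query tuning.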
WorkspaceData
The `WorkspaceData` table contains saved details from the web interface user workspaces. This table is updated automatically when a user's workspace is saved, and normally does not need to be queried directly. Each time a workspace is saved or deleted, a row is added to this table; the latest row for a given workspace indicates its current state.
Columns
Column Name | Column Type | Description |
---|---|---|
Date | String | The date on which the row was generated (as defined by the LastModifiedTime column value). This is the partitioning column. |
Owner | String | The workspace owner. |
Name | String | The workspace name. |
Id | String | A system-assigned identifier that uniquely identifies this workspace. |
Version | int | The workspace version. |
DataType | String | The system-assigned data type for the saved data. |
Data | String | The saved workspace data. |
Status | String | The status, which indicates whether it is for an active or deleted workspace. |
AdminGroups | String[] | The groups which are allowed to administer this entry. |
ViewerGroups | String[] | The groups which are allowed to view this entry. |
LastModifiedByAuthenticated | String | The authenticated user who created this row. |
LastModifiedByEffective | String | The effective user who created this row. |
LastModifiedTime | DateTime | The date and time when this row was created. |
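Because each save or delete appends a new row, the current state of every workspace can be recovered by taking the most recent row per workspace. A hedged sketch for a Groovy console, again assuming the `db.i` intraday accessor; the `Status` value filtered on is illustrative, not a documented constant:

```groovy
// Last appended row per (Owner, Name) is that workspace's current state
current = db.i("DbInternal", "WorkspaceData")
            .where("Date=currentDateNy()")
            .lastBy("Owner", "Name")
            .where("Status=`Active`")   // illustrative status value
```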