Query table configuration

This guide discusses how to control various QueryTable features that affect your Deephaven tables' latency and throughput.

QueryTable

QueryTable is Deephaven's primary implementation of our Table API.

The QueryTable has the following user-configurable properties:

| Category | Property | Default |
| --- | --- | --- |
| Memoization | QueryTable.memoizeResults | true |
| Redirection | QueryTable.redirectUpdate | false |
| Redirection | QueryTable.redirectSelect | false |
| Redirection | QueryTable.maximumStaticSelectMemoryOverhead | 1.1 |
| DataIndex | QueryTable.useDataIndexForWhere | true |
| DataIndex | QueryTable.useDataIndexForAggregation | true |
| DataIndex | QueryTable.useDataIndexForJoins | true |
| Pushdown predicates with where | QueryTable.disableWherePushdownDataIndex | false |
| Pushdown predicates with where | QueryTable.disableWherePushdownParquetRowGroupMetadata | false |
| Pushdown predicates with where | QueryTable.disableWherePushdownParquetDictionary | false |
| Pushdown predicates with where | QueryTable.disableWherePushdownMergedTables | false |
| Parallel processing with where | QueryTable.disableParallelWhere | false |
| Parallel processing with where | QueryTable.parallelWhereRowsPerSegment | 1 << 16 |
| Parallel processing with where | QueryTable.parallelWhereSegments | -1 |
| Parallel processing with where | QueryTable.forceParallelWhere (test-focused) | false |
| Parallel processing with select | QueryTable.enableParallelSelectAndUpdate | true |
| Parallel processing with select | QueryTable.minimumParallelSelectRows | 1L << 22 |
| Parallel processing with select | QueryTable.forceParallelSelectAndUpdate (test-focused) | false |
| Parallel snapshotting | QueryTable.enableParallelSnapshot | true |
| Parallel snapshotting | QueryTable.minimumParallelSnapshotRows | 1L << 20 |
| Ungroup operations | QueryTable.minimumUngroupBase | 10 |
| SoftRecycler configuration | array.recycler.capacity.* | 1024 |
| SoftRecycler configuration | sparsearray.recycler.capacity.* | 1024 |
| Stateless filters by default | QueryTable.statelessFiltersByDefault | false |

Each property is described below, roughly categorized by similarity.

Memoization

Deephaven utilizes memoization for many table operations to improve performance by eliminating duplicate work. See Query Memoization for more details.

It can be beneficial to disable memoization when benchmarking or testing, as memoized results can hide true operational costs and skew performance metrics.

| Property Name | Default Value | Description |
| --- | --- | --- |
| QueryTable.memoizeResults | true | Enables memoizing table operations |
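
For example, when benchmarking you might override the property before starting the engine. The sketch below assumes a standard Deephaven property file or an equivalent JVM flag; the exact override mechanics depend on your deployment:

```properties
# Disable memoization for a benchmarking run
# (equivalently, pass -DQueryTable.memoizeResults=false to the JVM)
QueryTable.memoizeResults=false
```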

Redirection

Deephaven Tables maintain a 63-bit keyspace that maps a logical row in row-key space to its data. Many of Deephaven's column sources use a multi-level data layout to avoid allocating more resources than necessary to fulfill operational requirements. See selection method properties for more details.

Redirection is a mapping between a parent column source and the resulting column source for a given operation. A sorted column, for example, is redirected from the original to present the rows in the targeted sort order. Redirection may also flatten from a sparse keyspace to a flat and dense keyspace.

| Property Name | Default Value | Description |
| --- | --- | --- |
| QueryTable.redirectUpdate | false | Forces non-flat refreshing QueryTable#update operations to redirect despite the increased performance costs |
| QueryTable.redirectSelect | false | Forces non-flat refreshing QueryTable#select operations to redirect despite the increased performance costs |
| QueryTable.maximumStaticSelectMemoryOverhead | 1.1 (double) | The maximum overhead as a fraction (e.g., 1.1 is 10% overhead; always sparse if < 0, never sparse if 0) |

DataIndex

A Deephaven DataIndex is an index that can improve the speed of filtering operations.

| Property Name | Default Value | Description |
| --- | --- | --- |
| QueryTable.useDataIndexForWhere | true | Enables data index usage in QueryTable#where operations |
| QueryTable.useDataIndexForAggregation | true | Enables data index usage in QueryTable#aggBy, QueryTable#selectDistinct, and within rollup and tree tables |
| QueryTable.useDataIndexForJoins | true | Enables data index usage in Deephaven joins |
| QueryTable.disableWherePushdownDataIndex | false | Disables data index usage within where's pushdown predicates |

Pushdown predicates with where

Pushdown predicates refer to the mechanism whereby filtering conditions are applied as early as possible, ideally at the data source (e.g., Parquet or other columnar formats), before loading data into the system. By annotating source reads with predicates, the engine pulls in only the rows that satisfy the conditions, significantly reducing I/O and improving performance.

| Property Name | Default Value | Description |
| --- | --- | --- |
| QueryTable.useDataIndexForWhere | true | Enables the use of a table-level data index during where operations |
| QueryTable.disableWherePushdownDataIndex | false | Disables the use of data indexes within where's predicate pushdown |
| QueryTable.disableWherePushdownParquetRowGroupMetadata | false | Disables the use of Parquet row group metadata during push-down filtering |
| QueryTable.disableWherePushdownMergedTables | false | Disables predicate pushdown when filtering merged tables |
| QueryTable.disableWherePushdownParquetDictionary | false | Disables dictionary-encoding predicate pushdown operations |

Parallel processing with where

Parallelism for where operations is not engaged until the parent table's size exceeds QueryTable.parallelWhereRowsPerSegment rows; this avoids the overhead of using threads for small operations. For larger tables, the work is split into segments of QueryTable.parallelWhereRowsPerSegment rows each, unless QueryTable.parallelWhereSegments is set to a positive value, in which case the work is divided equally into that fixed number of segments. These parameters can be tuned to avoid unnecessary parallelism when the overhead exceeds the potential gains.

| Property Name | Default Value | Description |
| --- | --- | --- |
| QueryTable.disableParallelWhere | false | Disables parallelized optimizations for QueryTable#where operations |
| QueryTable.parallelWhereRowsPerSegment | 1 << 16 | The number of rows per segment when the number of segments is not fixed |
| QueryTable.parallelWhereSegments | -1 | The number of segments to use when dividing all work equally into a fixed number of tasks; -1 implies one thread per core |
| QueryTable.forceParallelWhere (test-focused) | false | Forces where operations to parallelize even when row requirements are not met |
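
As a rough sketch of how the defaults divide a filter into segments, the arithmetic below (plain Python, not Deephaven API calls) mirrors the two properties; the exact engine behavior may differ:

```python
import math

# Defaults from the table above (illustrative constants, not the real fields)
ROWS_PER_SEGMENT = 1 << 16   # QueryTable.parallelWhereRowsPerSegment
FIXED_SEGMENTS = -1          # QueryTable.parallelWhereSegments (-1: not fixed)

def where_segments(table_rows: int) -> int:
    """Sketch: how many parallel segments a where() on table_rows rows might use."""
    if table_rows <= ROWS_PER_SEGMENT:
        return 1  # below the threshold: run serially
    if FIXED_SEGMENTS > 0:
        return FIXED_SEGMENTS  # work divided equally into a fixed task count
    return math.ceil(table_rows / ROWS_PER_SEGMENT)

print(where_segments(10_000))      # 1 (serial)
print(where_segments(10_000_000))  # 153 segments
```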

Parallel processing with select

The QueryTable operations select and update try to take advantage of parallelism during two separate phases of each invocation. The first opportunity is the initial creation of the table, when the engine parallelizes the computation of the resulting table state. The second is update processing, when the operation's parent-table listener is notified that the parent has changed.

Parallelism for select operations is not enabled until the parent's size exceeds QueryTable.minimumParallelSelectRows rows. This can be tuned to avoid unnecessary parallelism (e.g., when the overhead exceeds potential gains).

| Property Name | Default Value | Description |
| --- | --- | --- |
| QueryTable.enableParallelSelectAndUpdate | true | Enables parallelized optimizations for QueryTable#select and QueryTable#update operations |
| QueryTable.minimumParallelSelectRows | 1L << 22 | The minimum number of rows required to enable parallel select and update operations |
| QueryTable.forceParallelSelectAndUpdate (test-focused) | false | Forces select and update operations to parallelize even when row requirements are not met |
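
A minimal sketch of the size gate; the constant mirrors the default above, and the strict comparison direction is an assumption based on the prose:

```python
MIN_PARALLEL_SELECT_ROWS = 1 << 22  # QueryTable.minimumParallelSelectRows (4,194,304)

def select_may_parallelize(table_rows: int, enabled: bool = True) -> bool:
    """Sketch: select()/update() parallelizes only when enabled and the parent is large enough."""
    return enabled and table_rows > MIN_PARALLEL_SELECT_ROWS

print(select_may_parallelize(1_000_000))                  # False: below the threshold
print(select_may_parallelize(10_000_000))                 # True
print(select_may_parallelize(10_000_000, enabled=False))  # False: feature disabled
```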

Parallel snapshotting

Barrage clients, including the JavaScript implementation used on the web, fulfill subscription requests by snapshotting the required rows and columns, in addition to listening for relevant changes when the table is refreshing. Parallel snapshotting parallelizes this process across columns. If those columns are slow to access, parallel snapshotting can greatly reduce latency; however, it may open many file handles to the same data source.

Parallel snapshotting is not enabled until the snapshot size exceeds QueryTable.minimumParallelSnapshotRows rows. This can be tuned to avoid unnecessary parallelism when the overhead exceeds potential gains.

| Property Name | Default Value | Description |
| --- | --- | --- |
| QueryTable.enableParallelSnapshot | true | Enables parallelized optimizations for snapshotting operations, such as Barrage subscription requests |
| QueryTable.minimumParallelSnapshotRows | 1L << 20 | The minimum number of rows required to enable parallel snapshotting operations |

Ungroup operations

The ungroup table operation can expand one row into multiple rows. QueryTable.minimumUngroupBase controls the initial allocation used by ungroup.

| Property Name | Default Value | Description |
| --- | --- | --- |
| QueryTable.minimumUngroupBase | 10 | The minimum base used for ungroup output row allocation (uses 2^base rows) |
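
To illustrate the exponent arithmetic: with the default base of 10, ungroup reserves 2^10 = 1024 output-row slots, and a larger group requires a larger base. The helper below is a hypothetical sketch of that calculation, not engine code:

```python
DEFAULT_MINIMUM_UNGROUP_BASE = 10  # QueryTable.minimumUngroupBase

def ungroup_base_for(max_group_size: int,
                     minimum_base: int = DEFAULT_MINIMUM_UNGROUP_BASE) -> int:
    """Sketch: smallest base b >= minimum_base such that 2**b >= the largest group size."""
    base = minimum_base
    while (1 << base) < max_group_size:
        base += 1
    return base

print(1 << ungroup_base_for(500))    # 1024: the default base of 10 already fits
print(1 << ungroup_base_for(5_000))  # 8192: the base grows to 13
```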

SoftRecycler configuration

Deephaven uses SoftRecycler objects to manage memory for array and sparse array column sources. These column sources must maintain previous values during an update graph cycle. Rather than allocating fresh memory on each cycle, when memory is needed to record previous values it is borrowed from the recycler and returned at the end of the update cycle. These pools can improve performance and reduce garbage collection pressure.

The capacity of these recyclers (how many arrays each recycler holds) can be configured on a per-type basis, allowing you to tune memory usage based on your workload characteristics.

Array column source recyclers

Array-backed column sources (dense arrays) use SoftRecyclers to manage blocks of data for each primitive type.

| Property Name | Default Value | Description |
| --- | --- | --- |
| array.recycler.capacity.default | 1024 | Default recycler capacity for all array types (used if a type-specific property is not set) |
| array.recycler.capacity.boolean | 1024 | Recycler capacity for boolean array blocks |
| array.recycler.capacity.byte | 1024 | Recycler capacity for byte array blocks |
| array.recycler.capacity.char | 1024 | Recycler capacity for character array blocks |
| array.recycler.capacity.double | 1024 | Recycler capacity for double array blocks |
| array.recycler.capacity.float | 1024 | Recycler capacity for float array blocks |
| array.recycler.capacity.int | 1024 | Recycler capacity for integer array blocks |
| array.recycler.capacity.long | 1024 | Recycler capacity for long array blocks |
| array.recycler.capacity.short | 1024 | Recycler capacity for short array blocks |
| array.recycler.capacity.object | 1024 | Recycler capacity for object array blocks |
| array.recycler.capacity.inuse | 9216 (sum of all type capacities) | Recycler capacity for "in use" bitmap blocks (should be at least the maximum capacity of the other types) |

Sparse array column source recyclers

Sparse array column sources use a multi-level hierarchical structure and maintain separate recyclers at each level. Each level can be configured independently to optimize memory usage for your access patterns.

| Property Name | Default Value | Description |
| --- | --- | --- |
| sparsearray.recycler.capacity.default | 1024 | Default recycler capacity for all sparse array types |
| sparsearray.recycler.capacity.boolean | 1024 | Base recycler capacity for boolean sparse arrays |
| sparsearray.recycler.capacity.byte | 1024 | Base recycler capacity for byte sparse arrays |
| sparsearray.recycler.capacity.char | 1024 | Base recycler capacity for character sparse arrays |
| sparsearray.recycler.capacity.double | 1024 | Base recycler capacity for double sparse arrays |
| sparsearray.recycler.capacity.float | 1024 | Base recycler capacity for float sparse arrays |
| sparsearray.recycler.capacity.int | 1024 | Base recycler capacity for integer sparse arrays |
| sparsearray.recycler.capacity.long | 1024 | Base recycler capacity for long sparse arrays |
| sparsearray.recycler.capacity.short | 1024 | Base recycler capacity for short sparse arrays |
| sparsearray.recycler.capacity.object | 1024 | Base recycler capacity for object sparse arrays |
| sparsearray.recycler.capacity.boolean.2 | 1024 | Level 2 recycler capacity for boolean sparse arrays |
| sparsearray.recycler.capacity.byte.2 | 1024 | Level 2 recycler capacity for byte sparse arrays |
| sparsearray.recycler.capacity.char.2 | 1024 | Level 2 recycler capacity for character sparse arrays |
| sparsearray.recycler.capacity.double.2 | 1024 | Level 2 recycler capacity for double sparse arrays |
| sparsearray.recycler.capacity.float.2 | 1024 | Level 2 recycler capacity for float sparse arrays |
| sparsearray.recycler.capacity.int.2 | 1024 | Level 2 recycler capacity for integer sparse arrays |
| sparsearray.recycler.capacity.long.2 | 1024 | Level 2 recycler capacity for long sparse arrays |
| sparsearray.recycler.capacity.short.2 | 1024 | Level 2 recycler capacity for short sparse arrays |
| sparsearray.recycler.capacity.object.2 | 1024 | Level 2 recycler capacity for object sparse arrays |
| sparsearray.recycler.capacity.boolean.1 | 1024 | Level 1 recycler capacity for boolean sparse arrays |
| sparsearray.recycler.capacity.byte.1 | 1024 | Level 1 recycler capacity for byte sparse arrays |
| sparsearray.recycler.capacity.char.1 | 1024 | Level 1 recycler capacity for character sparse arrays |
| sparsearray.recycler.capacity.double.1 | 1024 | Level 1 recycler capacity for double sparse arrays |
| sparsearray.recycler.capacity.float.1 | 1024 | Level 1 recycler capacity for float sparse arrays |
| sparsearray.recycler.capacity.int.1 | 1024 | Level 1 recycler capacity for integer sparse arrays |
| sparsearray.recycler.capacity.long.1 | 1024 | Level 1 recycler capacity for long sparse arrays |
| sparsearray.recycler.capacity.short.1 | 1024 | Level 1 recycler capacity for short sparse arrays |
| sparsearray.recycler.capacity.object.1 | 1024 | Level 1 recycler capacity for object sparse arrays |
| sparsearray.recycler.capacity.boolean.0 | 1024 | Level 0 (top) recycler capacity for boolean sparse arrays |
| sparsearray.recycler.capacity.byte.0 | 1024 | Level 0 (top) recycler capacity for byte sparse arrays |
| sparsearray.recycler.capacity.char.0 | 1024 | Level 0 (top) recycler capacity for character sparse arrays |
| sparsearray.recycler.capacity.double.0 | 1024 | Level 0 (top) recycler capacity for double sparse arrays |
| sparsearray.recycler.capacity.float.0 | 1024 | Level 0 (top) recycler capacity for float sparse arrays |
| sparsearray.recycler.capacity.int.0 | 1024 | Level 0 (top) recycler capacity for integer sparse arrays |
| sparsearray.recycler.capacity.long.0 | 1024 | Level 0 (top) recycler capacity for long sparse arrays |
| sparsearray.recycler.capacity.short.0 | 1024 | Level 0 (top) recycler capacity for short sparse arrays |
| sparsearray.recycler.capacity.object.0 | 1024 | Level 0 (top) recycler capacity for object sparse arrays |
| sparsearray.recycler.capacity.inuse | 9216 (sum of all base types) | Recycler capacity for "in use" bitmap blocks at the lowest level |
| sparsearray.recycler.capacity.inuse.2 | 9216 (sum of level 2) | Recycler capacity for "in use" bitmap blocks at level 2 |
| sparsearray.recycler.capacity.inuse.1 | 9216 (sum of level 1) | Recycler capacity for "in use" bitmap blocks at level 1 |
| sparsearray.recycler.capacity.inuse.0 | 9216 (sum of level 0) | Recycler capacity for "in use" bitmap blocks at level 0 (top) |

Tuning SoftRecycler capacity

The recycler capacity determines how many array blocks are kept in memory for potential reuse. Increasing capacity can improve performance if your workload uses more blocks within an update cycle than the recycler can hold, at the cost of higher baseline memory usage. Decreasing capacity reduces baseline memory requirements, but may increase garbage collection.

  • High throughput environments: Consider increasing capacities to reduce allocation/deallocation overhead.
  • Type-specific tuning: If certain types are used more frequently, you can increase their capacity while reducing others.
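
For example, a deployment whose workload is dominated by long columns might raise that type's pool. The property-file sketch below uses illustrative values (not recommendations), following the naming pattern of the tables above:

```properties
# Illustrative overrides in a Deephaven property file (or as -D JVM flags)
array.recycler.capacity.long=4096
# Keep the "in use" pool sized for the per-type pools it tracks;
# the default of 9216 matches the sum of the nine 1024-entry type pools,
# so 8 * 1024 + 4096 = 12288 preserves that relationship here
array.recycler.capacity.inuse=12288
```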

Stateless by default (experimental)

In a future release of Deephaven, the flags in this category will change from a default of false to a default of true. These flags allow the engine to assume more often that a given Filter or Selectable can be executed in parallel, unless the Filter or Selectable is marked serial or declares barriers.

This feature is experimental; see the Javadoc for io.deephaven.api.ConcurrencyControl for more details.

| Property Name | Default Value | Description |
| --- | --- | --- |
| QueryTable.statelessFiltersByDefault | false | Enables the engine to assume that filters are stateless by default, allowing for more optimizations |