Copyright (c) 2016-2022 Deephaven Data Labs and Patent Pending
A lightweight object describing the exposed field.
| Field | Type | Label | Description |
| --- | --- | --- | --- |
| typed_ticket | TypedTicket |  |  |
| field_name | string |  |  |
| field_description | string |  |  |
| application_name | string |  | display-friendly identification |
| application_id | string |  | computer-friendly identification |
Represents a batch of fields.
| Field | Type | Label | Description |
| --- | --- | --- | --- |
| created | FieldInfo | repeated |  |
| updated | FieldInfo | repeated |  |
| removed | FieldInfo | repeated |  |
Intentionally empty; kept for backwards compatibility should this API change.
Allows clients to list fields that are accessible to them.
| Method Name | Request Type | Response Type | Description |
| --- | --- | --- | --- |
| ListFields | ListFieldsRequest | FieldsChangeUpdate stream | Requests the list of fields exposed via the worker. The first received message contains all fields that are currently available on the worker; none of these will be removed fields. Subsequent messages modify the existing state: fields are identified by their ticket and may be replaced or removed. |
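The snapshot-then-delta semantics above can be sketched as client-side state maintenance. This is an illustrative model only: `FieldInfo` messages are represented as plain dicts keyed by a hashable `"ticket"` value rather than the generated protobuf classes.

```python
# Sketch of maintaining field state from a ListFields stream.
# FieldInfo messages are modeled as dicts with a hashable "ticket" key;
# a real client would use the generated protobuf classes instead.

def apply_fields_change(state, update):
    """Apply one FieldsChangeUpdate to the current ticket -> field map."""
    for field in update.get("created", []):
        state[field["ticket"]] = field
    for field in update.get("updated", []):
        state[field["ticket"]] = field          # fields are replaced by ticket
    for field in update.get("removed", []):
        state.pop(field["ticket"], None)
    return state

# First message: a full snapshot (never contains removed fields).
state = apply_fields_change({}, {"created": [{"ticket": "t1", "field_name": "a"}]})
# Subsequent messages: deltas against the existing state.
state = apply_fields_change(state, {
    "created": [{"ticket": "t2", "field_name": "b"}],
    "removed": [{"ticket": "t1"}],
})
```

The same merge loop works for every message, since the initial snapshot is just a delta applied to an empty map.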
| Field | Type | Label | Description |
| --- | --- | --- | --- |
| config_values | AuthenticationConstantsResponse.ConfigValuesEntry | repeated |  |

| Field | Type | Label | Description |
| --- | --- | --- | --- |
| key | string |  |  |
| value | ConfigValue |  |  |

| Field | Type | Label | Description |
| --- | --- | --- | --- |
| string_value | string |  |  |

| Field | Type | Label | Description |
| --- | --- | --- | --- |
| config_values | ConfigurationConstantsResponse.ConfigValuesEntry | repeated |  |

| Field | Type | Label | Description |
| --- | --- | --- | --- |
| key | string |  |  |
| value | ConfigValue |  |  |
Provides simple configuration data to users. Unauthenticated users may call GetAuthenticationConstants
to discover hints on how they should proceed with providing their identity, while already-authenticated
clients may call GetConfigurationConstants for details on using the platform.
| Method Name | Request Type | Response Type | Description |
| --- | --- | --- | --- |
| GetAuthenticationConstants | AuthenticationConstantsRequest | AuthenticationConstantsResponse |  |
| GetConfigurationConstants | ConfigurationConstantsRequest | ConfigurationConstantsResponse |  |
| Field | Type | Label | Description |
| --- | --- | --- | --- |
| console_id | io.deephaven.proto.backplane.grpc.Ticket |  |  |
| request_id | int32 |  |  |
| open_document | OpenDocumentRequest |  | Starts a document in a given console; to end, just close the stream and the server will hang up right away |
| change_document | ChangeDocumentRequest |  | Modifies the document that autocomplete can be requested on |
| get_completion_items | GetCompletionItemsRequest |  | Requests that a response be sent back with completion items |
| get_signature_help | GetSignatureHelpRequest |  | Request for help about the method signature at the cursor |
| get_hover | GetHoverRequest |  | Request for help about what the user is hovering over |
| get_diagnostic | GetDiagnosticRequest |  | Request to perform file diagnostics |
| close_document | CloseDocumentRequest |  | Closes the document, indicating that it will not be referenced again |
| Field | Type | Label | Description |
| --- | --- | --- | --- |
| request_id | int32 |  |  |
| success | bool |  |  |
| completion_items | GetCompletionItemsResponse |  |  |
| signatures | GetSignatureHelpResponse |  |  |
| hover | GetHoverResponse |  |  |
| diagnostic | GetPullDiagnosticResponse |  |  |
| diagnostic_publish | GetPublishDiagnosticResponse |  |  |
| Field | Type | Label | Description |
| --- | --- | --- | --- |
| console_id | io.deephaven.proto.backplane.grpc.Ticket |  |  |
| variable_name | string |  |  |
| table_id | io.deephaven.proto.backplane.grpc.Ticket |  |  |
| Field | Type | Label | Description |
| --- | --- | --- | --- |
| console_id | io.deephaven.proto.backplane.grpc.Ticket |  |  |
| request_id | int32 |  |  |
| Field | Type | Label | Description |
| --- | --- | --- | --- |
| console_id | io.deephaven.proto.backplane.grpc.Ticket |  |  |
| command_id | io.deephaven.proto.backplane.grpc.Ticket |  |  |
| Field | Type | Label | Description |
| --- | --- | --- | --- |
| console_id | io.deephaven.proto.backplane.grpc.Ticket |  | Deprecated. |
| text_document | VersionedTextDocumentIdentifier |  |  |
| content_changes | ChangeDocumentRequest.TextDocumentContentChangeEvent | repeated |  |

| Name | Option |
| --- | --- |
| console_id | true |
| Field | Type | Label | Description |
| --- | --- | --- | --- |
| range | DocumentRange |  |  |
| range_length | int32 |  |  |
| text | string |  |  |
| Field | Type | Label | Description |
| --- | --- | --- | --- |
| console_id | io.deephaven.proto.backplane.grpc.Ticket |  | Deprecated. |
| text_document | VersionedTextDocumentIdentifier |  |  |

| Name | Option |
| --- | --- |
| console_id | true |
| Field | Type | Label | Description |
| --- | --- | --- | --- |
| trigger_kind | int32 |  |  |
| trigger_character | string |  |  |
| Field | Type | Label | Description |
| --- | --- | --- | --- |
| start | int32 |  |  |
| length | int32 |  |  |
| label | string |  |  |
| kind | int32 |  |  |
| detail | string |  |  |
| deprecated | bool |  |  |
| preselect | bool |  |  |
| text_edit | TextEdit |  |  |
| sort_text | string |  |  |
| filter_text | string |  |  |
| insert_text_format | int32 |  |  |
| additional_text_edits | TextEdit | repeated |  |
| commit_characters | string | repeated |  |
| documentation | MarkupContent |  |  |
| Field | Type | Label | Description |
| --- | --- | --- | --- |
| range | DocumentRange |  |  |
| severity | Diagnostic.DiagnosticSeverity |  |  |
| code | string | optional |  |
| code_description | Diagnostic.CodeDescription | optional |  |
| source | string | optional |  |
| message | string |  |  |
| tags | Diagnostic.DiagnosticTag | repeated |  |
| data | bytes | optional |  |

| Field | Type | Label | Description |
| --- | --- | --- | --- |
| href | string |  |  |

| Field | Type | Label | Description |
| --- | --- | --- | --- |
| start | Position |  |  |
| end | Position |  |  |
| Field | Type | Label | Description |
| --- | --- | --- | --- |
| console_id | io.deephaven.proto.backplane.grpc.Ticket |  |  |
| code | string |  |  |

| Field | Type | Label | Description |
| --- | --- | --- | --- |
| error_message | string |  |  |
| changes | io.deephaven.proto.backplane.grpc.FieldsChangeUpdate |  |  |
| Field | Type | Label | Description |
| --- | --- | --- | --- |
| title | string | optional |  |
| title_font | string |  |  |
| title_color | string |  |  |
| update_interval | int64 |  |  |
| cols | int32 |  |  |
| rows | int32 |  |  |
| charts | FigureDescriptor.ChartDescriptor | repeated |  |
| errors | string | repeated |  |
| Field | Type | Label | Description |
| --- | --- | --- | --- |
| id | string |  |  |
| format_type | FigureDescriptor.AxisDescriptor.AxisFormatType |  |  |
| type | FigureDescriptor.AxisDescriptor.AxisType |  |  |
| position | FigureDescriptor.AxisDescriptor.AxisPosition |  |  |
| log | bool |  |  |
| label | string |  |  |
| label_font | string |  |  |
| ticks_font | string |  |  |
| format_pattern | string | optional |  |
| color | string |  |  |
| min_range | double |  |  |
| max_range | double |  |  |
| minor_ticks_visible | bool |  |  |
| major_ticks_visible | bool |  |  |
| minor_tick_count | int32 |  |  |
| gap_between_major_ticks | double | optional |  |
| major_tick_locations | double | repeated |  |
| tick_label_angle | double |  |  |
| invert | bool |  |  |
| is_time_axis | bool |  |  |
| business_calendar_descriptor | FigureDescriptor.BusinessCalendarDescriptor |  |  |
| Field | Type | Label | Description |
| --- | --- | --- | --- |
| default_bool | bool | optional |  |
| keys | string | repeated |  |
| values | bool | repeated |  |
| Field | Type | Label | Description |
| --- | --- | --- | --- |
| name | string |  |  |
| time_zone | string |  |  |
| business_days | FigureDescriptor.BusinessCalendarDescriptor.DayOfWeek | repeated |  |
| business_periods | FigureDescriptor.BusinessCalendarDescriptor.BusinessPeriod | repeated |  |
| holidays | FigureDescriptor.BusinessCalendarDescriptor.Holiday | repeated |  |

| Field | Type | Label | Description |
| --- | --- | --- | --- |
| open | string |  |  |
| close | string |  |  |

| Field | Type | Label | Description |
| --- | --- | --- | --- |
| date | FigureDescriptor.BusinessCalendarDescriptor.LocalDate |  |  |
| business_periods | FigureDescriptor.BusinessCalendarDescriptor.BusinessPeriod | repeated |  |

| Field | Type | Label | Description |
| --- | --- | --- | --- |
| year | int32 |  |  |
| month | int32 |  |  |
| day | int32 |  |  |
| Field | Type | Label | Description |
| --- | --- | --- | --- |
| colspan | int32 |  |  |
| rowspan | int32 |  |  |
| series | FigureDescriptor.SeriesDescriptor | repeated |  |
| multi_series | FigureDescriptor.MultiSeriesDescriptor | repeated |  |
| axes | FigureDescriptor.AxisDescriptor | repeated |  |
| chart_type | FigureDescriptor.ChartDescriptor.ChartType |  |  |
| title | string | optional |  |
| title_font | string |  |  |
| title_color | string |  |  |
| show_legend | bool |  |  |
| legend_font | string |  |  |
| legend_color | string |  |  |
| is3d | bool |  |  |
| column | int32 |  |  |
| row | int32 |  |  |

| Field | Type | Label | Description |
| --- | --- | --- | --- |
| default_double | double | optional |  |
| keys | string | repeated |  |
| values | double | repeated |  |
| Field | Type | Label | Description |
| --- | --- | --- | --- |
| plot_style | FigureDescriptor.SeriesPlotStyle |  |  |
| name | string |  |  |
| line_color | FigureDescriptor.StringMapWithDefault |  |  |
| point_color | FigureDescriptor.StringMapWithDefault |  |  |
| lines_visible | FigureDescriptor.BoolMapWithDefault |  |  |
| points_visible | FigureDescriptor.BoolMapWithDefault |  |  |
| gradient_visible | FigureDescriptor.BoolMapWithDefault |  |  |
| point_label_format | FigureDescriptor.StringMapWithDefault |  |  |
| x_tool_tip_pattern | FigureDescriptor.StringMapWithDefault |  |  |
| y_tool_tip_pattern | FigureDescriptor.StringMapWithDefault |  |  |
| point_label | FigureDescriptor.StringMapWithDefault |  |  |
| point_size | FigureDescriptor.DoubleMapWithDefault |  |  |
| point_shape | FigureDescriptor.StringMapWithDefault |  |  |
| data_sources | FigureDescriptor.MultiSeriesSourceDescriptor | repeated |  |

| Field | Type | Label | Description |
| --- | --- | --- | --- |
| axis_id | string |  |  |
| type | FigureDescriptor.SourceType |  |  |
| partitioned_table_id | int32 |  |  |
| column_name | string |  |  |

| Field | Type | Label | Description |
| --- | --- | --- | --- |
| columns | string | repeated |  |
| column_types | string | repeated |  |
| require_all_filters_to_display | bool |  |  |
| Field | Type | Label | Description |
| --- | --- | --- | --- |
| plot_style | FigureDescriptor.SeriesPlotStyle |  |  |
| name | string |  |  |
| lines_visible | bool | optional |  |
| shapes_visible | bool | optional |  |
| gradient_visible | bool |  |  |
| line_color | string |  |  |
| point_label_format | string | optional |  |
| x_tool_tip_pattern | string | optional |  |
| y_tool_tip_pattern | string | optional |  |
| shape_label | string |  |  |
| shape_size | double | optional |  |
| shape_color | string |  |  |
| shape | string |  |  |
| data_sources | FigureDescriptor.SourceDescriptor | repeated |  |

| Field | Type | Label | Description |
| --- | --- | --- | --- |
| axis_id | string |  |  |
| type | FigureDescriptor.SourceType |  |  |
| table_id | int32 |  |  |
| partitioned_table_id | int32 |  |  |
| column_name | string |  |  |
| column_type | string |  |  |
| one_click | FigureDescriptor.OneClickDescriptor |  |  |

| Field | Type | Label | Description |
| --- | --- | --- | --- |
| default_string | string | optional |  |
| keys | string | repeated |  |
| values | string | repeated |  |
| Field | Type | Label | Description |
| --- | --- | --- | --- |
| console_id | io.deephaven.proto.backplane.grpc.Ticket |  | Deprecated. |
| context | CompletionContext |  |  |
| text_document | VersionedTextDocumentIdentifier |  |  |
| position | Position |  |  |
| request_id | int32 |  | Deprecated. |

| Name | Option |
| --- | --- |
| console_id | true |
| request_id | true |
| Field | Type | Label | Description |
| --- | --- | --- | --- |
| items | CompletionItem | repeated |  |
| request_id | int32 |  | Deprecated. Maintained for backwards compatibility. Use the same field on AutoCompleteResponse instead |
| success | bool |  | Deprecated. Maintained for backwards compatibility. Use the same field on AutoCompleteResponse instead |

| Name | Option |
| --- | --- |
| request_id | true |
| success | true |
Left empty for future compatibility.
| Field | Type | Label | Description |
| --- | --- | --- | --- |
| console_types | string | repeated |  |
| Field | Type | Label | Description |
| --- | --- | --- | --- |
| text_document | VersionedTextDocumentIdentifier |  |  |
| identifier | string | optional |  |
| previous_result_id | string | optional |  |
Left empty for future compatibility.
| Field | Type | Label | Description |
| --- | --- | --- | --- |
| max_memory | int64 | | Returns the maximum amount of memory that the Java virtual machine will attempt to use, measured in bytes. If there is no inherent limit, the value Long.MAX_VALUE will be returned. |
| total_memory | int64 | | Returns the total amount of memory in the Java virtual machine, i.e. the amount currently available for current and future objects, measured in bytes. The value returned by this method may vary over time, depending on the host environment. Note that the amount of memory required to hold an object of any given type may be implementation-dependent. |
| free_memory | int64 | | Returns an approximation of the amount of free memory in the Java virtual machine, i.e. the memory currently available for future allocated objects, measured in bytes. Calling the gc method may result in increasing the value returned by freeMemory. |
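The three counters relate the way the JVM `Runtime` memory methods do, so common derived figures follow by simple arithmetic. A small sketch (the derivation is standard; the helper name and example numbers are illustrative):

```python
def heap_summary(max_memory, total_memory, free_memory):
    """Derive common heap figures from a GetHeapInfoResponse.

    used: bytes currently occupied by objects (total minus free).
    available: bytes the JVM could still hand out before hitting max,
    i.e. free space plus the room the heap can still grow into.
    """
    used = total_memory - free_memory
    available = max_memory - used
    return {"used": used, "available": available}

# Illustrative values, in bytes: a 4 KB cap, 2 KB currently committed, 500 B free.
summary = heap_summary(max_memory=4_000, total_memory=2_000, free_memory=500)
```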
| Field | Type | Label | Description |
| --- | --- | --- | --- |
| text_document | VersionedTextDocumentIdentifier |  |  |
| position | Position |  |  |

| Field | Type | Label | Description |
| --- | --- | --- | --- |
| contents | MarkupContent |  |  |
| range | DocumentRange |  |  |
| Field | Type | Label | Description |
| --- | --- | --- | --- |
| uri | string |  |  |
| version | int32 | optional |  |
| diagnostics | Diagnostic | repeated |  |

| Field | Type | Label | Description |
| --- | --- | --- | --- |
| kind | string |  |  |
| result_id | string | optional |  |
| items | Diagnostic | repeated |  |
| Field | Type | Label | Description |
| --- | --- | --- | --- |
| context | SignatureHelpContext |  |  |
| text_document | VersionedTextDocumentIdentifier |  |  |
| position | Position |  |  |

| Field | Type | Label | Description |
| --- | --- | --- | --- |
| signatures | SignatureInformation | repeated |  |
| active_signature | int32 | optional |  |
| active_parameter | int32 | optional |  |
| Field | Type | Label | Description |
| --- | --- | --- | --- |
| micros | int64 |  |  |
| log_level | string |  |  |
| message | string |  |  |
Presently you get _all_ logs, not just your console. A future version might take a specific console_id to
restrict this to a single console.
| Field | Type | Label | Description |
| --- | --- | --- | --- |
| last_seen_log_timestamp | int64 |  | If a non-zero value is specified, represents the timestamp, in microseconds since the unix epoch, when the client last saw a message. Technically this might skip messages if more than one message was logged at the same microsecond that the connection was lost. To avoid this, subtract one from the last seen message's micros, and expect to receive some messages that have already been seen. |
| levels | string | repeated |  |
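The subtract-one-microsecond advice above implies the client must tolerate redelivery of already-seen messages. A sketch of the client side, with log records modeled as dicts and a `(micros, message)` dedupe key that is an illustrative choice, not part of the protocol:

```python
def resubscribe_timestamp(last_seen_micros):
    """Per the field docs: subtract one microsecond so no messages are
    skipped, accepting that some already-seen messages will be redelivered."""
    return max(last_seen_micros - 1, 0)

def dedupe(seen, records):
    """Drop redelivered LogSubscriptionData records. seen is a set of keys
    the client has already processed; it is mutated in place."""
    fresh = []
    for rec in records:
        key = (rec["micros"], rec["message"])
        if key not in seen:
            seen.add(key)
            fresh.append(rec)
    return fresh

# Two messages were seen at micros=10 before the connection dropped;
# resubscribing at micros=9 replays both plus anything new.
seen = {(10, "a"), (10, "b")}
replayed = dedupe(seen, [{"micros": 10, "message": "a"},
                         {"micros": 10, "message": "c"},
                         {"micros": 11, "message": "d"}])
```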
| Field | Type | Label | Description |
| --- | --- | --- | --- |
| kind | string |  |  |
| value | string |  |  |

| Field | Type | Label | Description |
| --- | --- | --- | --- |
| console_id | io.deephaven.proto.backplane.grpc.Ticket |  | Deprecated. |
| text_document | TextDocumentItem |  |  |

| Name | Option |
| --- | --- |
| console_id | true |
| Field | Type | Label | Description |
| --- | --- | --- | --- |
| label | string |  |  |
| documentation | MarkupContent |  |  |

| Field | Type | Label | Description |
| --- | --- | --- | --- |
| line | int32 |  |  |
| character | int32 |  |  |
| Field | Type | Label | Description |
| --- | --- | --- | --- |
| trigger_kind | int32 |  |  |
| trigger_character | string | optional |  |
| is_retrigger | bool |  |  |
| active_signature_help | GetSignatureHelpResponse |  |  |
| Field | Type | Label | Description |
| --- | --- | --- | --- |
| label | string |  |  |
| documentation | MarkupContent |  |  |
| parameters | ParameterInformation | repeated |  |
| active_parameter | int32 | optional |  |
| Field | Type | Label | Description |
| --- | --- | --- | --- |
| result_id | io.deephaven.proto.backplane.grpc.Ticket |  |  |
| session_type | string |  |  |

| Field | Type | Label | Description |
| --- | --- | --- | --- |
| result_id | io.deephaven.proto.backplane.grpc.Ticket |  |  |
| Field | Type | Label | Description |
| --- | --- | --- | --- |
| uri | string |  |  |
| language_id | string |  |  |
| version | int32 |  |  |
| text | string |  |  |

| Field | Type | Label | Description |
| --- | --- | --- | --- |
| range | DocumentRange |  |  |
| text | string |  |  |

| Field | Type | Label | Description |
| --- | --- | --- | --- |
| uri | string |  |  |
| version | int32 |  |  |
| Name | Number | Description |
| --- | --- | --- |
| NOT_SET_SEVERITY | 0 |  |
| ERROR | 1 |  |
| WARNING | 2 |  |
| INFORMATION | 3 |  |
| HINT | 4 |  |

| Name | Number | Description |
| --- | --- | --- |
| NOT_SET_TAG | 0 |  |
| UNNECESSARY | 1 |  |
| DEPRECATED | 2 |  |

| Name | Number | Description |
| --- | --- | --- |
| CATEGORY | 0 |  |
| NUMBER | 1 |  |

| Name | Number | Description |
| --- | --- | --- |
| TOP | 0 |  |
| BOTTOM | 1 |  |
| LEFT | 2 |  |
| RIGHT | 3 |  |
| NONE | 4 |  |

| Name | Number | Description |
| --- | --- | --- |
| X | 0 |  |
| Y | 1 |  |
| SHAPE | 2 |  |
| SIZE | 3 |  |
| LABEL | 4 |  |
| COLOR | 5 |  |

| Name | Number | Description |
| --- | --- | --- |
| SUNDAY | 0 |  |
| MONDAY | 1 |  |
| TUESDAY | 2 |  |
| WEDNESDAY | 3 |  |
| THURSDAY | 4 |  |
| FRIDAY | 5 |  |
| SATURDAY | 6 |  |

| Name | Number | Description |
| --- | --- | --- |
| XY | 0 |  |
| PIE | 1 |  |
| OHLC | 2 |  |
| CATEGORY | 3 |  |
| XYZ | 4 |  |
| CATEGORY_3D | 5 |  |
| TREEMAP | 6 |  |

| Name | Number | Description |
| --- | --- | --- |
| BAR | 0 |  |
| STACKED_BAR | 1 |  |
| LINE | 2 |  |
| AREA | 3 |  |
| STACKED_AREA | 4 |  |
| PIE | 5 |  |
| HISTOGRAM | 6 |  |
| OHLC | 7 |  |
| SCATTER | 8 |  |
| STEP | 9 |  |
| ERROR_BAR | 10 |  |
| TREEMAP | 11 |  |

| Name | Number | Description |
| --- | --- | --- |
| X | 0 |  |
| Y | 1 |  |
| Z | 2 |  |
| X_LOW | 3 |  |
| X_HIGH | 4 |  |
| Y_LOW | 5 |  |
| Y_HIGH | 6 |  |
| TIME | 7 |  |
| OPEN | 8 |  |
| HIGH | 9 |  |
| LOW | 10 |  |
| CLOSE | 11 |  |
| SHAPE | 12 |  |
| SIZE | 13 |  |
| LABEL | 14 |  |
| COLOR | 15 |  |
| PARENT | 16 |  |
| HOVER_TEXT | 17 |  |
| TEXT | 18 |  |
Console interaction service
| Method Name | Request Type | Response Type | Description |
| --- | --- | --- | --- |
| GetConsoleTypes | GetConsoleTypesRequest | GetConsoleTypesResponse |  |
| StartConsole | StartConsoleRequest | StartConsoleResponse |  |
| GetHeapInfo | GetHeapInfoRequest | GetHeapInfoResponse |  |
| SubscribeToLogs | LogSubscriptionRequest | LogSubscriptionData stream |  |
| ExecuteCommand | ExecuteCommandRequest | ExecuteCommandResponse |  |
| CancelCommand | CancelCommandRequest | CancelCommandResponse |  |
| BindTableToVariable | BindTableToVariableRequest | BindTableToVariableResponse |  |
| AutoCompleteStream | AutoCompleteRequest stream | AutoCompleteResponse stream | Starts a stream for autocomplete on the current session. More than one console and more than one document can be edited at a time over this stream, and each can be closed separately. A given document should only be edited within one stream at a time. |
| CancelAutoComplete | CancelAutoCompleteRequest | CancelAutoCompleteResponse |  |
| OpenAutoCompleteStream | AutoCompleteRequest | AutoCompleteResponse stream | Half of the browser-based implementation of AutoCompleteStream (browsers can't do bidirectional streams without websockets). |
| NextAutoCompleteStream | AutoCompleteRequest | BrowserNextResponse | The other half of the browser-based implementation of AutoCompleteStream. |
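Because an AutoCompleteStream multiplexes many documents over one bidirectional stream, each AutoCompleteRequest carries a `request_id` that the matching AutoCompleteResponse echoes back. A sketch of the client-side correlation, with messages modeled as dicts rather than the generated protobuf classes:

```python
# Sketch of correlating AutoCompleteResponse messages with their requests
# over a single stream, keyed by request_id. Messages are plain dicts here;
# a real client would populate the generated request/response messages.

class AutoCompleteDispatcher:
    def __init__(self):
        self._next_id = 0
        self._pending = {}      # request_id -> callback awaiting the reply

    def send(self, request, callback):
        """Assign a fresh request_id and remember who wants the reply."""
        self._next_id += 1
        request["request_id"] = self._next_id
        self._pending[self._next_id] = callback
        return request          # a real client would write this to the stream

    def on_response(self, response):
        """Route a response back to the caller that issued the request."""
        callback = self._pending.pop(response["request_id"], None)
        if callback is not None:
            callback(response)

results = []
d = AutoCompleteDispatcher()
req = d.send({"get_completion_items": {"position": {"line": 0, "character": 4}}},
             results.append)
d.on_response({"request_id": req["request_id"], "success": True})
```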
| Field | Type | Label | Description |
| --- | --- | --- | --- |
| result_hierarchical_table_id | Ticket |  | Ticket to use to hold the result HierarchicalTable (RollupTable or TreeTable) from applying the operations |
| input_hierarchical_table_id | Ticket |  | Ticket for the input HierarchicalTable (RollupTable or TreeTable) to apply operations to |
| filters | Condition | repeated | Filters to apply to the input HierarchicalTable to produce the result HierarchicalTable. Never expressed against the "structural" columns included in a HierarchicalTableDescriptor's snapshot_schema. For RollupTables, only the group-by columns may be filtered. The names are always expressed as they appear in aggregated node columns (and in the group-by columns). The filtering will result in a complete or partial new Table.rollup operation. For TreeTables, these may be variously applied to the source (resulting in a new Table.tree operation) or to the nodes (resulting in filtering at snapshot time). |
| sorts | SortDescriptor | repeated | Sorts to apply to the input HierarchicalTable to produce the result HierarchicalTable. Never expressed against the "structural" columns included in a HierarchicalTableDescriptor's snapshot_schema. For TreeTables, these are simply applied to the nodes at snapshot time. For RollupTables, these are expressed against the aggregated node columns, and will be applied to the appropriate input (constituent) columns as well. The appropriate (aggregated or constituent) sorts are applied to the nodes at snapshot time. |
Deliberately empty response, use /ObjectService/FetchObject to access the result_hierarchical_table_id ticket as
a HierarchicalTableDescriptor. See HierarchicalTableDescriptor documentation for details.
| Field | Type | Label | Description |
| --- | --- | --- | --- |
| snapshot_schema | bytes |  | Schema to be used for snapshot or subscription requests as described in Arrow Message.fbs::Message. Field metadata is used to convey additional information about the structure of the HierarchicalTable, the special roles some columns play, and the relationships between columns. "hierarchicalTable.isStructuralColumn" is always "true" if set, and is set on columns that should be included on every snapshot or subscription request, but should not be directly user-visible. "hierarchicalTable.isExpandByColumn" is always "true" if set, and is set on all the columns that must be included in a HierarchicalTableViewRequest's key table, if a key table is specified. These columns are generally user-visible and displayed before other columns, unless they also have "hierarchicalTable.isStructuralColumn" set. "hierarchicalTable.isRowDepthColumn" is always "true" if set, and is set on a single column that specifies the depth of a row. That column will always have "hierarchicalTable.isExpandByColumn" set for RollupTables, but never for TreeTables. "hierarchicalTable.isRowExpandedColumn" is always "true" if set, and is set on a single nullable column of booleans that specifies whether a row is expandable or expanded. Values will be null for rows that are not expandable, true for expanded rows, false for rows that are not expanded (but expandable). Leaf rows have no children to expand, and hence will always have a null value for this column. "rollupTable.isAggregatedNodeColumn" is always "true" if set, and is set on all columns of a RollupTable that belong to the aggregated nodes. "rollupTable.isConstituentNodeColumn" is always "true" if set, and is set on all columns of a RollupTable that belong to the constituent nodes. No such columns will be present if constituents are not included in the RollupTable. "rollupTable.isGroupByColumn" is always "true" if set, and is set on all columns of a RollupTable that are "group-by columns", whether the node is aggregated or constituent. All nodes have the same names and types for columns labeled in this way. Such columns will always have "hierarchicalTable.isExpandByColumn" set if and only if they also have "rollupTable.isAggregatedNodeColumn" set. "rollupTable.aggregationInputColumnName" is set to the (string) name of the corresponding constituent column that was used as input to this aggregation node column. May have an empty value, because some aggregations take no input columns, for example "Count". This is only ever present on columns with "rollupTable.isAggregatedNodeColumn" set. "treeTable.isNodeColumn" is always "true" if set, and is set on all columns of a TreeTable that nodes inherit from the source Table. "treeTable.isIdentifierColumn" is always "true" if set, and is set on the single column that uniquely identifies a TreeTable row and links it to its children. Such columns will always have "hierarchicalTable.isExpandByColumn" set. "treeTable.isParentIdentifierColumn" is always "true" if set, and is set on the single column that links a TreeTable row to its parent row. |
| is_static | bool |  | Whether or not this table might change. |
| Field | Type | Label | Description |
| --- | --- | --- | --- |
| result_table_id | Ticket |  | Ticket to use to hold an export of the HierarchicalTable's source Table |
| hierarchical_table_id | Ticket |  | Ticket for the (existing) HierarchicalTable (RollupTable or TreeTable) to export the source Table for |
| Field | Type | Label | Description |
| --- | --- | --- | --- |
| key_table_id | Ticket |  | Ticket that represents a Table of expanded or contracted keys from a HierarchicalTable (RollupTable or TreeTable). The format for the key Table is dictated by the schema from the corresponding HierarchicalTableDescriptor. It is expected to have one column for each "expand-by column", including the "row depth column" for RollupTables only, and (optionally) an "action" column whose name is specified in the key_table_action_column field. If the Table is empty the result will have only default nodes expanded. |
| key_table_action_column | string | optional | The name of a column of bytes found in the key table that specifies the action desired for the node selected by the other columns for each row. Takes on the value 1 for nodes that should be expanded, 3 for nodes that should be expanded along with their descendants, and 4 for nodes that should be contracted. If this column name is not present, all nodes in the key table will be expanded without their descendants. |
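The action-byte scheme above (1 = expand, 3 = expand with descendants, 4 = contract, column absent = expand everything without descendants) can be sketched as follows. Key-table rows are modeled as dicts, and the `"__action__"` column name is a hypothetical example, since the real name is whatever key_table_action_column says:

```python
# Sketch of interpreting a HierarchicalTableViewKeyTableDescriptor key table.
# The action byte values come from the field documentation above.

EXPAND, EXPAND_ALL, CONTRACT = 1, 3, 4

def row_actions(rows, action_column=None):
    """Yield (key, action) pairs for each key-table row. Without an action
    column, every key in the table is expanded without its descendants."""
    out = []
    for row in rows:
        action = row[action_column] if action_column else EXPAND
        key = {k: v for k, v in row.items() if k != action_column}
        out.append((key, action))
    return out

rows = [{"Group": "A", "__action__": EXPAND_ALL},
        {"Group": "B", "__action__": CONTRACT}]
with_actions = row_actions(rows, "__action__")
defaulted = row_actions([{"Group": "A"}])   # no action column: plain expand
```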
| Field | Type | Label | Description |
| --- | --- | --- | --- |
| result_view_id | Ticket |  | Ticket to use to hold the result HierarchicalTableView |
| hierarchical_table_id | Ticket |  | Ticket for the HierarchicalTable (RollupTable or TreeTable) to expand |
| existing_view_id | Ticket |  | Ticket for an existing HierarchicalTableView. The result view will inherit the HierarchicalTable from the existing view. The two views will share state used for caching snapshot data, but the server implementation may limit parallelism when performing snapshots for either view. Use this field when you intend to stop using the existing view and instead begin to use the result view. |
| expansions | HierarchicalTableViewKeyTableDescriptor |  | Description for the expansions that define this view of the HierarchicalTable. If not present, the result will have default expansions. For RollupTables, this will be the root (single row, top-level aggregation) and the next level if one exists (that is, if there are one or more group-by columns, or constituents are included). For TreeTables, this will be the root (one row for each child of the "null" parent identifier). |
Deliberately empty response; use /FlightService/DoExchange to snapshot, or subscribe to snapshots of, the view held by the result_view_id ticket.
| Field | Type | Label | Description |
| --- | --- | --- | --- |
| result_rollup_table_id | Ticket |  | Ticket to use to hold the result RollupTable from the rollup operation |
| source_table_id | Ticket |  | Ticket for the source Table to rollup |
| aggregations | Aggregation | repeated | The aggregations that should be applied at each level of the rollup |
| include_constituents | bool |  | Whether to include the leaf-level constituents in the result |
| group_by_columns | string | repeated | The names of the columns to rollup by |
Deliberately empty response, use /ObjectService/FetchObject to access the result_rollup_table_id ticket as
a HierarchicalTableDescriptor. See HierarchicalTableDescriptor documentation for details.
| Field | Type | Label | Description |
| --- | --- | --- | --- |
| result_tree_table_id | Ticket |  | Ticket to use to hold the result TreeTable from the tree operation |
| source_table_id | Ticket |  | Ticket for the source Table to tree |
| identifier_column | string |  | The name of the column containing the unique identifier for each row in the source table |
| parent_identifier_column | string |  | The name of the column containing the parent row's unique identifier for each row in the source table |
| promote_orphans | bool |  | Whether to promote "orphaned" nodes to be children of the root node. Orphans are nodes whose parent identifiers do not occur as identifiers for any row in the source Table. |
Deliberately empty response, use /ObjectService/FetchObject to access the result_tree_table_id ticket as
a HierarchicalTableDescriptor. See HierarchicalTableDescriptor documentation for details.
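The orphan rule that promote_orphans applies to can be sketched directly from its definition: a row is an orphan when its parent identifier never occurs as an identifier in the source. Rows are modeled as plain dicts, and the `"id"`/`"parent"` column names are illustrative:

```python
# Sketch of the orphan definition from TreeRequest.promote_orphans.

def find_orphans(rows, identifier_column, parent_identifier_column):
    """Return identifiers of rows whose parent identifier does not occur
    as any row's identifier. Rows with a null parent are roots, not orphans."""
    ids = {row[identifier_column] for row in rows}
    return [row[identifier_column]
            for row in rows
            if row[parent_identifier_column] is not None
            and row[parent_identifier_column] not in ids]

rows = [
    {"id": 1, "parent": None},   # root
    {"id": 2, "parent": 1},      # ordinary child
    {"id": 3, "parent": 99},     # orphan: no row has identifier 99
]
orphans = find_orphans(rows, "id", "parent")
```

With promote_orphans set, such rows become children of the root instead of being dropped.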
This service provides tools to create and view hierarchical tables (rollups and trees).
| Method Name | Request Type | Response Type | Description |
| --- | --- | --- | --- |
| Rollup | RollupRequest | RollupResponse | Applies a rollup operation to a Table and exports the resulting RollupTable |
| Tree | TreeRequest | TreeResponse | Applies a tree operation to a Table and exports the resulting TreeTable |
| Apply | HierarchicalTableApplyRequest | HierarchicalTableApplyResponse | Applies operations to an existing HierarchicalTable (RollupTable or TreeTable) and exports the resulting HierarchicalTable |
| View | HierarchicalTableViewRequest | HierarchicalTableViewResponse | Creates a view associating a Table of expansion keys and actions with an existing HierarchicalTable and exports the resulting HierarchicalTableView for subsequent snapshot or subscription requests |
| ExportSource | HierarchicalTableSourceExportRequest | ExportedTableCreationResponse | Exports the source Table for a HierarchicalTable (RollupTable or TreeTable) |
| Field | Type | Label | Description |
| --- | --- | --- | --- |
| input_table | Ticket |  |  |
| table_to_add | Ticket |  |  |

| Field | Type | Label | Description |
| --- | --- | --- | --- |
| input_table | Ticket |  |  |
| table_to_remove | Ticket |  |  |
This service offers methods to manipulate the contents of input tables.
| Method Name | Request Type | Response Type | Description |
| --- | --- | --- | --- |
| AddTableToInputTable | AddTableRequest | AddTableResponse | Adds the provided table to the specified input table. The new data to add must only have columns (names, types, and order) which match the given input table's columns. |
| DeleteTableFromInputTable | DeleteTableRequest | DeleteTableResponse | Removes the provided table from the specified input table. The tables indicating which rows to remove are expected to only have columns that match the key columns of the input table. |
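The two schema preconditions above can be sketched as simple checks. Schemas are modeled as ordered `(name, type)` lists, which is an assumption for illustration rather than how real table metadata is represented:

```python
# Sketch of the InputTableService schema preconditions.

def can_add(input_schema, add_schema):
    """AddTableToInputTable: column names, types, AND order must all match,
    so this is plain ordered equality."""
    return add_schema == input_schema

def can_delete(key_schema, delete_schema):
    """DeleteTableFromInputTable: every column of the delete table must be
    one of the input table's key columns."""
    return all(col in key_schema for col in delete_schema)

schema = [("Sym", "string"), ("Price", "double")]
keys = [("Sym", "string")]
ok_add = can_add(schema, [("Sym", "string"), ("Price", "double")])
bad_add = can_add(schema, [("Price", "double"), ("Sym", "string")])  # wrong order
ok_del = can_delete(keys, [("Sym", "string")])
bad_del = can_delete(keys, [("Price", "double")])                    # not a key
```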
A generic payload sent from the client to the server. The specific requirements and
guarantees are defined by the specific plugin.
| Field | Type | Label | Description |
| --- | --- | --- | --- |
| payload | bytes |  | The payload, may be empty. |
| references | TypedTicket | repeated | The typed references, may be empty. These references may be any ticket, resolved or not. This lets the client reference objects that already exist on the server or are still pending. Note that pending tickets require the server to wait until that object exists before passing this request to the server plugin, and since messages are always processed in order, later requests will also be delayed. |
First payload to send on a MessageStream, indicating the object to connect to
on the server.
| Field | Type | Label | Description |
| --- | --- | --- | --- |
| source_id | TypedTicket |  |  |

| Field | Type | Label | Description |
| --- | --- | --- | --- |
| source_id | TypedTicket |  |  |
| Field | Type | Label | Description |
| --- | --- | --- | --- |
| type | string |  |  |
| data | bytes |  |  |
| typed_export_ids | TypedTicket | repeated |  |
A generic payload sent from the server to the client. The specific requirements and
guarantees of this are defined by the specific plugin.
| Field | Type | Label | Description |
| --- | --- | --- | --- |
| payload | bytes |  | The payload, may be empty. |
| exported_references | TypedTicket | repeated | The exported references, may be empty. To correctly free up unused server resources, clients must take care to release these exports when they will no longer be used. A reference may be missing a type, meaning that the object cannot be used as the source_id for a ConnectRequest, but it may still be passed back to the server as part of ClientData references, and it still needs to be released when no longer used. |
Client payload for the MessageStream.
| Field | Type | Label | Description |
| --- | --- | --- | --- |
| connect | ConnectRequest |  | Indicates that this is the first request of the stream, asking to connect to a specific object on the server. |
| data | ClientData |  | Data to pass to the object on the server. |
Server responses to the client. Currently can only be ServerData messages.
| Field | Type | Label | Description |
| --- | --- | --- | --- |
| data | ServerData |  | Data to pass to the client about the object on the server. |
Method Name | Request Type | Response Type | Description |
FetchObject | FetchObjectRequest | FetchObjectResponse | Fetches a server-side object as a binary payload and assorted other tickets pointing at other server-side objects that may need to be read to properly use this payload. The binary format is implementation specific, but the implementation should be specified by the "type" identifier in the typed ticket. Deprecated in favor of MessageStream, which is able to handle the same content. |
MessageStream | StreamRequest stream | StreamResponse stream | Provides a generic stream feature for Deephaven instances to use to add arbitrary functionality. Presently these take the form of "object type plugins", where server-side code can specify how an object could be serialized and/or communicate with a client. This gRPC stream is somewhat lower level than the plugin API, giving the server and client APIs features to correctly establish and control the stream. At this time, this is limited to a "ConnectRequest" to start the call. The first message sent to the server is expected to have a ConnectRequest, indicating which export ticket to connect to. It is an error for the client to attempt to connect to an object that has no plugin for its object type installed. The first request sent by the client should be a ConnectRequest. No other client message should be sent until the server responds. The server will respond with Data as soon as it is able (i.e. once the object in question has been resolved and the plugin has responded), indicating that the request was successful. After that point, the client may send Data requests. All replies from the server to the client contain Data instances. When sent from the server to the client, Data contains a bytes payload created by the server implementation of the plugin, and server-created export tickets containing any object references specified to be sent by the server-side plugin. As server-created exports, they are already resolved, and can be fetched or otherwise referenced right away. The client API is expected to wrap those tickets in appropriate objects, and the client is expected to release those tickets as appropriate, according to the plugin's use case. Note that it is possible for the "type" field to be null, indicating that there is no corresponding ObjectType plugin for these exported objects. 
This limits the client to specifying those tickets in a subsequent request, or releasing the ticket to let the object be garbage collected on the server. All Data instances sent from the client likewise contain a bytes payload, and may contain references to objects that already exist or may soon exist on the server, not just tickets sent by this same plugin. Note however that if those tickets are not yet resolved, neither the current Data nor subsequent requests can be processed by the plugin, as the required references can't be resolved. Presently there is no explicit "close" message to send, but plugin implementations can devise their own "half-close" protocol if they so choose. For now, if one end closes the connection, the other is expected to follow suit by closing their end too. At present, if there is an error with the stream, it is conveyed to the client in the usual gRPC fashion, but the server plugin will only be informed that the stream closed. |
OpenMessageStream | StreamRequest | StreamResponse stream | Half of the browser-based implementation for MessageStream (browsers can't do bidirectional streams without websockets). |
NextMessageStream | StreamRequest | BrowserNextResponse | Other half of the browser-based implementation for MessageStream. |
Method Name | Option |
FetchObject | true |
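The connect-then-data ordering described for MessageStream can be modeled as a small client-side state machine. This is an illustrative sketch only: the message kinds mirror the `ConnectRequest`/`Data` messages described above, but the class itself is not part of any Deephaven client API.

```python
# Illustrative state machine for the MessageStream client protocol:
# the first outbound message must be a ConnectRequest, and no Data
# may be sent until the server has responded with its first Data.
class MessageStreamClientState:
    def __init__(self):
        self.state = "NEW"  # NEW -> CONNECT_SENT -> READY -> CLOSED

    def send(self, kind):
        """kind is 'connect' or 'data'; raises on protocol violations."""
        if self.state == "NEW":
            if kind != "connect":
                raise RuntimeError("first message must be a ConnectRequest")
            self.state = "CONNECT_SENT"
        elif self.state == "CONNECT_SENT":
            raise RuntimeError("wait for the server's first Data response")
        elif self.state == "READY":
            if kind != "data":
                raise RuntimeError("only Data may be sent once connected")
        else:
            raise RuntimeError("stream is closed")

    def on_server_data(self):
        if self.state == "CONNECT_SENT":
            self.state = "READY"  # connect succeeded; Data is now allowed

    def close(self):
        self.state = "CLOSED"  # either side closing ends the stream
```

The same ordering applies to the browser-based variant, where the two half-streams are stitched together by the client library.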
Field | Type | Label | Description |
partitioned_table | Ticket | The ticket for the PartitionedTable object to query. |
|
key_table_ticket | Ticket | The ticket for the table containing the key to fetch from the partitioned table. |
|
result_id | Ticket | The ticket to use to hold the newly returned table. |
Field | Type | Label | Description |
partitioned_table | Ticket | The ticket for the PartitionedTable object to merge. |
|
result_id | Ticket | The ticket to use to hold the results of the merge operation. |
Field | Type | Label | Description |
table_id | Ticket |
|
|
result_id | Ticket |
|
|
key_column_names | string | repeated |
|
drop_keys | bool |
|
Deliberately empty response, use /ObjectService/FetchObject to read the object by ticket.
A message that describes a partitioned table, able to be sent as a plugin object to a client.
This object will also come with a ticket to the underlying table that can be used to get the
constituent tables by key.
Field | Type | Label | Description |
key_column_names | string | repeated | The names of the key columns. The underlying table will contain these columns - a client can subscribe to these columns to see what keys are present. |
constituent_column_name | string | The name of the column in the underlying table that contains the table represented by that row. |
|
unique_keys | bool | True if the keys will be unique, so any set of known keys can be queried using GetTable. |
|
constituent_definition_schema | bytes | Returns a Flight Message wrapping a Schema that will describe every table contained in this PartitionedTable. |
|
constituent_changes_permitted | bool | True if the underlying table may tick with updates. See PartitionedTable.constituentChangesPermitted() for more details. |
This service provides tools to create and query partitioned tables.
Method Name | Request Type | Response Type | Description |
PartitionBy | PartitionByRequest | PartitionByResponse | Transforms a table into a partitioned table, consisting of many separate tables, each individually addressable. The result will be a FetchObjectResponse populated with a PartitionedTable. |
Merge | MergeRequest | ExportedTableCreationResponse | Given a partitioned table, returns a table with the contents of all of the constituent tables. |
GetTable | GetTableRequest | ExportedTableCreationResponse | Given a partitioned table and a row described by another table's contents, returns a table that matched that row, if any. If none is present, NOT_FOUND will be sent in response. If more than one is present, FAILED_PRECONDITION will be sent in response. If the provided key table has any number of rows other than one, INVALID_ARGUMENT will be sent in response. The simplest way to generally use this is to subscribe to the key columns of the underlying table of a given PartitionedTable, then use /FlightService/DoPut to create a table with the desired keys, and pass that ticket to this service. After that request is sent (note that it is not required to wait for it to complete), that new table ticket can be used to make this GetTable request. |
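The error cases for GetTable reduce to a mapping from the key table's row count and the number of matching constituents to a gRPC status. This hypothetical helper restates the rules above as code; it is not server code.

```python
def get_table_status(key_table_rows: int, matching_constituents: int) -> str:
    """Map GetTable inputs to the gRPC status described in the service docs."""
    if key_table_rows != 1:
        return "INVALID_ARGUMENT"     # key table must contain exactly one row
    if matching_constituents == 0:
        return "NOT_FOUND"            # no constituent table matched the key
    if matching_constituents > 1:
        return "FAILED_PRECONDITION"  # key matched more than one constituent
    return "OK"
```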
Intentionally empty and is here for backwards compatibility should this API change.
Field | Type | Label | Description |
ticket | Ticket |
|
|
export_state | ExportNotification.State |
|
|
context | string | any errors will include an id that can be used to find details of the error in the logs |
|
dependent_handle | string | will be set to an identifier of the dependency that cascaded the error if applicable |
Intentionally empty and is here for backwards compatibility should this API change.
Field | Type | Label | Description |
source_id | Ticket |
|
|
result_id | Ticket |
|
Intentionally empty and is here for backwards compatibility should this API change.
The request that a client provides to a server on handshake.
Field | Type | Label | Description |
auth_protocol | sint32 | Deprecated. A defined protocol version. Deephaven's OSS protocols are as follows: - protocol = 0: most recent HandshakeResponse payload - protocol = 1: payload is BasicAuth |
|
payload | bytes | Deprecated. Arbitrary auth/handshake info. |
Name | Option |
auth_protocol | true |
payload | true |
Servers respond with information needed to make subsequent requests tied to this session.
The session token should be refreshed prior to the deadline, which is represented as milliseconds since the
epoch. Clients are encouraged to use the expiration delay and cookie deadline to determine a good time to refresh.
Field | Type | Label | Description |
metadata_header | bytes | Deprecated. The metadata header to identify the session. This value is static and defined via configuration. |
|
session_token | bytes | Deprecated. Arbitrary session_token to assign to the value to the provided metadata header. |
|
token_deadline_time_millis | sint64 | Deprecated. When this session_token will be considered invalid by the server. |
|
token_expiration_delay_millis | sint64 | Deprecated. The length of time that this token was intended to live. Note that `refreshSessionToken` may return the existing token to reduce overhead and to prevent denial-of-service caused by refreshing too frequently. |
Name | Option |
metadata_header | true |
session_token | true |
token_deadline_time_millis | true |
token_expiration_delay_millis | true |
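As the intro above notes, clients are encouraged to use the expiration delay and deadline to pick a refresh time. One reasonable (but not prescribed) policy is to refresh halfway through the remaining token lifetime:

```python
def next_refresh_millis(token_deadline_time_millis: int,
                        token_expiration_delay_millis: int) -> int:
    """Pick a refresh time comfortably before the token deadline.

    Refreshing at the deadline minus half the expiration delay is one
    simple policy; the server does not mandate a particular schedule.
    """
    return token_deadline_time_millis - token_expiration_delay_millis // 2
```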
Field | Type | Label | Description |
source_id | Ticket |
|
|
result_id | Ticket |
|
Intentionally empty and is here for backwards compatibility should this API change.
Field | Type | Label | Description |
id | Ticket |
|
Intentionally empty and is here for backwards compatibility should this API change.
Intentionally empty and is here for backwards compatibility should this API change.
Field | Type | Label | Description |
abnormal_termination | bool | whether or not this termination is expected |
|
reason | string | if additional information is available then provide it in this field |
|
is_from_uncaught_exception | bool | if this is due to an exception, whether or not it was uncaught |
|
stack_traces | TerminationNotificationResponse.StackTrace | repeated | if applicable, the list of stack traces in reverse causal order |
Field | Type | Label | Description |
type | string |
|
|
message | string |
|
|
elements | string | repeated |
|
Field | Type | Label | Description |
type | string | The protobuf type of the auth payload. |
|
payload | bytes | The serialized payload of the protobuf instance. |
Name | Number | Description |
UNKNOWN | 0 | This item is a dependency, but hasn't been registered yet. |
PENDING | 1 | This item has pending dependencies. |
PUBLISHING | 2 | This item is a client-supplied dependency with no guarantee on timing to EXPORT state. |
QUEUED | 3 | This item is eligible for resolution and has been submitted to the executor. |
RUNNING | 4 | This item is now executing. |
EXPORTED | 5 | This item was successfully exported and is currently being retained. |
RELEASED | 6 | This item was successfully released. |
CANCELLED | 7 | The user cancelled the item before it exported. |
FAILED | 8 | This item had a specific error. |
DEPENDENCY_FAILED | 9 | One of this item's dependencies had an internal error before it exported. |
DEPENDENCY_NEVER_FOUND | 10 | One of this item's dependencies was already released or never submitted within the out-of-order window. |
DEPENDENCY_CANCELLED | 11 | Dependency was cancelled, causing a cascading cancel that applies to this export. |
DEPENDENCY_RELEASED | 12 | Dependency was already released, causing a cascading failure that applies to this export. |
User supplied Flight.Ticket(s) should begin with an 'e' byte followed by a signed little-endian int. The client is only
allowed to use the positive exportId key-space (client-generated exportIds should be greater than 0). The client is
encouraged to use packed ranges of ids, as this yields the smallest server-side footprint for long-running sessions.
The client is responsible for releasing all Flight.Tickets that they create or that were created for them via a gRPC
call. The documentation for the gRPC call will indicate that the exports must be released. Exports that need to be
released will always be communicated over the session's ExportNotification stream.
When a session ends, either explicitly or due to timeout, all exported objects in that session are released
automatically.
Some parts of the API return a Flight.Ticket that does not need to be released. It is not an error to attempt to
release them.
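The ticket layout stated above ('e' byte, then a signed little-endian int, positive export ids only) can be sketched directly. This uses only the byte layout documented here; it is an illustration, not a client library.

```python
import struct

def encode_export_ticket(export_id: int) -> bytes:
    """Build a client-managed export ticket: 'e' + signed little-endian int32."""
    if export_id <= 0:
        raise ValueError("client-generated export ids must be greater than 0")
    return b"e" + struct.pack("<i", export_id)

def decode_export_ticket(ticket: bytes) -> int:
    """Recover the export id from a ticket produced by encode_export_ticket."""
    if len(ticket) != 5 or ticket[:1] != b"e":
        raise ValueError("not a client-managed export ticket")
    return struct.unpack("<i", ticket[1:])[0]
```

Packing ids densely from 1 upward, as encouraged above, keeps the server-side bookkeeping small for long-running sessions.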
Method Name | Request Type | Response Type | Description |
NewSession | HandshakeRequest | HandshakeResponse | Handshake between client and server to create a new session. The response includes a metadata header name and the token to send on every subsequent request. The auth mechanisms here are unary to best support grpc-web. Deprecated: Please use Flight's Handshake or http authorization headers instead. |
RefreshSessionToken | HandshakeRequest | HandshakeResponse | Keep-alive a given token to ensure that a session is not cleaned prematurely. The response may include an updated token that should replace the existing token for subsequent requests. Deprecated: Please use Flight's Handshake with an empty payload. |
CloseSession | HandshakeRequest | CloseSessionResponse | Proactively close an open session. Sessions will automatically close on timeout. When a session is closed, all unreleased exports will be automatically released. |
Release | ReleaseRequest | ReleaseResponse | Attempts to release an export by its ticket. Returns true if an existing export was found. It is the client's responsibility to release all resources they no longer want the server to hold on to. Proactively cancels work; do not release a ticket that is needed by dependent work that has not yet finished (i.e. the dependencies that are staying around should first be in EXPORTED state). |
ExportFromTicket | ExportRequest | ExportResponse | Makes a copy from a source ticket to a client managed result ticket. The source ticket does not need to be a client managed ticket. |
PublishFromTicket | PublishRequest | PublishResponse | Makes a copy from a source ticket and publishes to a result ticket. Neither the source ticket, nor the destination ticket, need to be a client managed ticket. |
ExportNotifications | ExportNotificationRequest | ExportNotification stream | Establish a stream to manage all session exports, including those lost due to partially complete rpc calls. New streams will flush notifications for all un-released exports, prior to seeing any new or updated exports for all live exports. After the refresh of existing state, subscribers will receive notifications of new and updated exports. An export id of zero will be sent to indicate all pre-existing exports have been sent. |
TerminationNotification | TerminationNotificationRequest | TerminationNotificationResponse | Receive a best-effort message on-exit indicating why this server is exiting. Reception of this message cannot be guaranteed. |
Method Name | Option |
NewSession | true |
RefreshSessionToken | true |
Field | Type | Label | Description |
path | string | The path to the directory to create |
Field | Type | Label | Description |
path | string | The path to the item to delete. |
Field | Type | Label | Description |
path | string | The path to the file to read |
|
etag | string | optional | If present, tells the server to not send a result if the etag matches the current file's content. |
Field | Type | Label | Description |
contents | bytes | Contains the contents of the file, unless the returned etag matches the requested etag. |
|
etag | string | optional | Represents the current etag of the requested file. If an etag was in the request and this matches, contents should be ignored, and the existing client copy of the file is already correct. In all other cases, this etag can be used in future requests to see if the file's contents are different. |
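The etag handshake above amounts to content-hash caching. This sketch simulates the server side of FetchFile; the choice of SHA-256 is an assumption, since the etag is opaque and the server may compute it differently.

```python
import hashlib
from typing import Optional, Tuple

def fetch_file(contents: bytes,
               request_etag: Optional[str]) -> Tuple[Optional[bytes], str]:
    """Return (contents_or_None, etag), omitting contents on an etag match."""
    etag = hashlib.sha256(contents).hexdigest()  # opaque content hash (assumed scheme)
    if request_etag == etag:
        return None, etag  # client copy is current; skip sending the payload
    return contents, etag
```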
Field | Type | Label | Description |
path | string | The path to the item that this message describes. |
|
type | ItemType | The type of this item, either file or directory. |
|
size | sint64 | If this message represents a file, this is the size of the file. |
|
etag | string | optional | Opaque string value representing a hash of the contents of this file, if available. |
Field | Type | Label | Description |
path | string | The path to the directory to list. empty to list top level |
|
filter_glob | string | optional | A pattern to filter for, with "?" to match any one character, "*" to match any number of characters, and "{}"s to hold a comma-separated list of possible matches. The format follows Java's FileSystem.getPathMatcher (see https://docs.oracle.com/javase/8/docs/api/java/nio/file/FileSystem.html#getPathMatcher-java.lang.String-), except without allowing subdirectories with / or **. |
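The filter_glob syntax (`?`, `*`, `{a,b}`) can be approximated outside Java by translating the glob into a regular expression. This simplified sketch handles only the three constructs named above, and none of the other features of Java's getPathMatcher.

```python
import re

def glob_to_regex(glob: str) -> "re.Pattern[str]":
    """Translate ?, * and {a,b} (no subdirectories, per the docs) to a regex."""
    out = []
    i = 0
    while i < len(glob):
        c = glob[i]
        if c == "?":
            out.append("[^/]")   # any one character
        elif c == "*":
            out.append("[^/]*")  # any number of characters
        elif c == "{":
            j = glob.index("}", i)  # comma-separated list of alternatives
            alts = glob[i + 1:j].split(",")
            out.append("(?:" + "|".join(map(re.escape, alts)) + ")")
            i = j
        else:
            out.append(re.escape(c))
        i += 1
    return re.compile("^" + "".join(out) + "$")
```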
Field | Type | Label | Description |
items | ItemInfo | repeated | List of items found in the specified directory. |
canonical_path | string | The canonical path of the listed directory. This is useful to recognize the basename of the items in a cross-platform way. |
Requests to move a file to a new path, which may be in a different directory. Presently it is not
permitted to overwrite an existing file in this way.
Field | Type | Label | Description |
old_path | string | The path where the file currently exists |
|
new_path | string | The path where the file should be moved to |
|
allow_overwrite | bool | True to permit replacing an existing file, false to require that no file already exists with that name. |
Field | Type | Label | Description |
allow_overwrite | bool | True to permit replacing an existing file, false to require that no file already exists with that name. |
|
path | string | The path to the file to write contents to |
|
contents | bytes | The contents to use when creating the file, or to use to replace the file. |
Field | Type | Label | Description |
etag | string | optional | Represents the etag of the saved contents, so the client can check for external changes. |
Name | Number | Description |
UNKNOWN | 0 | Should not be used, exists only to indicate that this was left unset |
DIRECTORY | 1 | |
FILE | 2 |
Shared storage management service.
Operations may fail (or omit data) if the current session does not have permission to read or write that resource.
Paths will be "/" delimited and must start with a leading slash.
Method Name | Request Type | Response Type | Description |
ListItems | ListItemsRequest | ListItemsResponse | Lists the files and directories present in a given directory. Will return an error if the directory does not exist. |
FetchFile | FetchFileRequest | FetchFileResponse | Reads the file at the given path. Client can optionally specify an etag, asking the server not to send the file if it hasn't changed. |
SaveFile | SaveFileRequest | SaveFileResponse | Can create new files or modify existing with client provided contents. |
MoveItem | MoveItemRequest | MoveItemResponse | Moves a file from one path to another. |
CreateDirectory | CreateDirectoryRequest | CreateDirectoryResponse | Creates a directory at the given path. |
DeleteItem | DeleteItemRequest | DeleteItemResponse | Deletes the file or directory at the given path. Directories must be empty to be deleted. |
Field | Type | Label | Description |
abs_sum | AggSpec.AggSpecAbsSum |
|
|
approximate_percentile | AggSpec.AggSpecApproximatePercentile |
|
|
avg | AggSpec.AggSpecAvg |
|
|
count_distinct | AggSpec.AggSpecCountDistinct |
|
|
distinct | AggSpec.AggSpecDistinct |
|
|
first | AggSpec.AggSpecFirst |
|
|
formula | AggSpec.AggSpecFormula |
|
|
freeze | AggSpec.AggSpecFreeze |
|
|
group | AggSpec.AggSpecGroup |
|
|
last | AggSpec.AggSpecLast |
|
|
max | AggSpec.AggSpecMax |
|
|
median | AggSpec.AggSpecMedian |
|
|
min | AggSpec.AggSpecMin |
|
|
percentile | AggSpec.AggSpecPercentile |
|
|
sorted_first | AggSpec.AggSpecSorted |
|
|
sorted_last | AggSpec.AggSpecSorted |
|
|
std | AggSpec.AggSpecStd |
|
|
sum | AggSpec.AggSpecSum |
|
|
t_digest | AggSpec.AggSpecTDigest |
|
|
unique | AggSpec.AggSpecUnique |
|
|
weighted_avg | AggSpec.AggSpecWeighted |
|
|
weighted_sum | AggSpec.AggSpecWeighted |
|
|
var | AggSpec.AggSpecVar |
|
Field | Type | Label | Description |
percentile | double | Percentile. Must be in range [0.0, 1.0]. |
|
compression | double | optional | T-Digest compression factor. Must be greater than or equal to 1. 1000 is extremely large. When not specified, the server will choose a compression value. |
Field | Type | Label | Description |
count_nulls | bool | Whether null input values should be included when counting the distinct input values. |
Field | Type | Label | Description |
include_nulls | bool | Whether null input values should be included in the distinct output values. |
Field | Type | Label | Description |
formula | string | The formula to use to calculate output values from grouped input values. |
|
param_token | string | The formula parameter token to be replaced with the input column name for evaluation. |
Field | Type | Label | Description |
average_evenly_divided | bool | Whether to average the highest low-bucket value and lowest high-bucket value, when the low-bucket and high-bucket are of equal size. Only applies to numeric types. |
Field | Type | Label | Description |
null_value | NullValue |
|
|
string_value | string |
|
|
int_value | sint32 |
|
|
long_value | sint64 |
|
|
float_value | float |
|
|
double_value | double |
|
|
bool_value | bool |
|
|
byte_value | sint32 | Expected to be in range [Byte.MIN_VALUE, Byte.MAX_VALUE] |
|
short_value | sint32 | Expected to be in range [Short.MIN_VALUE, Short.MAX_VALUE] |
|
char_value | sint32 | Expected to be in range [0x0000, 0xFFFF] TODO(deephaven-core#3212): Expand AggSpecNonUniqueSentinel types |
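The three integer-like sentinel fields above reuse sint32 but carry narrower documented ranges. A small validator makes those bounds explicit (Java's Byte and Short bounds, and the char range [0x0000, 0xFFFF]):

```python
# Documented bounds for AggSpecNonUniqueSentinel's narrowed sint32 fields.
SENTINEL_RANGES = {
    "byte_value": (-128, 127),       # Byte.MIN_VALUE .. Byte.MAX_VALUE
    "short_value": (-32768, 32767),  # Short.MIN_VALUE .. Short.MAX_VALUE
    "char_value": (0x0000, 0xFFFF),  # unsigned 16-bit char range
}

def sentinel_in_range(field: str, value: int) -> bool:
    """Check a sentinel value against its documented range."""
    lo, hi = SENTINEL_RANGES[field]
    return lo <= value <= hi
```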
Field | Type | Label | Description |
percentile | double | The percentile to calculate. Must be in the range [0.0, 1.0]. |
|
average_evenly_divided | bool | Whether to average the highest low-bucket value and lowest high-bucket value, when the low-bucket and high-bucket are of equal size. Only applies to numeric types. |
Field | Type | Label | Description |
columns | AggSpec.AggSpecSortedColumn | repeated | Using a message instead of string to support backwards-compatibility in the future |
Field | Type | Label | Description |
column_name | string | TODO(deephaven-core#821): SortedFirst / SortedLast aggregations with sort direction |
Field | Type | Label | Description |
compression | double | optional | T-Digest compression factor. Must be greater than or equal to 1. 1000 is extremely large. When not specified, the server will choose a compression value. |
Field | Type | Label | Description |
include_nulls | bool | Whether to include null values as a distinct value for determining if there is only one unique value to output |
|
non_unique_sentinel | AggSpec.AggSpecNonUniqueSentinel | The output value to use for groups that don't have a single unique input value |
Field | Type | Label | Description |
weight_column | string | Column name for the source of input weights. |
Field | Type | Label | Description |
result_id | Ticket |
|
|
source_id | TableReference |
|
|
spec | AggSpec |
|
|
group_by_columns | string | repeated |
|
Field | Type | Label | Description |
result_id | Ticket |
|
|
source_id | TableReference |
|
|
initial_groups_id | TableReference | A table whose distinct combinations of values for the group_by_columns should be used to create an initial set of aggregation groups. All other columns are ignored. This is useful in combination with preserve_empty == true to ensure that particular groups appear in the result table, or with preserve_empty == false to control the encounter order for a collection of groups and thus their relative order in the result. Changes to initial_groups_id are not expected or handled; if initial_groups_id is a refreshing table, only its contents at instantiation time will be used. If initial_groups_id is not present, the result will be the same as if a table with no rows was supplied. |
|
preserve_empty | bool | Whether to keep result rows for groups that are initially empty or become empty as a result of updates. Each aggregation operator defines its own value for empty groups. |
|
aggregations | Aggregation | repeated |
|
group_by_columns | string | repeated |
|
Field | Type | Label | Description |
columns | Aggregation.AggregationColumns |
|
|
count | Aggregation.AggregationCount |
|
|
first_row_key | Aggregation.AggregationRowKey |
|
|
last_row_key | Aggregation.AggregationRowKey |
|
|
partition | Aggregation.AggregationPartition |
|
|
formula | Aggregation.AggregationFormula |
|
Field | Type | Label | Description |
spec | AggSpec |
|
|
match_pairs | string | repeated |
|
Field | Type | Label | Description |
column_name | string | The output column name |
Field | Type | Label | Description |
selectable | Selectable |
|
Field | Type | Label | Description |
column_name | string |
|
|
include_group_by_columns | bool |
|
Field | Type | Label | Description |
column_name | string |
|
Field | Type | Label | Description |
result_id | Ticket |
|
|
left_id | TableReference |
|
|
right_id | TableReference |
|
|
exact_match_columns | string | repeated |
|
as_of_column | string | This is a comparison expression for the inexact as-of join match. In the case of an as-of join (aj), the comparison operator can be either ">=" or ">"; for example, "Foo>=Bar" or "Foo>Bar". In the case of a reverse-as-of join (raj), the comparison operator can be either "<=" or "<"; for example, "Foo<=Bar" or "Foo<Bar". In the case where the column name exists in both tables, the single column name can be used and it will inherit the default comparison operator: in the aj case, "Foo" is equivalent to "Foo>=Foo"; in the raj case, "Foo" is equivalent to "Foo<=Foo". |
|
columns_to_add | string | repeated |
|
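Per the description of `as_of_column` above, the expression is either `left<op>right` or a bare column name that inherits the default operator. A hedged sketch of parsing it (not part of any Deephaven API):

```python
def parse_as_of_column(expr: str, reverse: bool = False):
    """Split an as-of match expression into (left_column, op, right_column).

    A bare column name inherits the default operator: '>=' for aj,
    '<=' for raj (reverse=True), matching the documented behavior.
    """
    for op in (">=", "<=", ">", "<"):  # check two-char operators first
        if op in expr:
            left, right = expr.split(op, 1)
            return left.strip(), op, right.strip()
    default = "<=" if reverse else ">="
    name = expr.strip()
    return name, default, name  # "Foo" is equivalent to "Foo>=Foo" (aj)
```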
TODO: merge AND and OR into one and give them an "operation"?
Field | Type | Label | Description |
filters | Condition | repeated |
|
Field | Type | Label | Description |
source_id | TableReference |
|
|
result_id | Ticket |
|
Field | Type | Label | Description |
result_id | Ticket |
|
|
left_id | TableReference |
|
|
right_id | TableReference |
|
|
columns_to_match | string | repeated |
|
columns_to_add | string | repeated |
|
as_of_match_rule | AsOfJoinTablesRequest.MatchRule | Direction to search to find a match. LESS_THAN_EQUAL and LESS_THAN will be used to make a Table.aj() call, and GREATER_THAN_EQUAL and GREATER_THAN will be used to make a Table.raj() call. |
Field | Type | Label | Description |
ops | BatchTableRequest.Operation | repeated |
|
Name | Option |
as_of_join | true |
Field | Type | Label | Description |
result_id | Ticket |
|
|
source_id | TableReference |
|
|
column_name | string | The name of the column in the source table to read when generating statistics. |
|
unique_value_limit | int32 | optional | For non-numeric, non-date types, specify the max number of unique values to return, sorted by popularity. Leave unset to use server default, specify zero to skip. |
Field | Type | Label | Description |
result_id | Ticket |
|
|
source_id | TableReference |
|
|
aggregates | ComboAggregateRequest.Aggregate | repeated |
|
group_by_columns | string | repeated |
|
force_combo | bool | don't use direct single-aggregate table operations even if there is only a single aggregate |
Field | Type | Label | Description |
type | ComboAggregateRequest.AggType |
|
|
match_pairs | string | repeated | used in all aggregates except countBy |
column_name | string | countBy result (output) column OR weighted avg weight (input) column, otherwise unused |
|
percentile | double | required by percentileBy aggregates, otherwise unused |
|
avg_median | bool | used in percentileBy only |
Field | Type | Label | Description |
operation | CompareCondition.CompareOperation |
|
|
case_sensitivity | CaseSensitivity |
|
|
lhs | Value |
|
|
rhs | Value |
|
Field | Type | Label | Description |
and | AndCondition |
|
|
or | OrCondition |
|
|
not | NotCondition |
|
|
compare | CompareCondition |
|
|
in | InCondition |
|
|
invoke | InvokeCondition |
|
|
is_null | IsNullCondition |
|
|
matches | MatchesCondition |
|
|
contains | ContainsCondition |
|
|
search | SearchCondition |
|
Field | Type | Label | Description |
reference | Reference |
|
|
search_string | string |
|
|
case_sensitivity | CaseSensitivity |
|
|
match_type | MatchType |
|
Field | Type | Label | Description |
result_id | Ticket |
|
|
source_table_id | TableReference | Optional, either this or schema must be specified, not both. |
|
schema | bytes | Schema as described in Arrow Message.fbs::Message. Optional, either this or source_table_id must be specified. |
|
kind | CreateInputTableRequest.InputTableKind | Specifies what type of input table to create. |
Field | Type | Label | Description |
in_memory_append_only | CreateInputTableRequest.InputTableKind.InMemoryAppendOnly |
|
|
in_memory_key_backed | CreateInputTableRequest.InputTableKind.InMemoryKeyBacked |
|
|
blink | CreateInputTableRequest.InputTableKind.Blink |
|
Creates an in-memory append-only table - rows cannot be modified or deleted.
Creates an in-memory table that supports updates and deletes by keys.
Field | Type | Label | Description |
key_columns | string | repeated |
|
Field | Type | Label | Description |
result_id | Ticket |
|
|
left_id | TableReference |
|
|
right_id | TableReference |
|
|
columns_to_match | string | repeated |
|
columns_to_add | string | repeated |
|
reserve_bits | int32 | the number of bits of key-space to initially reserve per group; default is 10 |
Field | Type | Label | Description |
result_id | Ticket |
|
|
source_id | TableReference |
|
|
column_names | string | repeated |
|
Field | Type | Label | Description |
result_id | Ticket |
|
|
size | sint64 |
|
Field | Type | Label | Description |
result_id | Ticket |
|
|
left_id | TableReference |
|
|
right_id | TableReference |
|
|
columns_to_match | string | repeated |
|
columns_to_add | string | repeated |
|
Field | Type | Label | Description |
result_id | TableReference |
|
|
success | bool | If this is part of a batch, you may receive creation messages that indicate the sub-operation failed. |
|
error_info | string | If this is part of a batch, this errorInfo will be the message provided |
|
schema_header | bytes | Schema as described in Arrow Message.fbs::Message. |
|
is_static | bool | Whether or not this table might change. |
|
size | sint64 | The current number of rows for this table. If this is negative, the table isn't coalesced, meaning the size isn't known without scanning partitions. Typically, the client should filter the data by the partitioning columns first. |
Field | Type | Label | Description |
export_id | Ticket |
|
|
size | sint64 |
|
|
update_failure_message | string |
|
Intentionally empty and is here for backwards compatibility should this API change.
Field | Type | Label | Description |
source_id | TableReference |
|
|
result_id | Ticket |
|
Field | Type | Label | Description |
result_id | Ticket |
|
|
source_id | TableReference |
|
|
filters | Condition | repeated |
|
Field | Type | Label | Description |
result_id | Ticket |
|
|
source_id | TableReference |
|
Field | Type | Label | Description |
result_id | Ticket |
|
|
source_id | TableReference |
|
|
num_rows | sint64 |
|
|
group_by_column_specs | string | repeated |
|
Field | Type | Label | Description |
result_id | Ticket |
|
|
source_id | TableReference |
|
|
num_rows | sint64 |
|
Field | Type | Label | Description |
target | Value |
|
|
candidates | Value | repeated |
|
case_sensitivity | CaseSensitivity |
|
|
match_type | MatchType |
|
Field | Type | Label | Description |
method | string |
|
|
target | Value |
|
|
arguments | Value | repeated |
|
Field | Type | Label | Description |
reference | Reference |
|
Field | Type | Label | Description |
result_id | Ticket |
|
|
left_id | TableReference |
|
|
right_id | TableReference |
|
|
columns_to_match | string | repeated |
|
columns_to_add | string | repeated |
|
Field | Type | Label | Description |
string_value | string |
|
|
double_value | double |
|
|
bool_value | bool |
|
|
long_value | sint64 |
|
|
nano_time_value | sint64 | nanos since the epoch |
Field | Type | Label | Description |
reference | Reference |
|
|
regex | string |
|
|
case_sensitivity | CaseSensitivity |
|
|
match_type | MatchType |
|
Field | Type | Label | Description |
precision | sint32 |
|
|
rounding_mode | MathContext.RoundingMode |
|
Field | Type | Label | Description |
result_id | Ticket |
|
|
source_ids | TableReference | repeated |
|
key_column | string | if specified, the result will be sorted by this column |
Field | Type | Label | Description |
result_id | Ticket |
|
|
source_id | TableReference |
|
Field | Type | Label | Description |
source_id | TableReference | The source table to include in the multi-join output table. |
|
columns_to_match | string | repeated | The key columns to match; may be renamed to match other source table key columns. |
columns_to_add | string | repeated | The columns from the source table to include; if not provided, all columns are included. |
Field | Type | Label | Description |
result_id | Ticket |
|
|
multi_join_inputs | MultiJoinInput | repeated | The source table input specifications. One or more must be provided. |
Field | Type | Label | Description |
result_id | Ticket |
|
|
left_id | TableReference |
|
|
right_id | TableReference |
|
|
columns_to_match | string | repeated |
|
columns_to_add | string | repeated |
|
Field | Type | Label | Description |
filter | Condition |
|
Field | Type | Label | Description |
filters | Condition | repeated |
|
Field | Type | Label | Description |
result_id | Ticket |
|
|
left_id | TableReference |
|
|
right_id | TableReference |
|
|
exact_match_columns | string | repeated |
|
left_start_column | string | Provide detailed range match parameters for the range join (alternative to providing `range_match`) |
|
range_start_rule | RangeJoinTablesRequest.RangeStartRule |
|
|
right_range_column | string |
|
|
range_end_rule | RangeJoinTablesRequest.RangeEndRule |
|
|
left_end_column | string |
|
|
aggregations | Aggregation | repeated |
|
range_match | string | Specifies the range match parameters as a parseable string. Providing `range_match` in the GRPC call is the alternative to detailed range match parameters provided in the `left_start_column`, `range_start_rule`, `right_range_column`, `range_end_rule`, and `left_end_column` fields. |
| Field | Type | Label | Description |
| --- | --- | --- | --- |
| column_name | string |  |  |
| Field | Type | Label | Description |
| --- | --- | --- | --- |
| result_id | Ticket |  |  |
| source_id | TableReference |  |  |
| pixel_count | int32 |  |  |
| zoom_range | RunChartDownsampleRequest.ZoomRange |  |  |
| x_column_name | string |  |  |
| y_column_names | string | repeated |  |
| Field | Type | Label | Description |
| --- | --- | --- | --- |
| min_date_nanos | int64 | optional |  |
| max_date_nanos | int64 | optional |  |
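A common way to implement the pixel-based downsampling described for RunChartDownsample is to bucket rows by pixel column and keep the extreme Y values per bucket, so zoomed-out charts stay visually faithful. The sketch below is only an illustration of that idea, not Deephaven's actual algorithm; `downsample` and its parameters are hypothetical names:

```python
def downsample(points, pixel_count, x_min, x_max):
    """Bucket (x, y) points into pixel_count columns over [x_min, x_max];
    keep the (min, max) y per bucket. A sketch of min/max chart
    downsampling, not the server's exact implementation."""
    buckets = {}
    span = (x_max - x_min) / pixel_count
    for x, y in points:
        if not (x_min <= x <= x_max):  # zoom_range-style filtering
            continue
        i = min(int((x - x_min) / span), pixel_count - 1)
        lo, hi = buckets.get(i, (y, y))
        buckets[i] = (min(lo, y), max(hi, y))
    return buckets

pts = [(0, 5.0), (1, 9.0), (2, 1.0), (3, 4.0)]
b = downsample(pts, pixel_count=2, x_min=0, x_max=3)
# bucket 0 covers x in [0, 1.5): (5.0, 9.0); bucket 1: (1.0, 4.0)
```

Keeping both the minimum and maximum per pixel preserves spikes that a simple "first point per bucket" scheme would drop.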
search
| Field | Type | Label | Description |
| --- | --- | --- | --- |
| search_string | string |  |  |
| optional_references | Reference | repeated |  |
| Field | Type | Label | Description |
| --- | --- | --- | --- |
| source_id | Ticket |  |  |
| starting_row | sint64 |  |  |
| column_name | string |  |  |
| seek_value | Literal |  |  |
| insensitive | bool |  |  |
| contains | bool |  |  |
| is_backward | bool |  |  |
| Field | Type | Label | Description |
| --- | --- | --- | --- |
| result_row | sint64 |  |  |
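The SeekRowRequest fields above read as a search specification: start at `starting_row`, compare `seek_value` against `column_name` values (optionally case-insensitively, optionally as a substring), scanning forward or backward. A rough plain-Python sketch of those semantics over a list of strings follows; the exact server behavior (e.g. whether the search wraps) may differ:

```python
def seek_row(values, starting_row, seek_value,
             insensitive=False, contains=False, is_backward=False):
    """Sketch of SeekRowRequest semantics over a plain list of strings.
    Returns the matching row index, or -1 (mirroring result_row)."""
    def matches(v):
        a, b = (v.lower(), seek_value.lower()) if insensitive else (v, seek_value)
        return (b in a) if contains else (a == b)
    indices = (range(starting_row, -1, -1) if is_backward
               else range(starting_row, len(values)))
    for i in indices:
        if matches(values[i]):
            return i
    return -1

col = ["apple", "Banana", "cherry"]
hit_fwd = seek_row(col, 0, "banana", insensitive=True)           # -> 1
hit_bwd = seek_row(col, 2, "an", contains=True, is_backward=True)  # -> 1
```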
| Field | Type | Label | Description |
| --- | --- | --- | --- |
| result_id | Ticket |  |  |
| source_id | TableReference |  |  |
| column_names | string | repeated |  |
| Field | Type | Label | Description |
| --- | --- | --- | --- |
| result_id | Ticket |  |  |
| source_id | TableReference |  |  |
| column_specs | string | repeated |  |
| Field | Type | Label | Description |
| --- | --- | --- | --- |
| raw | string |  | ColumnExpression column_expression = 2; |
| Field | Type | Label | Description |
| --- | --- | --- | --- |
| result_id | Ticket |  |  |
| source_id | TableReference |  |  |
| first_position_inclusive | sint64 |  |  |
| last_position_exclusive | sint64 |  |  |
| Field | Type | Label | Description |
| --- | --- | --- | --- |
| result_id | Ticket |  |  |
| source_id | TableReference |  |  |
| Field | Type | Label | Description |
| --- | --- | --- | --- |
| result_id | Ticket |  |  |
| base_id | TableReference |  | The base table. |
| trigger_id | TableReference |  | The trigger table. |
| initial | bool |  | Whether the results should contain an initial snapshot. |
| incremental | bool |  | Whether the results should be incremental. |
| history | bool |  | Whether the results should keep history. |
| stamp_columns | string | repeated | Which columns to stamp from the trigger table. If empty, all columns from the trigger table are stamped. Allows renaming columns. |
| Field | Type | Label | Description |
| --- | --- | --- | --- |
| column_name | string |  |  |
| is_absolute | bool |  |  |
| direction | SortDescriptor.SortDirection |  |  |
| Field | Type | Label | Description |
| --- | --- | --- | --- |
| result_id | Ticket |  |  |
| source_id | TableReference |  |  |
| sorts | SortDescriptor | repeated |  |
| Field | Type | Label | Description |
| --- | --- | --- | --- |
| ticket | Ticket |  | A ticket to resolve to get the table. It's preferable to use export tickets in order to avoid races that are possible with tickets controlled by the server, but any ticket type will suffice as long as it resolves to a table. |
| batch_offset | sint32 |  | An offset into a BatchRequest's ops field, used to reference an intermediate operation which may not have been exported. Only valid to set when used in the context of a BatchRequest. |
| Field | Type | Label | Description |
| --- | --- | --- | --- |
| result_id | Ticket |  |  |
| start_time_nanos | sint64 |  |  |
| start_time_string | string |  |  |
| period_nanos | sint64 |  |  |
| period_string | string |  |  |
| blink_table | bool |  |  |
| Field | Type | Label | Description |
| --- | --- | --- | --- |
| result_id | Ticket |  |  |
| source_id | TableReference |  |  |
| null_fill | bool |  |  |
| columns_to_ungroup | string | repeated |  |
| Field | Type | Label | Description |
| --- | --- | --- | --- |
| result_id | Ticket |  |  |
| source_id | TableReference |  |  |
| filters | string | repeated |  |
Reusable options for the UpdateBy delta operation.
| Field | Type | Label | Description |
| --- | --- | --- | --- |
| null_behavior | UpdateByNullBehavior |  |  |
Reusable options for the UpdateBy exponential moving operations.
| Field | Type | Label | Description |
| --- | --- | --- | --- |
| on_null_value | BadDataBehavior |  |  |
| on_nan_value | BadDataBehavior |  |  |
| on_null_time | BadDataBehavior |  |  |
| on_negative_delta_time | BadDataBehavior |  |  |
| on_zero_delta_time | BadDataBehavior |  |  |
| big_value_context | MathContext |  |  |
Create a table with the same rowset as its parent that will perform the specified set of row-based operations on it. As opposed to `update`, these operations are more restricted but are capable of processing state between rows. If a set of group-by keys is provided, the table is grouped by those keys before the operations are applied.
| Field | Type | Label | Description |
| --- | --- | --- | --- |
| result_id | Ticket |  |  |
| source_id | TableReference |  |  |
| options | UpdateByRequest.UpdateByOptions |  |  |
| operations | UpdateByRequest.UpdateByOperation | repeated |  |
| group_by_columns | string | repeated |  |
| Field | Type | Label | Description |
| --- | --- | --- | --- |
| column | UpdateByRequest.UpdateByOperation.UpdateByColumn |  |  |

| Field | Type | Label | Description |
| --- | --- | --- | --- |
| spec | UpdateByRequest.UpdateByOperation.UpdateByColumn.UpdateBySpec |  |  |
| match_pairs | string | repeated |  |
| Field | Type | Label | Description |
| --- | --- | --- | --- |
| options | UpdateByDeltaOptions |  |  |
| Field | Type | Label | Description |
| --- | --- | --- | --- |
| options | UpdateByEmOptions |  |  |
| window_scale | UpdateByWindowScale |  |  |

| Field | Type | Label | Description |
| --- | --- | --- | --- |
| options | UpdateByEmOptions |  |  |
| window_scale | UpdateByWindowScale |  |  |

| Field | Type | Label | Description |
| --- | --- | --- | --- |
| options | UpdateByEmOptions |  |  |
| window_scale | UpdateByWindowScale |  |  |

| Field | Type | Label | Description |
| --- | --- | --- | --- |
| options | UpdateByEmOptions |  |  |
| window_scale | UpdateByWindowScale |  |  |

| Field | Type | Label | Description |
| --- | --- | --- | --- |
| options | UpdateByEmOptions |  |  |
| window_scale | UpdateByWindowScale |  |  |
| Field | Type | Label | Description |
| --- | --- | --- | --- |
| reverse_window_scale | UpdateByWindowScale |  |  |
| forward_window_scale | UpdateByWindowScale |  |  |

| Field | Type | Label | Description |
| --- | --- | --- | --- |
| reverse_window_scale | UpdateByWindowScale |  |  |
| forward_window_scale | UpdateByWindowScale |  |  |
| Field | Type | Label | Description |
| --- | --- | --- | --- |
| reverse_window_scale | UpdateByWindowScale |  |  |
| forward_window_scale | UpdateByWindowScale |  |  |
| formula | string |  |  |
| param_token | string |  |  |
| Field | Type | Label | Description |
| --- | --- | --- | --- |
| reverse_window_scale | UpdateByWindowScale |  |  |
| forward_window_scale | UpdateByWindowScale |  |  |

| Field | Type | Label | Description |
| --- | --- | --- | --- |
| reverse_window_scale | UpdateByWindowScale |  |  |
| forward_window_scale | UpdateByWindowScale |  |  |

| Field | Type | Label | Description |
| --- | --- | --- | --- |
| reverse_window_scale | UpdateByWindowScale |  |  |
| forward_window_scale | UpdateByWindowScale |  |  |

| Field | Type | Label | Description |
| --- | --- | --- | --- |
| reverse_window_scale | UpdateByWindowScale |  |  |
| forward_window_scale | UpdateByWindowScale |  |  |

| Field | Type | Label | Description |
| --- | --- | --- | --- |
| reverse_window_scale | UpdateByWindowScale |  |  |
| forward_window_scale | UpdateByWindowScale |  |  |

| Field | Type | Label | Description |
| --- | --- | --- | --- |
| reverse_window_scale | UpdateByWindowScale |  |  |
| forward_window_scale | UpdateByWindowScale |  |  |
| Field | Type | Label | Description |
| --- | --- | --- | --- |
| reverse_window_scale | UpdateByWindowScale |  |  |
| forward_window_scale | UpdateByWindowScale |  |  |
| weight_column | string |  | Column name for the source of input weights. |
| Field | Type | Label | Description |
| --- | --- | --- | --- |
| use_redirection | bool | optional | If redirections should be used for output sources instead of sparse array sources. If unset, defaults to server-provided defaults. |
| chunk_capacity | int32 | optional | The maximum chunk capacity. If unset, defaults to server-provided defaults. |
| max_static_sparse_memory_overhead | double | optional | The maximum fractional memory overhead allowable for sparse redirections as a fraction (e.g. 1.1 is 10% overhead). Values less than zero disable overhead checking, and result in always using the sparse structure. A value of zero results in never using the sparse structure. If unset, defaults to server-provided defaults. |
| initial_hash_table_size | int32 | optional | The initial hash table size. If unset, defaults to server-provided defaults. |
| maximum_load_factor | double | optional | The maximum load factor for the hash table. If unset, defaults to server-provided defaults. |
| target_load_factor | double | optional | The target load factor for the hash table. If unset, defaults to server-provided defaults. |
| math_context | MathContext |  | The math context. |
Reusable window scale message for the UpdateBy rolling operations.
| Field | Type | Label | Description |
| --- | --- | --- | --- |
| ticks | UpdateByWindowScale.UpdateByWindowTicks |  |  |
| time | UpdateByWindowScale.UpdateByWindowTime |  |  |
| Field | Type | Label | Description |
| --- | --- | --- | --- |
| ticks | double |  |  |

| Field | Type | Label | Description |
| --- | --- | --- | --- |
| column | string |  |  |
| nanos | sint64 |  |  |
| duration_string | string |  |  |
could also inline this to each place that uses it
| Field | Type | Label | Description |
| --- | --- | --- | --- |
| reference | Reference |  |  |
| literal | Literal |  |  |
| Field | Type | Label | Description |
| --- | --- | --- | --- |
| result_id | Ticket |  |  |
| left_id | TableReference |  |  |
| right_id | TableReference |  |  |
| inverted | bool |  | When true, becomes a "where not in" request |
| columns_to_match | string | repeated |  |
| Name | Number | Description |
| --- | --- | --- |
| LESS_THAN_EQUAL | 0 |  |
| LESS_THAN | 1 |  |
| GREATER_THAN_EQUAL | 2 |  |
| GREATER_THAN | 3 |  |
Directives for how to handle `null` and `NaN` values.

| Name | Number | Description |
| --- | --- | --- |
| BAD_DATA_BEHAVIOR_NOT_SPECIFIED | 0 | When not specified will use the server default. |
| THROW | 1 | Throw an exception and abort processing when bad data is encountered. |
| RESET | 2 | Reset the state for the bucket to `null` when invalid data is encountered. |
| SKIP | 3 | Skip and do not process the invalid data without changing state. |
| POISON | 4 | Allow the bad data to poison the result. This is only valid for use with NaN. |
| Name | Number | Description |
| --- | --- | --- |
| MATCH_CASE | 0 |  |
| IGNORE_CASE | 1 |  |
| Name | Number | Description |
| --- | --- | --- |
| SUM | 0 |  |
| ABS_SUM | 1 |  |
| GROUP | 2 |  |
| AVG | 3 |  |
| COUNT | 4 |  |
| FIRST | 5 |  |
| LAST | 6 |  |
| MIN | 7 |  |
| MAX | 8 |  |
| MEDIAN | 9 |  |
| PERCENTILE | 10 |  |
| STD | 11 |  |
| VAR | 12 |  |
| WEIGHTED_AVG | 13 |  |
| Name | Number | Description |
| --- | --- | --- |
| LESS_THAN | 0 |  |
| LESS_THAN_OR_EQUAL | 1 |  |
| GREATER_THAN | 2 |  |
| GREATER_THAN_OR_EQUAL | 3 |  |
| EQUALS | 4 |  |
| NOT_EQUALS | 5 |  |
| Name | Number | Description |
| --- | --- | --- |
| REGULAR | 0 |  |
| INVERTED | 1 |  |
| Name | Number | Description |
| --- | --- | --- |
| ROUNDING_MODE_NOT_SPECIFIED | 0 |  |
| UP | 1 |  |
| DOWN | 2 |  |
| CEILING | 3 |  |
| FLOOR | 4 |  |
| HALF_UP | 5 |  |
| HALF_DOWN | 6 |  |
| HALF_EVEN | 7 |  |
| UNNECESSARY | 8 |  |
| Name | Number | Description |
| --- | --- | --- |
| NULL_VALUE | 0 |  |
| Name | Number | Description |
| --- | --- | --- |
| END_UNSPECIFIED | 0 |  |
| GREATER_THAN | 1 |  |
| GREATER_THAN_OR_EQUAL | 2 |  |
| GREATER_THAN_OR_EQUAL_ALLOW_FOLLOWING | 3 |  |
| Name | Number | Description |
| --- | --- | --- |
| START_UNSPECIFIED | 0 |  |
| LESS_THAN | 1 |  |
| LESS_THAN_OR_EQUAL | 2 |  |
| LESS_THAN_OR_EQUAL_ALLOW_PRECEDING | 3 |  |
| Name | Number | Description |
| --- | --- | --- |
| UNKNOWN | 0 |  |
| DESCENDING | -1 |  |
| ASCENDING | 1 |  |
| REVERSE | 2 |  |
Directives for how to handle `null` and `NaN` values.

| Name | Number | Description |
| --- | --- | --- |
| NULL_BEHAVIOR_NOT_SPECIFIED | 0 | When not specified will use the server default. |
| NULL_DOMINATES | 1 | In the case of Current - null, the null dominates, so Column[i] - null = null |
| VALUE_DOMINATES | 2 | In the case of Current - null, the current value dominates, so Column[i] - null = Column[i] |
| ZERO_DOMINATES | 3 | In the case of Current - null, return zero, so Column[i] - null = 0 |
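A tiny sketch of how these directives change the Delta operation's handling of `Column[i] - Column[i-1]` when the previous value is null (`delta` is an illustrative helper written here, not a Deephaven API; only the documented Current - null case is modeled):

```python
def delta(cur, prev, null_behavior):
    """Sketch of UpdateByNullBehavior for the Delta op:
    computes Column[i] - Column[i-1], with None standing in for null."""
    if cur is None:
        return None  # assumed: a null current value yields null
    if prev is None:
        return {"NULL_DOMINATES": None,   # Column[i] - null = null
                "VALUE_DOMINATES": cur,   # Column[i] - null = Column[i]
                "ZERO_DOMINATES": 0}[null_behavior]
    return cur - prev

delta(5, None, "NULL_DOMINATES")   # -> None
delta(5, None, "VALUE_DOMINATES")  # -> 5
delta(5, None, "ZERO_DOMINATES")   # -> 0
```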
| Method Name | Request Type | Response Type | Description |
| --- | --- | --- | --- |
| GetExportedTableCreationResponse | Ticket | ExportedTableCreationResponse | Request an ETCR for this ticket. Ticket must reference a Table. |
| FetchTable | FetchTableRequest | ExportedTableCreationResponse | Fetches a Table from an existing source ticket and exports it to the local session result ticket. |
| ApplyPreviewColumns | ApplyPreviewColumnsRequest | ExportedTableCreationResponse | Create a table that has preview columns applied to an existing source table. |
| EmptyTable | EmptyTableRequest | ExportedTableCreationResponse | Create an empty table with the given column names and types. |
| TimeTable | TimeTableRequest | ExportedTableCreationResponse | Create a time table with the given start time and period. |
| DropColumns | DropColumnsRequest | ExportedTableCreationResponse | Drop columns from the parent table. |
| Update | SelectOrUpdateRequest | ExportedTableCreationResponse | Add columns to the given table using the given column specifications and the update table operation. |
| LazyUpdate | SelectOrUpdateRequest | ExportedTableCreationResponse | Add columns to the given table using the given column specifications and the lazyUpdate table operation. |
| View | SelectOrUpdateRequest | ExportedTableCreationResponse | Add columns to the given table using the given column specifications and the view table operation. |
| UpdateView | SelectOrUpdateRequest | ExportedTableCreationResponse | Add columns to the given table using the given column specifications and the updateView table operation. |
| Select | SelectOrUpdateRequest | ExportedTableCreationResponse | Select the given columns from the given table. |
| UpdateBy | UpdateByRequest | ExportedTableCreationResponse | Returns the result of an updateBy table operation. |
| SelectDistinct | SelectDistinctRequest | ExportedTableCreationResponse | Returns a new table definition with the unique tuples of the specified columns. |
| Filter | FilterTableRequest | ExportedTableCreationResponse | Filter parent table with structured filters. |
| UnstructuredFilter | UnstructuredFilterTableRequest | ExportedTableCreationResponse | Filter parent table with unstructured filters. |
| Sort | SortTableRequest | ExportedTableCreationResponse | Sort parent table via the provided sort descriptors. |
| Head | HeadOrTailRequest | ExportedTableCreationResponse | Extract rows from the head of the parent table. |
| Tail | HeadOrTailRequest | ExportedTableCreationResponse | Extract rows from the tail of the parent table. |
| HeadBy | HeadOrTailByRequest | ExportedTableCreationResponse | Run the headBy table operation for the given group by columns on the given table. |
| TailBy | HeadOrTailByRequest | ExportedTableCreationResponse | Run the tailBy operation for the given group by columns on the given table. |
| Ungroup | UngroupRequest | ExportedTableCreationResponse | Ungroup the given columns (all columns will be ungrouped if columnsToUngroup is empty or unspecified). |
| MergeTables | MergeTablesRequest | ExportedTableCreationResponse | Create a merged table from the given input tables. If a key column is provided (not null), a sorted merge will be performed using that column. |
| CrossJoinTables | CrossJoinTablesRequest | ExportedTableCreationResponse | Returns the result of a cross join operation. Also known as the cartesian product. |
| NaturalJoinTables | NaturalJoinTablesRequest | ExportedTableCreationResponse | Returns the result of a natural join operation. |
| ExactJoinTables | ExactJoinTablesRequest | ExportedTableCreationResponse | Returns the result of an exact join operation. |
| LeftJoinTables | LeftJoinTablesRequest | ExportedTableCreationResponse | Returns the result of a left join operation. |
| AsOfJoinTables | AsOfJoinTablesRequest | ExportedTableCreationResponse | Returns the result of an as-of join operation. Deprecated: Please use AjTables or RajTables. |
| AjTables | AjRajTablesRequest | ExportedTableCreationResponse | Returns the result of an aj operation. |
| RajTables | AjRajTablesRequest | ExportedTableCreationResponse | Returns the result of a raj operation. |
| MultiJoinTables | MultiJoinTablesRequest | ExportedTableCreationResponse | Returns the result of a multi-join operation. |
| RangeJoinTables | RangeJoinTablesRequest | ExportedTableCreationResponse | Returns the result of a range join operation. |
| ComboAggregate | ComboAggregateRequest | ExportedTableCreationResponse | Returns the result of an aggregate table operation. Deprecated: Please use AggregateAll or Aggregate instead. |
| AggregateAll | AggregateAllRequest | ExportedTableCreationResponse | Aggregates all non-grouping columns against a single aggregation specification. |
| Aggregate | AggregateRequest | ExportedTableCreationResponse | Produce an aggregated result by grouping the source_id table according to the group_by_columns and applying aggregations to each resulting group of rows. The result table will have one row per group, ordered by the encounter order within the source_id table, thereby ensuring that the row key for a given group never changes. |
| Snapshot | SnapshotTableRequest | ExportedTableCreationResponse | Takes a single snapshot of the source_id table. |
| SnapshotWhen | SnapshotWhenTableRequest | ExportedTableCreationResponse | Snapshot base_id, triggered by trigger_id, and export the resulting new table. The trigger_id table's change events cause a new snapshot to be taken. The result table includes a "snapshot key" which is a subset (possibly all) of the base_id table's columns. The remaining columns in the result table come from the base_id table, the table being snapshotted. |
| Flatten | FlattenRequest | ExportedTableCreationResponse | Returns a new table with a flattened row set. |
| RunChartDownsample | RunChartDownsampleRequest | ExportedTableCreationResponse | Downsamples a table, assuming its contents will be rendered in a run chart, with each subsequent row holding a later X value (i.e., sorted on that column). Multiple Y columns can be specified, as can a range of values for the X column to support zooming in. |
| CreateInputTable | CreateInputTableRequest | ExportedTableCreationResponse | Creates a new Table based on the provided configuration. This can be used as a regular table from the other methods in this interface, or can be interacted with via the InputTableService to modify its contents. |
| WhereIn | WhereInRequest | ExportedTableCreationResponse | Filters the left table based on the set of values in the right table. Note that when the right table ticks, all of the rows in the left table are going to be re-evaluated, thus the intention is that the right table is fairly slow moving compared with the left table. |
| Batch | BatchTableRequest | ExportedTableCreationResponse stream | Batch a series of requests and send them all at once. This enables the user to create intermediate tables without requiring them to be exported and managed by the client. The server will automatically release any tables when they are no longer depended upon. |
| ExportedTableUpdates | ExportedTableUpdatesRequest | ExportedTableUpdateMessage stream | Establish a stream of table updates for cheap notifications of table size updates. New streams will flush updates for all existing table exports. An export id of zero will be sent to indicate all exports have sent their refresh update. Table updates may be intermingled with initial refresh updates after their initial update had been sent. |
| SeekRow | SeekRowRequest | SeekRowResponse | Seek a row number within a table. |
| MetaTable | MetaTableRequest | ExportedTableCreationResponse | Returns the meta table of a table. |
| ComputeColumnStatistics | ColumnStatisticsRequest | ExportedTableCreationResponse | Returns a new table representing statistics about a single column of the provided table. This result table will be static - use Aggregation() instead for updating results. Presently, the primary use case for this is the Deephaven Web UI. |
| Slice | SliceRequest | ExportedTableCreationResponse | Returns a new table representing a sliced subset of the original table. The start position is inclusive and the end position is exclusive. If a negative value is given, then the position is counted from the end of the table. |
| Method Name | Option |
| --- | --- |
| AsOfJoinTables | true |
| ComboAggregate | true |
An opaque identifier that the service can use to retrieve a particular
portion of a stream.
| Field | Type | Label | Description |
| --- | --- | --- | --- |
| ticket | bytes |  |  |
| Field | Type | Label | Description |
| --- | --- | --- | --- |
| ticket | Ticket |  |  |
| type | string |  | The type. An empty string means that it is not known, not that the server chose to not set it. |
| .proto Type | Notes | C++ | Java | Python | Go | C# | PHP | Ruby |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| double |  | double | double | float | float64 | double | float | Float |
| float |  | float | float | float | float32 | float | float | Float |
| int32 | Uses variable-length encoding. Inefficient for encoding negative numbers – if your field is likely to have negative values, use sint32 instead. | int32 | int | int | int32 | int | integer | Bignum or Fixnum (as required) |
| int64 | Uses variable-length encoding. Inefficient for encoding negative numbers – if your field is likely to have negative values, use sint64 instead. | int64 | long | int/long | int64 | long | integer/string | Bignum |
| uint32 | Uses variable-length encoding. | uint32 | int | int/long | uint32 | uint | integer | Bignum or Fixnum (as required) |
| uint64 | Uses variable-length encoding. | uint64 | long | int/long | uint64 | ulong | integer/string | Bignum or Fixnum (as required) |
| sint32 | Uses variable-length encoding. Signed int value. These more efficiently encode negative numbers than regular int32s. | int32 | int | int | int32 | int | integer | Bignum or Fixnum (as required) |
| sint64 | Uses variable-length encoding. Signed int value. These more efficiently encode negative numbers than regular int64s. | int64 | long | int/long | int64 | long | integer/string | Bignum |
| fixed32 | Always four bytes. More efficient than uint32 if values are often greater than 2^28. | uint32 | int | int | uint32 | uint | integer | Bignum or Fixnum (as required) |
| fixed64 | Always eight bytes. More efficient than uint64 if values are often greater than 2^56. | uint64 | long | int/long | uint64 | ulong | integer/string | Bignum |
| sfixed32 | Always four bytes. | int32 | int | int | int32 | int | integer | Bignum or Fixnum (as required) |
| sfixed64 | Always eight bytes. | int64 | long | int/long | int64 | long | integer/string | Bignum |
| bool |  | bool | boolean | boolean | bool | bool | boolean | TrueClass/FalseClass |
| string | A string must always contain UTF-8 encoded or 7-bit ASCII text. | string | String | str/unicode | string | string | string | String (UTF-8) |
| bytes | May contain any arbitrary sequence of bytes. | string | ByteString | str | []byte | ByteString | string | String (ASCII-8BIT) |
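The sint32/sint64 efficiency claim in the table comes from ZigZag encoding, specified in the Protocol Buffers encoding documentation: signed values are mapped to small unsigned values before varint encoding, so small negative numbers stay short on the wire. A short demonstration (the helper names are illustrative):

```python
def zigzag_encode(n: int, bits: int = 32) -> int:
    """ZigZag-map a signed int to an unsigned one, as sint32/sint64 do
    before varint encoding: 0 -> 0, -1 -> 1, 1 -> 2, -2 -> 3, ..."""
    return (n << 1) ^ (n >> (bits - 1))

def varint_len(u: int) -> int:
    """Number of bytes a varint takes for an unsigned value (7 bits/byte)."""
    return max(1, (u.bit_length() + 6) // 7)

zigzag_encode(-1)      # -> 1 (sint32 stores -1 in a single varint byte)
zigzag_encode(1)       # -> 2
varint_len(2**64 - 1)  # -> 10 (what int32's -1 costs: sign-extended to 64 bits)
```

This is why the table recommends sint32/sint64 for fields that are likely to hold negative values.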