What namespace do system tables created by a PQ (Persistent Query) get put in?

In short:
- PQ itself: No namespace (it's a query execution mechanism).
- Enterprise: You specify the namespace and table name when saving data (merge queries or user tables).
- Community: You specify the file path/location when exporting.
This question touches on some important concepts in Deephaven:
1. Persistent Queries and Namespaces
Persistent Queries themselves do not have a namespace. However, any system tables a PQ creates or modifies are stored according to the rules below.
2. Deephaven Enterprise - Table Storage
In Deephaven Enterprise, you control where tables are stored by specifying the namespace and table name when saving data:
For merge queries: Configure the output location in the merge settings by specifying:
- Namespace: The target namespace for the merged data.
- Table: The target table name.

For user tables: Use the Database APIs to save tables in the format `Namespace.TableName`. You can choose between:
- System namespaces: Managed by administrators through structured processes.
- User namespaces: Managed directly by users with appropriate privileges.
Example of removing a user table's schema with the `dhconfig` CLI:

```
dhconfig schemas delete UserNamespace.TableName --force
```
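To reference a saved table from a query, here is a minimal sketch, assuming a Core+ Python worker where the `db` object is in scope; `MarketData.Quotes` is a hypothetical Namespace.TableName pair, so substitute your own:

```python
# Assumes a Deephaven Enterprise Core+ Python worker with `db` in scope;
# "MarketData"/"Quotes" are hypothetical namespace/table names.
live = db.live_table("MarketData", "Quotes")        # intraday (live) data
hist = db.historical_table("MarketData", "Quotes")  # merged historical data
```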
3. Deephaven Community - Table Storage Location
In Deephaven Community, you specify the location where tables are stored when using data export methods:
- Parquet files: Use `parquet.write`, `parquet.write_partitioned`, or `parquet.batch_write` with a specified `path` parameter (local filesystem or S3).
- CSV files: Use `write_csv` with a specified destination path.
- Other formats: Various export functions allow you to specify the output location.
Example:
```python
from deephaven import empty_table, parquet

# A small stand-in table to export; use your own table in practice
my_table = empty_table(10).update(["X = i", "Y = i * 2"])

# You specify exactly where the table gets written
parquet.write(table=my_table, path="/data/output/my_table.parquet")

# For S3 storage
from deephaven.experimental import s3

credentials = s3.Credentials.basic(
    access_key_id="your_access_key", secret_access_key="your_secret_key"
)
parquet.write(
    table=my_table,
    path="s3://your-bucket/my_table.parquet",
    special_instructions=s3.S3Instructions(
        region_name="us-east-1",
        credentials=credentials,
    ),
)
```
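The same pattern applies to CSV export. A minimal sketch, assuming the top-level `write_csv` helper; the table and the destination path here are illustrative:

```python
from deephaven import empty_table, write_csv

# Stand-in table; substitute your own
t = empty_table(5).update(["X = i"])

# As with Parquet, you choose the destination path
write_csv(t, "/data/output/my_table.csv")
```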