Read Parquet files into Deephaven tables

Deephaven integrates seamlessly with Parquet via the Parquet Groovy module, making it easy to read Parquet files directly into Deephaven tables. This document covers reading data into tables from single Parquet files, flat partitioned Parquet directories, and key-value partitioned Parquet directories, from both local storage and S3, a common use case.

Note

Much of this document covers reading Parquet files from S3. For the best performance, the Deephaven instance should be running in the same AWS region as the S3 bucket. Performance can be improved further by using S3 directory buckets, which localize all data to a single Availability Zone, and running the Deephaven instance in that same Availability Zone. See this article for more information on S3 directory buckets.

Read a single Parquet file

Reading a single Parquet file involves loading data from one specific file into a table. This is straightforward and efficient when dealing with a relatively small dataset or when the data is consolidated into one file.

From local storage

Read single Parquet files into Deephaven tables with ParquetTools.readTable. The method takes a single required argument, source, which gives the full file path of the Parquet file.
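A minimal sketch, assuming a local file at a hypothetical path:

```groovy
import io.deephaven.parquet.table.ParquetTools

// Read a single Parquet file into a Deephaven table.
// The file path below is a hypothetical example.
table = ParquetTools.readTable("/data/examples/crypto_trades.parquet")
```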

From S3

Deephaven provides some tooling around reading from S3 with the io.deephaven.extensions.s3 Groovy module. This module contains the S3Instructions class, which is used to establish communication with the S3 instance. Learn more about this class in the Parquet instructions document.

Use ParquetTools.readTable to read a single Parquet file from S3, where the source argument is the URI of the Parquet file on the S3 instance. Build a ParquetInstructions object with ParquetInstructions.Builder, supplying the setSpecialInstructions method with an instance of the S3Instructions class to specify the details of the connection to the S3 instance. Learn more about ParquetInstructions in the Parquet instructions document.
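For example, the following sketch reads a single Parquet file from a public S3 bucket with anonymous credentials. The region, bucket, and key are hypothetical placeholders:

```groovy
import io.deephaven.extensions.s3.Credentials
import io.deephaven.extensions.s3.S3Instructions
import io.deephaven.parquet.table.ParquetInstructions
import io.deephaven.parquet.table.ParquetTools

// Details of the connection to the S3 instance; the region, credentials,
// bucket, and key are hypothetical.
s3Instructions = S3Instructions.builder()
    .regionName("us-east-1")
    .credentials(Credentials.anonymous())
    .build()

// Attach the S3 connection details to the Parquet read instructions.
readInstructions = ParquetInstructions.builder()
    .setSpecialInstructions(s3Instructions)
    .build()

table = ParquetTools.readTable("s3://example-bucket/data/example.parquet", readInstructions)
```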

Partitioned Parquet directories

Deephaven supports reading partitioned Parquet directories. A partitioned Parquet directory organizes data into subdirectories based on one or more partition columns. This structure allows for more efficient data querying by pruning irrelevant partitions, leading to faster read times than a single Parquet file. Parquet data can be read into Deephaven tables from a flat partitioned directory or a key-value partitioned directory. Deephaven can also use Parquet metadata files, which boosts performance significantly.

When a partitioned Parquet directory is read into a Deephaven table, Deephaven represents the ingested data as a partitioned table. Deephaven's partitioned tables are efficient representations of partitioned datasets and provide many useful methods for working with such data. See the guide on partitioned tables for more information.

Read a key-value partitioned Parquet directory

Key-value partitioned Parquet directories extend partitioning by organizing data based on key-value pairs in the directory structure. This allows for highly granular and flexible data access patterns, providing efficient querying for complex datasets. The downside is the added complexity in managing and maintaining the key-value pairs, which can be more intricate than other partitioning methods.
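For example, a key-value partitioned dataset encodes each partition column as a key=value directory name (a hypothetical layout):

```
sales/
├── region=east/
│   ├── year=2023/
│   │   └── data.parquet
│   └── year=2024/
│       └── data.parquet
└── region=west/
    └── year=2023/
        └── data.parquet
```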

From local storage

Use ParquetTools.readTable to read a key-value partitioned Parquet directory into a Deephaven partitioned table. ParquetTools.readTable can infer the directory structure automatically. Alternatively, declare the layout explicitly by supplying the readInstructions argument with a ParquetInstructions object built using setFileLayout(ParquetFileLayout.valueOf("KV_PARTITIONED")). Declaring the layout boosts performance, as no computation is required to infer the directory layout.
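A sketch of both approaches, assuming ParquetFileLayout is the enum nested in ParquetInstructions and the directory path is hypothetical:

```groovy
import io.deephaven.parquet.table.ParquetInstructions
import io.deephaven.parquet.table.ParquetInstructions.ParquetFileLayout
import io.deephaven.parquet.table.ParquetTools

// Let readTable infer the key-value layout from the directory structure.
inferred = ParquetTools.readTable("/data/sales")

// Declare the layout explicitly so no inference is required.
kvInstructions = ParquetInstructions.builder()
    .setFileLayout(ParquetFileLayout.valueOf("KV_PARTITIONED"))
    .build()

declared = ParquetTools.readTable("/data/sales", kvInstructions)
```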

If the key-value partitioned Parquet directory contains _common_metadata and _metadata files, utilize them by building the ParquetInstructions with setFileLayout(ParquetFileLayout.valueOf("METADATA_PARTITIONED")). This is the most performant option when the metadata files are available.
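The only change from the sketch above is the declared layout:

```groovy
import io.deephaven.parquet.table.ParquetInstructions
import io.deephaven.parquet.table.ParquetInstructions.ParquetFileLayout
import io.deephaven.parquet.table.ParquetTools

// Use the _common_metadata and _metadata files for the fastest reads.
// The directory path is hypothetical.
metadataInstructions = ParquetInstructions.builder()
    .setFileLayout(ParquetFileLayout.valueOf("METADATA_PARTITIONED"))
    .build()

table = ParquetTools.readTable("/data/sales", metadataInstructions)
```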

From S3

Use ParquetTools.readTable to read a key-value partitioned Parquet directory from S3. Supply the setSpecialInstructions method with an instance of the S3Instructions class, and supply the setFileLayout method with ParquetFileLayout.KV_PARTITIONED for maximum performance.
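A sketch combining the S3 connection details with an explicit layout; the region, credentials, and bucket are hypothetical:

```groovy
import io.deephaven.extensions.s3.Credentials
import io.deephaven.extensions.s3.S3Instructions
import io.deephaven.parquet.table.ParquetInstructions
import io.deephaven.parquet.table.ParquetInstructions.ParquetFileLayout
import io.deephaven.parquet.table.ParquetTools

// Details of the connection to the S3 instance (hypothetical values).
s3Instructions = S3Instructions.builder()
    .regionName("us-east-1")
    .credentials(Credentials.anonymous())
    .build()

// Declare the key-value layout so no inference is required.
readInstructions = ParquetInstructions.builder()
    .setSpecialInstructions(s3Instructions)
    .setFileLayout(ParquetFileLayout.KV_PARTITIONED)
    .build()

table = ParquetTools.readTable("s3://example-bucket/sales/", readInstructions)
```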

S3-hosted key-value partitioned Parquet datasets may also have _common_metadata and _metadata files. Utilize them by supplying the setFileLayout method with ParquetFileLayout.valueOf("METADATA_PARTITIONED").
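Reusing the hypothetical s3Instructions from the sketch above, only the declared layout changes:

```groovy
// Swap in the metadata-based layout when the metadata files are present.
readInstructions = ParquetInstructions.builder()
    .setSpecialInstructions(s3Instructions)
    .setFileLayout(ParquetFileLayout.valueOf("METADATA_PARTITIONED"))
    .build()

table = ParquetTools.readTable("s3://example-bucket/sales/", readInstructions)
```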

Read a flat partitioned Parquet directory

A flat partitioned Parquet directory stores data without nested subdirectories. Each file contains partition information within its filename or as metadata. This approach simplifies directory management compared to hierarchical partitioning but can lead to larger directory listings, which might affect performance with many partitions.
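For example, a flat partitioned dataset might carry the partition in each filename (a hypothetical layout):

```
quotes/
├── quotes_2023-01.parquet
├── quotes_2023-02.parquet
└── quotes_2023-03.parquet
```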

From local storage

Read local flat partitioned Parquet directories into Deephaven tables with ParquetTools.readTable. Supply the readInstructions argument with a ParquetInstructions object built using setFileLayout(ParquetFileLayout.valueOf("FLAT_PARTITIONED")) for maximum performance.
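A sketch with a hypothetical directory path:

```groovy
import io.deephaven.parquet.table.ParquetInstructions
import io.deephaven.parquet.table.ParquetInstructions.ParquetFileLayout
import io.deephaven.parquet.table.ParquetTools

// Declare the flat layout so no inference is required.
flatInstructions = ParquetInstructions.builder()
    .setFileLayout(ParquetFileLayout.valueOf("FLAT_PARTITIONED"))
    .build()

table = ParquetTools.readTable("/data/quotes", flatInstructions)
```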

From S3

Use ParquetTools.readTable to read a flat partitioned Parquet directory from S3. Supply the setSpecialInstructions method with an instance of the S3Instructions class, and supply the setFileLayout method with ParquetFileLayout.FLAT_PARTITIONED for maximum performance.

If the S3-hosted flat partitioned Parquet dataset has _common_metadata and _metadata files, utilize them by supplying the setFileLayout method with ParquetFileLayout.METADATA_PARTITIONED.
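A sketch covering both cases; the region, credentials, and bucket are hypothetical:

```groovy
import io.deephaven.extensions.s3.Credentials
import io.deephaven.extensions.s3.S3Instructions
import io.deephaven.parquet.table.ParquetInstructions
import io.deephaven.parquet.table.ParquetInstructions.ParquetFileLayout
import io.deephaven.parquet.table.ParquetTools

// Details of the connection to the S3 instance (hypothetical values).
s3Instructions = S3Instructions.builder()
    .regionName("us-east-1")
    .credentials(Credentials.anonymous())
    .build()

// Use FLAT_PARTITIONED here, or METADATA_PARTITIONED when the
// _common_metadata and _metadata files are present.
readInstructions = ParquetInstructions.builder()
    .setSpecialInstructions(s3Instructions)
    .setFileLayout(ParquetFileLayout.FLAT_PARTITIONED)
    .build()

table = ParquetTools.readTable("s3://example-bucket/quotes/", readInstructions)
```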