Parquet Cheat Sheet

An optional compression codec can be specified when writing. Valid values are:

  • LZ4: Compression codec loosely based on the LZ4 compression algorithm, but with an additional undocumented framing scheme. The framing is part of the original Hadoop compression library and was historically copied first in parquet-mr, then emulated with mixed results by parquet-cpp.
  • LZO: Compression codec based on or interoperable with the LZO compression library.
  • GZIP: Compression codec based on the GZIP format (not the closely-related "zlib" or "deflate" formats) defined by RFC 1952.
  • ZSTD: Compression codec with the highest compression ratio based on the Zstandard format defined by RFC 8478.

All of the above codecs are available when reading, plus LEGACY:

  • LEGACY: Load any binary fields as strings. Helpful to load files written in older versions of Parquet that lacked a distinction between binary and string.
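None of the snippets below demonstrates LEGACY, so here is a minimal sketch of reading an older file with its binary fields loaded as strings. It assumes the same `readTable(path, codecName)` form used in the snippets below; the file path is illustrative and requires a running Deephaven environment.

```python
from deephaven.ParquetTools import readTable

# Read a file written by an older Parquet writer, loading binary
# columns as strings (illustrative path)
legacy_source = readTable("/data/legacy_output.parquet", "LEGACY")
```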
# Create a table
from deephaven.TableTools import newTable, intCol, stringCol
from deephaven.ParquetTools import writeTable, readTable

source = newTable(
    stringCol("X", "A", "B", "B", "C", "B", "A", "B", "B", "C"),
    intCol("Y", 2, 4, 2, 1, 2, 3, 4, 2, 3),
    intCol("Z", 55, 76, 20, 4, 230, 50, 73, 137, 214),
)

# Write to a local file
writeTable(source, "/data/output.parquet")

# Write to a local file with compression
writeTable(source, "/data/output_GZIP.parquet", "GZIP")

# Read from a local file
source = readTable("/data/output.parquet")

# Read from a local compressed file
source = readTable("/data/output_GZIP.parquet", "GZIP")

# Read an entire directory of Parquet files
# Only files with a `.parquet` extension or `_common_metadata` and `_metadata` files should be located in these directories.
# All files ending with `.parquet` need the same schema.
source = readTable("/data/examples/Pems/parquet/pems")