How to write and read single Parquet files
This guide will show you how to write data from a Deephaven table to a single Parquet file, and how to read that file back into a table, using the write and read methods.
The basic syntax follows:
write(source, "/data/output.parquet")
write(source, "/data/output_GZIP.parquet", compression_codec_name="GZIP")
read("/data/output.parquet")
read("/data/output_GZIP.parquet")
Write a table to a Parquet file
Let's create a table to write that contains student names, test scores, and GPAs.
from deephaven import new_table
from deephaven.column import int_col, double_col, string_col
grades = new_table([
    string_col("Name", ["Ashley", "Jeff", "Rita", "Zach"]),
    int_col("Test1", [92, 78, 87, 74]),
    int_col("Test2", [94, 88, 81, 70]),
    int_col("Average", [93, 83, 84, 72]),
    double_col("GPA", [3.9, 2.9, 3.0, 1.8])
])
Now, use the write method to export the table to a Parquet file. write takes the following arguments:
- The table to be written. In this case, grades.
- The Parquet file to write to. In this case, /data/grades_GZIP.parquet.
- (Optional) compression_codec_name to specify the compression codec. Accepted values are:
  - SNAPPY: Aims for high speed and a reasonable amount of compression. Based on Google's Snappy compression format. If no codec is specified, it defaults to SNAPPY.
  - UNCOMPRESSED: The output will not be compressed.
  - LZ4_RAW: A codec based on the LZ4 block format. Should always be used instead of LZ4.
  - LZ4: Deprecated compression codec loosely based on the LZ4 compression algorithm, but with an additional undocumented framing scheme. The framing is part of the original Hadoop compression library and was historically copied first in parquet-mr, then emulated with mixed results by parquet-cpp. Use LZ4_RAW instead.
  - LZO: Compression codec based on or interoperable with the LZO compression library.
  - GZIP: Compression codec based on the GZIP format (not the closely related "zlib" or "deflate" formats) defined by RFC 1952.
  - ZSTD: Compression codec with the highest compression ratio, based on the Zstandard format defined by RFC 8478.
In this guide, we write data to locations relative to the base of the Deephaven Docker container. See Docker data volumes to learn more about the relationship between locations in the container and the local file system.
from deephaven.parquet import write
write(grades, "/data/grades_GZIP.parquet", compression_codec_name="GZIP")
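The codec argument is optional. As a quick sketch of the other accepted values, the calls below write the same table with the default SNAPPY codec and with ZSTD; the /data/grades.parquet and /data/grades_ZSTD.parquet file names are only illustrative.
# Default codec (SNAPPY) -- no compression_codec_name needed
write(grades, "/data/grades.parquet")
# ZSTD gives the highest compression ratio of the supported codecs
write(grades, "/data/grades_ZSTD.parquet", compression_codec_name="ZSTD")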
Read a Parquet file into a table
Now, use the read method to import the Parquet file as a table. read takes the following arguments:
- The Parquet file to read. In this case, /data/grades_GZIP.parquet.
- (Optional) parquetInstructions for codecs when the file type cannot be successfully inferred. Accepted values are:
  - SNAPPY: Aims for high speed and a reasonable amount of compression. Based on Google's Snappy compression format. If no codec is specified, it defaults to SNAPPY.
  - UNCOMPRESSED: The output will not be compressed.
  - LZ4_RAW: A codec based on the LZ4 block format. Should always be used instead of LZ4.
  - LZ4: Deprecated compression codec loosely based on the LZ4 compression algorithm, but with an additional undocumented framing scheme. The framing is part of the original Hadoop compression library and was historically copied first in parquet-mr, then emulated with mixed results by parquet-cpp. Use LZ4_RAW instead.
  - LZO: Compression codec based on or interoperable with the LZO compression library.
  - GZIP: Compression codec based on the GZIP format (not the closely related "zlib" or "deflate" formats) defined by RFC 1952.
  - ZSTD: Compression codec with the highest compression ratio, based on the Zstandard format defined by RFC 8478.
  - LEGACY: Load any binary fields as strings. Helpful for loading files written by older versions of Parquet that lacked a distinction between binary and string.
For more information on the file path, see Docker data volumes.
from deephaven.parquet import read
result = read("/data/grades_GZIP.parquet")
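If you also wrote the extra example files from the earlier sketch, they can be read back the same way; the codec is inferred from the file itself, so no additional arguments are needed.
# The call is identical regardless of which codec was used to write the file
result_snappy = read("/data/grades.parquet")
result_zstd = read("/data/grades_ZSTD.parquet")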
Read large Parquet files
When we load a Parquet file into a table, we do not load the whole file into RAM. This means that files much larger than the available RAM can be loaded as tables.
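As a minimal sketch (the /data/large_dataset.parquet path is hypothetical), reading a large file and then taking a small slice only materializes the rows you actually use:
from deephaven.parquet import read
# read maps the file rather than pulling it entirely into RAM
large = read("/data/large_dataset.parquet")
# Downstream operations such as head() only touch the rows they need
preview = large.head(10)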