Import CSV Files

This guide demonstrates how to generate a schema from a CSV data file, deploy it, and import a file from the command line. These commands assume a typical Deephaven installation and a sample file located at /data/sample.csv.

Generate a schema from a sample CSV data file

See the Schema inference page for a guide on generating a schema from a CSV file.

Deploy the schema

sudo -u irisadmin /usr/illumon/latest/bin/dhconfig schema import --file /tmp/CSVExampleNamespace.CSVExampleTableName.schema

Import a single data file into the specified Intraday partition

sudo -u dbmerge /usr/illumon/latest/bin/iris_exec csv_import -- --namespace CSVExampleNamespace --tableName CSVExampleTableName --sourceFile /data/sample.csv --destinationPartition localhost/2018-09-26

CSV Import Query

Note

At this time, CSV Import Queries are only available in the Legacy Engine.

To create a CSV Import Query, click the +New button above the Query List in the Query Monitor and select the type Import - CSV.


  • Select a DB Server, choose an Engine (Core+ or Legacy), and enter the desired value for Memory (Heap) Usage (GB).
  • Options available in the Show Advanced Options section of the panel are typically not used when importing or merging data.
  • The Permissions tab presents a panel with the same options as all other configuration types, and gives the query owner the ability to authorize Admin and Viewer Groups for this query.
  • Clicking the Scheduling tab presents a panel with the same scheduling options as all other configuration types.
  • Clicking the CSV Import Settings tab presents a panel with the options pertaining to importing a CSV file:


CSV Import Settings

  • Namespace: This is the namespace into which you want to import the file.
  • Table: This is the table into which you want to import the data.
  • Import Source: This is the import source section of the associated schema file that specifies how source data columns will be set as Deephaven columns.
    • Strict Import will fail if a file being imported has missing column values (nulls), unless those columns allow nulls or replace them with a default attribute in the ImportColumn definition.
  • Output Mode: This determines what happens if data is found in the fully-specified partition for the data. The fully-specified partition includes both the internal partition (unique for the import job) and the column partition (usually the date).
    • Safe - The import job will fail if existing data is found in the fully-specified partition.
    • Append - If existing data is found in the fully-specified partition, the new data is appended to it.
    • Replace - Data will be replaced if existing data is found in the fully-specified partition. This does not replace all data for a column partition value, just the data in the fully-specified partition.
  • Single/Multi Partition: This controls the import mode. In single-partition mode, all of the data is imported into a single Intraday partition. In multi-partition mode, you must specify a column in the source data that controls to which partition each row is imported.
  • Single-partition configuration:
    • Partition Formula: This is the formula needed to partition the CSV being imported. If a specific partition value is used, it will need to be surrounded by quotes. For example: currentDateNy() or "2017-01-01"
    • Partition Substitution: This is a token used to substitute the determined column partition value in the source directory, source file, or source glob, to allow the dynamic determination of these fields. For example, if the partition substitution is "PARTITION_SUB", and the source directory includes "PARTITION_SUB" in its value, that PARTITION_SUB will be replaced with the partition value determined from the partition formula.
    • Substitution Date Format: This is the date format that will be used when a Partition Substitution is used. The standard Deephaven date partition format is yyyy-MM-dd (e.g., 2018-05-30), but this allows substitution in another format. For example, if the filename includes the date in yyyyddMM format instead (e.g., 20183005), that could be used in the Substitution Date Format field. All the patterns from the Java DateTimeFormatter class are allowed.
  • Multi-partition configuration:
    • Import Partition Column: This is the name of the database column used to choose the target partition for each row (typically "Date"). There must be a corresponding Import Column present in the schema, which will indicate how to get this value from the source data.
  • File Format: This is the format of the data in the CSV file being imported. Options include DEFAULT, TRIM, EXCEL, TDF, MYSQL, RFC4180, and BPIPE*.
  • Delimiter: This can be used to specify a custom delimiter character if something other than a comma is used in the file.
  • Skip Lines: This is the number of lines to skip before beginning to parse (before the header line, if any).
  • Skip Footer Lines: This is the number of footer lines to skip at the end of the file.
  • Source Directory: This is the path to where the CSV file is stored on the server on which the query will run.
  • Source File: This is the name of the CSV file to import.
  • Source Glob: This is an expression used to match multiple CSV file names.
  • Constant Value: A String of data to make available as a pseudo-column to fields using the CONSTANT sourceType.

* Note: BPIPE is the format used for Bloomberg's Data License product.
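The interaction of Partition Formula, Partition Substitution, and Substitution Date Format can be sketched in a few lines of Python. This is an illustration of the behavior described above, not Deephaven's implementation; it handles only the yyyy, MM, and dd pattern letters, whereas the importer accepts the full Java DateTimeFormatter pattern set.

```python
from datetime import datetime

def reformat_partition(value, java_pattern):
    """Reformat a yyyy-MM-dd column partition value using a simplified
    Java DateTimeFormatter-style pattern (yyyy, MM, dd only)."""
    d = datetime.strptime(value, "%Y-%m-%d")
    return (java_pattern
            .replace("yyyy", d.strftime("%Y"))
            .replace("MM", d.strftime("%m"))
            .replace("dd", d.strftime("%d")))

def substitute_partition(template, partition_value, token="PARTITION_SUB"):
    """Replace the substitution token in a source directory, file, or glob."""
    return template.replace(token, partition_value)

# The hypothetical filename embeds the date as yyyyddMM (e.g., 20183005),
# so the yyyy-MM-dd partition value is reformatted before substitution.
partition = reformat_partition("2018-05-30", "yyyyddMM")          # "20183005"
path = substitute_partition("/data/trades_PARTITION_SUB.csv", partition)
# path == "/data/trades_20183005.csv"
```

The file path and token value here are examples only; the token can be any string that appears verbatim in the source directory, source file, or source glob.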

Import from Command Line

CSV imports can be performed directly from the command line, using the iris_exec tool.

Command Reference

iris_exec csv_import <launch args> -- <csv import args>

CSV Import Arguments

The CSV importer takes the following arguments:

  • -dd or --destinationDirectory <path>
  • -dp or --destinationPartition <internal partition name>/<partitioning value> | <internal partition name>
  • -pc or --intradayPartition <partition column name>
Either a destination directory, specific partition, or internal partition plus a partition column must be provided. A directory can be used to write a new set of table files to a specific location on disk, where they can later be read with TableTools. A destination partition is used to write to intraday locations for existing tables. The internal partition value is used to separate data on disk; it does not need to be unique for a table. The name of the import server is a common value for this. The partitioning value is a string data value used to populate the partitioning column in the table during the import. This value must be unique within the table. In summary, there are three ways to specify destination table partition(s):
  1. Destination directory (e.g., -dd /db/Intraday/<namespace>/<table>/localhost/<date>/<table>/)
  2. Internal partition and destination partition (e.g., -dp localhost/2018-01-01)
  3. Internal partition and partition column - for multi-partition import (e.g., -dp localhost -pc Date)
-ns or --namespace <namespace>: (Required) Namespace in which to find the target table.
-tn or --tableName <name>: (Required) Name of the target table.
-fd or --delimiter <delimiter character>: (Optional) Field delimiter. Allows specification of a character other than the file format default as the field delimiter. If a delimiter is specified, fileFormat is ignored. This must be a single character.
-ff or --fileFormat <format name>: (Optional) The Apache Commons CSV parser is used to parse the file itself. Six common formats are supported:
  • DEFAULT – default format if none is specified; comma-separated field values, newline row terminators, double-quotes around field values that contain embedded commas, newline characters, or double-quotes.
  • TRIM - Similar to DEFAULT, but will trim all whitespace around values.
  • EXCEL – Microsoft Excel CSV format.
  • MYSQL – MySQL CSV format.
  • RFC4180 – IETF RFC 4180 MIME text/csv format.
  • TDF – Tab-delimited format.
-om or --outputMode <import behavior>: (Optional)
  • SAFE (default) - checks whether the target table already contains data matching the provided partitioning value; if it does, the import is aborted.
  • REPLACE should be used when developing an import process, because failed import attempts will often write some data to the table, causing the next attempt with SAFE to abort.
  • APPEND should normally be used only when you are running multiple imports to load a set of data to the same table at one time, possibly from multiple different sources, and the resultant data needs to be kept together as part of one logical partition.
-rc or --relaxedChecking <TRUE or FALSE>: (Optional) Defaults to FALSE. If TRUE, allows target columns that are missing from the source CSV to remain null, and allows the import to continue when data conversions fail. In most cases, this should be set to TRUE only when developing the import process for a new data source.
  • -sd or --sourceDirectory <path>
  • -sf or --sourceFile <exact file name>
  • -sg or --sourceGlob <file name pattern>
sourceDirectory, sourceFile, and sourceGlob are all optional. If none of these are provided, the system will attempt to do a multi-file import. Otherwise, sourceDirectory will be used in conjunction with sourceFile or sourceGlob. If sourceDirectory is not provided, but sourceFile is, then sourceFile will be used as a fully qualified file name. If sourceDirectory is not provided, but sourceGlob is, then sourceDirectory will default to the configured log file directory from the prop file being used.
-sn or --sourceName <ImportSource name>: Specific ImportSource to use. If not specified, the importer will use the first ImportSource block it finds that matches the type of the import (CSV/XML/JSON/JDBC).
-tr: Similar to the TRIM file format, but adds leading/trailing whitespace trimming to any format. So, for a comma-delimited file with extra whitespace, -ff TRIM would be sufficient, but for a file using something other than a comma as its delimiter, the -tr option would be used in addition to -ff or -fd.
-cv or --constantColumnValue <constant column value>: A literal value to use for the import column with sourceType="CONSTANT", if the destination schema requires it.
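The precedence among sourceDirectory, sourceFile, and sourceGlob can be sketched as follows. This is an illustration of the rules described above, not the importer's actual code, and the default log directory shown is a placeholder for whatever the property file configures.

```python
import glob
import os

def resolve_source_files(source_directory=None, source_file=None,
                         source_glob=None,
                         log_file_directory="/var/log/deephaven"):
    """Sketch of the sourceDirectory/sourceFile/sourceGlob precedence.
    log_file_directory stands in for the configured log file directory
    from the prop file; the value here is a placeholder."""
    if source_directory is None and source_file is not None:
        # sourceFile alone is treated as a fully qualified file name
        return [source_file]
    if source_glob is not None:
        # without sourceDirectory, the glob is resolved against the
        # configured log file directory
        base = source_directory if source_directory is not None else log_file_directory
        return sorted(glob.glob(os.path.join(base, source_glob)))
    if source_directory is not None and source_file is not None:
        return [os.path.join(source_directory, source_file)]
    # none provided: the importer attempts a multi-file import
    return None
```

For example, `resolve_source_files(source_file="/data/sample.csv")` yields the fully qualified path, while passing neither argument signals a multi-file import.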

One special case when importing CSV data is columns in the target table with a type of Boolean.

  • By default, the CSV importer will attempt to interpret string data from the source file with 0, F, f, or any case of false, being written as a Boolean false.
  • Similarly, 1, T, t, or any case of true would be a Boolean true.
  • Any other value would result in a conversion failure that may be continuable, if, for instance, a default is set for the column.
  • To convert from other representations of true/false, for example from foreign language strings, a formula or transform would be needed, along with a sourceType="String" to ensure the reading of the data from the CSV field is handled correctly.
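The default Boolean conversion rule above can be sketched as follows. This is an illustration of the documented behavior only, not Deephaven's implementation; in particular, the real importer may treat a conversion failure as continuable when a default is set for the column, rather than raising an error.

```python
def parse_csv_boolean(value):
    """Sketch of the default CSV-to-Boolean conversion rule described above."""
    s = value.strip()
    # 0, F, f, or any case of "false" -> False
    if s in ("0", "F", "f") or s.lower() == "false":
        return False
    # 1, T, t, or any case of "true" -> True
    if s in ("1", "T", "t") or s.lower() == "true":
        return True
    # any other value is a conversion failure
    raise ValueError(f"cannot convert {value!r} to Boolean")
```

Values such as foreign-language strings fall through to the failure case, which is why they need a formula or transform with sourceType="String" as noted above.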