Version: Java (Groovy)

Deephaven Community Tutorial

1. Ingest static data

Deephaven empowers you to work with both batch and streaming data using the same methods.

It supports ingesting data from CSVs and other delimited files, and reading Parquet files at rest. [Soon you’ll be able to read XML, access SQL databases via ODBC, and access Arrow buffers locally and via Flight.]

CSV ingestion is described in detail in our guide, How to import CSV files. The basic syntax is:

import static io.deephaven.csv.CsvTools.readCsv

namedTable = readCsv("")

Here is an introductory program that ingests weather data from a CSV URL, then calculates the average, low, and high temperatures by year.

Most of the script is simply to whet your appetite.
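The Deephaven script itself is not reproduced here, but the aggregation it performs is easy to sketch in plain Java (this is a concept sketch with hypothetical data, not the Deephaven API):

```java
import java.util.*;

// Sketch of the tutorial's aggregation: average, low, and high temperature
// per year. The arrays stand in for the CSV rows Deephaven would ingest.
public class WeatherByYear {
    // Returns year -> {average, low, high}
    static Map<Integer, double[]> stats(int[] years, double[] temps) {
        Map<Integer, List<Double>> byYear = new TreeMap<>();
        for (int i = 0; i < years.length; i++) {
            byYear.computeIfAbsent(years[i], y -> new ArrayList<>()).add(temps[i]);
        }
        Map<Integer, double[]> out = new TreeMap<>();
        for (Map.Entry<Integer, List<Double>> e : byYear.entrySet()) {
            double sum = 0, lo = Double.MAX_VALUE, hi = -Double.MAX_VALUE;
            for (double t : e.getValue()) {
                sum += t;
                lo = Math.min(lo, t);
                hi = Math.max(hi, t);
            }
            out.put(e.getKey(), new double[] {sum / e.getValue().size(), lo, hi});
        }
        return out;
    }

    public static void main(String[] args) {
        int[] years = {2020, 2020, 2021};
        double[] temps = {50.0, 70.0, 61.0};
        double[] y2020 = stats(years, temps).get(2020);
        System.out.printf("2020: avg=%.1f low=%.1f high=%.1f%n", y2020[0], y2020[1], y2020[2]);
    }
}
```

In Deephaven, the same result falls out of a single aggregation over the ingested table, with no hand-written grouping loop.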

To read data from a local or networked file system, simply specify the file path. The file path below assumes you downloaded pre-built Docker images that include Deephaven’s example data, as described in our Quick start.

This script accesses a million-row CSV of crypto trades from 09/22/2021.

This importer provides a variety of capabilities related to .txt, .psv, and other delimited files. The related Javadocs describe these usage patterns.

The table widget now in view is designed to be highly interactive:

  • Touch the table and filter via Ctrl + F (Windows) or ⌘ + F (Mac).
  • Touch the funnel icon to create sophisticated filters or use auto-filter UI features.
  • Hover on headers to see data types.
  • Click headers to access more options, like adding or changing sorts.
  • Click the Table Options menu at right to plot from the UI, create and manage columns, and download CSVs.

In addition to CSV, Deephaven supports reading Parquet files. Parquet files can be accessed via random access and therefore need not be read completely into memory. See our documentation about reading both single, flat files and multiple, partitioned Parquet files.

2. Ingest real-time streams

Providing you the ability to work with dynamic, updating, and real-time data is Deephaven’s superpower.

Users connect Kafka and other event streams, integrate enterprise and vendor data-source APIs and feeds, receive JSON from devices, and [soon] integrate with Change Data Capture (CDC) exhaust from RDBMSs.

Deephaven has a rich Kafka integration, supporting AVRO, JSON, dictionaries, and dynamics for historical, stream, and append tables. Our concept guide, Kafka in Deephaven, illustrates both the ease of use and the value.

Though there is much sophistication available, the basic syntax for Kafka integration is:

import io.deephaven.kafka.KafkaTools

kafkaProps = new Properties()
kafkaProps.put('bootstrap.servers', 'server_producer:port')

result = KafkaTools.consumeToTable(
    kafkaProps,
    'kafka.topic',
    KafkaTools.ALL_PARTITIONS,
    KafkaTools.ALL_PARTITIONS_DONT_SEEK,
    KafkaTools.Consume.FROM_PROPERTIES,
    KafkaTools.Consume.FROM_PROPERTIES,
    KafkaTools.TableType.append()
)

The code above is generic. Exploring all of the configuration options needed to hook up your local Kafka feed is beyond the scope of this tutorial.

The following script creates a fake, appending table of hypothetical crypto trades in a few instruments on a few exchanges. Because the data is simulated, you'll notice trade-event intervals are more uniform, and sizes a bit more regular, than in the real market.

New records are added every 25 milliseconds, back-populated to 30 minutes prior to the moment you run the script.

It’s fun to watch new data hit the screen, so let’s reverse the table.


This can also be done without a query. Click on a column header in the UI and choose Reverse Table.

Now that you have a few tables, the next section will introduce adding new columns to them and merging.

3. Create columns and merge tables

Let's examine the data a bit programmatically. Use countBy to see the row count of each table.

Table operations, methods, and other capabilities of the Deephaven table API are used identically for updating (streaming) tables and static ones!

This simple example illustrates this superpower:

You can easily eyeball the respective row counts by merging the tables. If you later need to merge and then sort tables, mergeSorted is recommended, as it is more efficient than merge followed by sort.
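The efficiency claim is easy to see in plain Java (a concept sketch, not the Deephaven API): two already-sorted inputs can be combined in one linear pass, with no re-sort afterward.

```java
import java.util.*;

// Single-pass merge of two sorted arrays: the idea behind mergeSorted.
public class MergeSorted {
    static int[] mergeSorted(int[] a, int[] b) {
        int[] out = new int[a.length + b.length];
        int i = 0, j = 0, k = 0;
        while (i < a.length && j < b.length) {
            out[k++] = (a[i] <= b[j]) ? a[i++] : b[j++]; // take the smaller head
        }
        while (i < a.length) out[k++] = a[i++]; // drain leftovers
        while (j < b.length) out[k++] = b[j++];
        return out;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(mergeSorted(new int[]{1, 4, 9}, new int[]{2, 3, 10})));
    }
}
```

A merge followed by a sort is O(n log n); the single pass above is O(n).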

Explore the schema and other metadata using getMeta.

Merging and sorting the metadata tables will highlight some differences in the respective schemas.

You can cast the three columns that are different in the crypto_streaming table using update, which is one of the five selection and projection operations available to you, as described in our guide, How to select, view, and update data.

Now that the schemas are identical across the three tables, let’s create one table of crypto data that has both updating and static data - the latter assembled using headPct and tailPct. The last two lines remove the legacy static tables.
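As a plain-Java sketch (not the Deephaven API), taking the first or last p percent of rows works like this; rounding up on fractional counts is an assumption of the sketch:

```java
import java.util.*;

// Sketch of taking the first / last p percent of rows, in the spirit of
// headPct and tailPct.
public class HeadTailPct {
    static <T> List<T> headPct(List<T> rows, double pct) {
        int n = (int) Math.ceil(rows.size() * pct);
        return new ArrayList<>(rows.subList(0, n));
    }

    static <T> List<T> tailPct(List<T> rows, double pct) {
        int n = (int) Math.ceil(rows.size() * pct);
        return new ArrayList<>(rows.subList(rows.size() - n, rows.size()));
    }

    public static void main(String[] args) {
        List<Integer> rows = List.of(1, 2, 3, 4, 5, 6, 7, 8, 9, 10);
        System.out.println(headPct(rows, 0.3)); // first 30% of rows
        System.out.println(tailPct(rows, 0.3)); // last 30% of rows
    }
}
```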

In the next section, you’ll learn about adding new columns to support calculations and logic, and doing aggregations.

4. Manipulate and aggregate data

It's likely you've figured out a few of Deephaven’s fundamentals:

  • You name tables and operate on them. Everything in Deephaven is a table. Streams are updating tables. Batches are static ones. You don't have to track this.
  • You apply methods to these tables and can be blind about whether the data is updating or not.
  • You can refer to other named tables, and data simply flows from tables to their dependents. You may know this pattern as a directed acyclic graph (DAG). (See our concept guide on the table update model if you're interested in what's under the hood.)
  • There is no optimizer to wrestle with. You’ll appreciate this once you tackle complex use cases or need to bring your Python, Java, or wrapped C++ code to the data.

Aggregations are an important use case for streaming data. (And static, too.) Doing a single, dedicated aggregation, like the sumBy below, follows a pattern similar to the countBy you did earlier.
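The shape of a dedicated sum-by-key aggregation can be sketched in plain Java (hypothetical column names, not the Deephaven API):

```java
import java.util.*;

// Sketch of sumBy: sum a value column grouped by a key column.
public class SumBy {
    static Map<String, Double> sumBy(String[] keys, double[] vals) {
        Map<String, Double> out = new HashMap<>();
        for (int i = 0; i < keys.length; i++) {
            out.merge(keys[i], vals[i], Double::sum); // accumulate per key
        }
        return out;
    }

    public static void main(String[] args) {
        // e.g., total Size per Instrument
        System.out.println(sumBy(new String[]{"BTC", "ETH", "BTC"}, new double[]{1.0, 2.0, 0.5}));
    }
}
```

In Deephaven the engine maintains this result incrementally as new rows arrive, rather than recomputing the whole map.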

If your use case is well served by adding columns in formulaic, on-demand fashion (instead of writing results to memory), use updateView. In this case, you’ll calculate the value of each trade and mod-10 the Id field.
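The two formulas themselves are simple; here they are as plain Java (illustrative names, not the Deephaven API):

```java
// Sketch of the two updateView formulas: the notional value of a trade
// (Price * Size) and the Id field reduced mod 10.
public class TradeCalc {
    static double tradeValue(double price, double size) {
        return price * size; // Value = Price * Size
    }

    static long idMod10(long id) {
        return id % 10; // Id10 = Id % 10
    }

    public static void main(String[] args) {
        System.out.println(tradeValue(43000.0, 0.25)); // 10750.0
        System.out.println(idMod10(1234567));          // 7
    }
}
```

With updateView, these expressions are evaluated on demand as cells are read, instead of being materialized in memory.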


You can also add columns with Manage Custom Columns in the Table Options menu in the web UI.

Binning data is fundamental and is intended to be easy via upperBin and lowerBin. This is heavily used in profiling and sampling data.
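The arithmetic behind time binning can be sketched in plain Java (nanosecond longs stand in for Deephaven date-times; boundary handling here is an assumption of the sketch):

```java
// lowerBin rounds a timestamp down to the start of its bin;
// upperBin rounds it up to the end of its bin.
public class TimeBin {
    static long lowerBin(long tNanos, long binNanos) {
        return Math.floorDiv(tNanos, binNanos) * binNanos;
    }

    static long upperBin(long tNanos, long binNanos) {
        // In this sketch, an exact bin boundary maps to itself.
        return lowerBin(tNanos + binNanos - 1, binNanos);
    }

    public static void main(String[] args) {
        System.out.println(lowerBin(1043, 100)); // 1000
        System.out.println(upperBin(1043, 100)); // 1100
    }
}
```

Every timestamp in the same bin maps to the same value, which is what makes binning useful for profiling and sampling.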

The query below reuses the same table name (crypto_main). That’s just fine.

View distinct values using selectDistinct.


You can also accomplish this with Select Distinct Values in the Table Options menu in the web UI.

Performing multiple aggregations simultaneously is often both convenient and better for performance.

Let's define an aggregation function to be used later. The function will return an aggregation result based on the table and aggregation keys you pass in.

Below, you call aggregate_crypto with different numbers and combinations of keys. The last table has some extra polish to make the result easier on the eye.
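The idea of an aggregation function parameterized by its grouping keys can be sketched in plain Java (a concept sketch with hypothetical data, not the Deephaven API):

```java
import java.util.*;
import java.util.function.Function;

// Count rows grouped by a caller-supplied composite key, mirroring an
// aggregation function that takes its grouping keys as a parameter.
public class AggregateBy {
    static <T> Map<String, Integer> countBy(List<T> rows, Function<T, String> keyFn) {
        Map<String, Integer> out = new TreeMap<>();
        for (T row : rows) {
            out.merge(keyFn.apply(row), 1, Integer::sum);
        }
        return out;
    }

    public static void main(String[] args) {
        List<String[]> trades = List.of(
            new String[]{"BTC", "kraken"},
            new String[]{"BTC", "binance"},
            new String[]{"BTC", "kraken"});
        // One key...
        System.out.println(countBy(trades, t -> t[0]));
        // ...or two, chosen at the call site.
        System.out.println(countBy(trades, t -> t[0] + "|" + t[1]));
    }
}
```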

5. Filter, join, and as-of-join

Deephaven filtering is accomplished by applying where operations. The engine supports a large array of match, conditional, and combination filters.

These four scripts are simple examples.

Use whereIn to filter one table based on the contents of another "filter table". If the filter table updates, the filter applied to the other changes automatically.

In the third line below, you’ll filter the table crypto_main based on the Instrument values of the table row_1.
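The semantics of whereIn reduce to set membership, sketched here in plain Java (illustrative names, not the Deephaven API):

```java
import java.util.*;
import java.util.stream.Collectors;

// Keep only rows whose Instrument appears in the filter table's values,
// in the spirit of whereIn.
public class WhereIn {
    static List<String> whereIn(List<String> instruments, Set<String> filter) {
        return instruments.stream()
                .filter(filter::contains)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(whereIn(List.of("BTC", "ETH", "BTC", "LTC"), Set.of("BTC", "ETH")));
    }
}
```

The key difference in Deephaven is that when the filter table updates, the filtered result updates automatically.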

If you prefer, you can set variables using records from tables.

These lines, in combination, will print the record in the first index position (2nd row) of the Instrument column in the agg_by_instrument table to your console.

That variable can be used for filtering.

Deephaven joins are first class, supporting joining real-time, updating tables with each other (and with static tables) without any need for windowing.

Our guide, Choose a join method, offers guidance on how to choose the best method for your use case.

Generally, joins fall into one of two categories: relational joins, which match rows on exact keys, and time series (as-of) joins, which match rows based on timestamp order.

The syntax is generally as follows:

result1 = left_table.joinMethod(right_table, "Key", "Column_from_right_table")

Or, with multiple keys and columns:

result1 = left_table.joinMethod(right_table, "Key1, Key2, KeyN", "Column1_from_right, Column2_from_right, ColumnN_from_right")

You can rename columns as you join tables as well.

Use naturalJoin when you expect no more than one match in the right table per key, and are happy to receive null records as part of the join process.
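A plain-Java sketch of that behavior (not the Deephaven API): each left row keeps its place, and a missing key in the right table yields null.

```java
import java.util.*;

// Sketch of naturalJoin semantics: look up each left-table key in the
// right table; unmatched keys produce null, and left-row order is kept.
public class NaturalJoinSketch {
    static List<String> naturalJoin(List<String> leftKeys, Map<String, String> right) {
        List<String> joined = new ArrayList<>();
        for (String key : leftKeys) {
            joined.add(right.get(key)); // null when there is no match
        }
        return joined;
    }

    public static void main(String[] args) {
        System.out.println(naturalJoin(List.of("BTC", "DOGE"), Map.of("BTC", "USD")));
    }
}
```

If the right table had two rows for the same key, a real naturalJoin would error; this sketch assumes at most one match per key.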

Though Deephaven excels with relational joins, its ordering capabilities make it an excellent time series database.

Time series joins, or “as-of joins”, take a timestamp key from the left table and do a binary search in the right table (respecting other join keys) seeking an exact timestamp-nanosecond match. If no match exists, the timestamp just prior to the join-timestamp establishes the match target.

It is important to note:

  • The right table needs to be sorted.
  • Numerical fields other than date-times can also be used for the final key in as-of joins.
  • Reverse-as-of join is similar, but uses the record just after the target timestamp if no exact match is found.
  • One can syntactically use < or > (instead of =) in the query to eliminate the exact match as the best candidate.
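The core of the binary search described above can be sketched in plain Java (a concept sketch, not the Deephaven implementation):

```java
// Find the index of the greatest right-table timestamp <= ts, or -1 if
// none exists: the "match target" an as-of join seeks.
public class AsOfJoin {
    static int asOfIndex(long[] sortedTimestamps, long ts) {
        int lo = 0, hi = sortedTimestamps.length - 1, best = -1;
        while (lo <= hi) {
            int mid = (lo + hi) >>> 1;
            if (sortedTimestamps[mid] <= ts) {
                best = mid;      // candidate match; look for a later one
                lo = mid + 1;
            } else {
                hi = mid - 1;    // too late; search earlier
            }
        }
        return best;
    }

    public static void main(String[] args) {
        long[] rightTimes = {100, 200, 300};
        System.out.println(asOfIndex(rightTimes, 250)); // matches the row at t=200
    }
}
```

A reverse-as-of join is the mirror image: it searches for the smallest timestamp at or after the target.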

Introducing Exchange as a join key ahead of Timestamp in the script below directs the engine to first do an exact match on Exchange between the left and right tables, then perform the as-of join on Timestamp.

People often use aj to join records that are shifted in time relative to one another.

6. Plot data via query or the UI

Deephaven has a rich plotting API that supports updating, real-time plots. It can be called programmatically or via JS integrations in the web UI, and it integrates with the open-source plotly library. The suite of plots will continue to grow, with the Deephaven community setting the priorities.

Try these basic examples:

You can also make simple plots like these using the Chart Builder in the UI. Open the Table Options menu at the table's right. After choosing a chart type, you can configure the relevant options. Below, we create the same simple_line_plot as in our query above.

It's easy to export your data out of Deephaven to popular open formats.

To export our final, joined table to a CSV file, simply use the writeCsv method with the table and the location to which you want to save the file.

If the table is dynamically updating, Deephaven will automatically snapshot the data before writing it to the file.

Similarly, for Parquet:

In this case, we use writeTable with the optional argument "GZIP", which tells Deephaven to apply GZIP compression when writing the data.

To create a static pandas DataFrame from a table, use the tableToDataFrame method; dataFrameToTable converts in the opposite direction.