Deephaven at a glance

· 2 min read
Pete Goddard

It's natural to frame new information in the context of what you already know and understand. The same is true when determining where new vendors and technology/solution providers fit into established marketplaces. Deephaven enters an arena defined by Kafka, Spark, Influx, Redshift, BigQuery, Snowflake, Postgres, and dozens of other players.

Below, we break down Deephaven’s differentiators and value propositions, and place them in the context of the use cases they serve, so that you can leverage Deephaven Core for your next big data project.

Just the facts

What is Deephaven?

In short, a data query engine. So what does that mean? Well, beyond being a query engine, Deephaven's integrations, experiences, and APIs provide a turnkey framework that enables people to be immediately productive on nearly any data project.

Why use Deephaven?

If your work requires real-time and other dynamic data, time-series and relational computations, and custom Python, Java, or C++ executed alongside table operations, then Deephaven Core can help.
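To make "time-series and relational computations" concrete, the sketch below implements an as-of join, a classic time-series operation that enriches each event with the latest value known at that moment, in plain Python with the standard library. This is an illustration of the kind of computation described, not Deephaven's API; the quote and trade data are invented for the example.

```python
from bisect import bisect_right

# Quotes: (timestamp, price) pairs sorted by time; trades: timestamps to enrich.
# All values here are made up for illustration.
quotes = [(1, 100.0), (5, 100.5), (9, 101.0)]
trades = [2, 5, 10]

quote_times = [t for t, _ in quotes]

def as_of(ts):
    """Return the most recent quote price at or before ts, or None."""
    i = bisect_right(quote_times, ts) - 1
    return quotes[i][1] if i >= 0 else None

print([as_of(t) for t in trades])  # [100.0, 100.5, 101.0]
```

A query engine built for this workload performs such joins natively over tables, including live, updating ones, rather than requiring hand-rolled loops like the one above.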

How can you use Deephaven?

Developers and data scientists apply Deephaven technology for big data use cases that drive analytics and applications. It can be employed for a wide range of purposes, from simple transformations through machine learning.

Deephaven applications include consuming series of Kafka streams and connecting to IoT devices for edge data calculations. The list is endless.

Where are the data sources?

Deephaven works directly with your data and accesses it where it lives. Deephaven delivers access to partitioned, columnar data sources (like Parquet and Arrow Flight), as well as modern event streams (like Kafka, Redpanda, Solace, and Chronicle).

Additional data sources can be integrated to deliver data in-memory. Deephaven can connect to upstream applications directly and works with modern ML tools.

Who (or what) is under the hood?

To power such a dynamic query engine, Deephaven employs a number of software components: a graph-based update model, a unified abstraction over static and streaming tables, a high-performance Java engine with native CPython, NumPy, and SciPy through a jpy bridge (which Deephaven helps maintain), an array-oriented architecture, and more.
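The "array-oriented architecture" point is easiest to see outside the engine itself: operating on whole columns at once, NumPy-style, avoids per-row interpreter overhead. A minimal sketch in plain NumPy (not Deephaven code; the price/size columns are invented):

```python
import numpy as np

# Two "columns" of a million rows each, treated as whole arrays.
prices = np.linspace(100.0, 101.0, 1_000_000)
sizes = np.full(1_000_000, 10.0)

# One vectorized expression computes a derived column for every row at once,
# instead of a per-row Python loop.
notional = prices * sizes

print(notional[0], notional[-1])  # 1000.0 1010.0
```

Column-at-a-time execution like this is what lets an engine apply user Python to large tables without paying a function-call cost on every row.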

Read a more in-depth overview.