Class ApproximatePercentile

java.lang.Object
io.deephaven.engine.table.impl.by.ApproximatePercentile

public class ApproximatePercentile extends Object
Generate approximate percentile aggregations of a table.

The underlying data structure and algorithm used is a t-digest as described at https://github.com/tdunning/t-digest, which has a "compression" parameter that determines the size of the retained values. From the t-digest documentation, "100 is a common value for normal uses. 1000 is extremely large. The number of centroids retained will be a smallish (usually less than 10) multiple of this number."
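
To see what the compression parameter controls, the following sketch (plain t-digest library usage rather than Deephaven API; the compression value and the random sample data are illustrative) builds a digest over many samples and inspects how few centroids it retains:

 import com.tdunning.math.stats.TDigest;
 import java.util.Random;

 final TDigest digest = TDigest.createDigest(100.0); // compression parameter
 final Random random = new Random(0);
 for (int i = 0; i < 1_000_000; i++) {
     digest.add(random.nextGaussian());
 }
 // Only a small multiple of the compression parameter is retained as centroids,
 // no matter how many samples were added.
 System.out.println(digest.centroidCount());
 System.out.println(digest.quantile(0.95)); // approximate 95th percentile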

All input columns are cast to doubles and the result columns are doubles.

The input table must be add-only; if modifications or removals take place, an UnsupportedOperationException is thrown. For tables with adds and removals you must use exact percentiles with Aggregation.AggPct(double, java.lang.String...).
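
For example, the following is a minimal sketch of the exact-percentile alternative; it assumes an input table with removals that has "Latency" and "Size" columns and a "Sym" grouping column:

 import java.util.List;
 import io.deephaven.api.agg.Aggregation;
 import io.deephaven.engine.table.Table;

 // Exact percentiles tolerate modifications and removals, at the cost of
 // retaining the underlying values.
 final Table exact = input.aggBy(List.of(Aggregation.AggPct(0.95, "Latency", "Size")), "Sym");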

You may compute either one approximate percentile or several approximate percentiles at once. For example, to compute the 95th percentile of all other columns, grouped by the "Sym" column, you would call:

 ApproximatePercentile.approximatePercentileBy(input, 0.95, "Sym")
 

If you need to compute several percentiles, it is more efficient to compute them simultaneously. For example, the following computes the 75th, 95th, and 99th percentiles of the "Latency" column and the 95th and 99th percentiles of the "Size" column, grouped by "Sym", using a builder pattern:

 final Table aggregated = input.aggBy(List.of(
         Aggregation.ApproxPct("Latency", PctOut(0.75, "L75"), PctOut(0.95, "L95"), PctOut(0.99, "L99")),
         Aggregation.ApproxPct("Size", PctOut(0.95, "S95"), PctOut(0.99, "S99"))),
         "Sym");
 

When parallelizing a workload, you may want to divide it based on natural partitioning and then compute an overall percentile. In these cases, you should use the Aggregation.AggTDigest(java.lang.String...) aggregation to expose the internal t-digest structure as a column. If you then perform an array aggregation (TableOperations.groupBy()), you can call the accumulateDigests(io.deephaven.vector.ObjectVector<com.tdunning.math.stats.TDigest>) function to produce a single digest that represents all of the constituent digests. The amount of error introduced is related to the compression factor that you have selected for the digests. Once you have a combined digest object, you can call the quantile or other functions to extract the desired percentile.
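
The following sketch illustrates that workflow. It assumes two hypothetical per-partition tables, part1 and part2, each with "Sym" and "Latency" columns, and assumes the "Output=Input" pair syntax for AggTDigest; the output column names are also illustrative:

 import java.util.List;
 import io.deephaven.api.agg.Aggregation;
 import io.deephaven.engine.table.Table;
 import io.deephaven.engine.util.TableTools;

 // Expose a t-digest column on each partition, grouped by Sym.
 final Table digests1 = part1.aggBy(List.of(Aggregation.AggTDigest("LatencyDigest=Latency")), "Sym");
 final Table digests2 = part2.aggBy(List.of(Aggregation.AggTDigest("LatencyDigest=Latency")), "Sym");

 // Merge the per-partition results, gather each Sym's digests into a vector,
 // combine them into a single digest, and extract the approximate 99th percentile.
 final Table overall = TableTools.merge(digests1, digests2)
         .groupBy("Sym")
         .update("LatencyDigest = io.deephaven.engine.table.impl.by.ApproximatePercentile.accumulateDigests(LatencyDigest)",
                 "L99 = LatencyDigest.quantile(0.99)");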

  • Method Details

    • approximatePercentileBy

      public static Table approximatePercentileBy(Table input, double percentile)
      Compute the approximate percentiles for the table.
      Parameters:
      input - the input table
      percentile - the percentile to compute for each column
      Returns:
      a single row table with double columns representing the approximate percentile for each column of the input table
    • approximatePercentileBy

      public static Table approximatePercentileBy(Table input, double percentile, String... groupByColumns)
      Compute the approximate percentiles for the table.
      Parameters:
      input - the input table
      percentile - the percentile to compute for each column
      groupByColumns - the columns to group by
      Returns:
      a table with the groupByColumns and double columns representing the approximate percentile for each remaining column of the input table
    • approximatePercentileBy

      public static Table approximatePercentileBy(Table input, double percentile, ColumnName... groupByColumns)
      Compute the approximate percentiles for the table.
      Parameters:
      input - the input table
      percentile - the percentile to compute for each column
      groupByColumns - the columns to group by
      Returns:
      a table with the groupByColumns and double columns representing the approximate percentile for each remaining column of the input table
    • approximatePercentileBy

      public static Table approximatePercentileBy(Table input, double compression, double percentile, ColumnName... groupByColumns)
      Compute the approximate percentiles for the table.
      Parameters:
      input - the input table
      compression - the t-digest compression parameter
      percentile - the percentile to compute for each column
      groupByColumns - the columns to group by
      Returns:
      a table with the groupByColumns and double columns representing the approximate percentile for each remaining column of the input table
    • accumulateDigests

      public static com.tdunning.math.stats.TDigest accumulateDigests(ObjectVector<com.tdunning.math.stats.TDigest> array)
      Accumulate a Vector of TDigests into a single new TDigest.

      Accumulate the digests within the Vector into a single TDigest. The compression factor is one third of the compression factor of the first digest within the array. If the array has only a single element, then that element is returned. If a null array is passed in, null is returned.

      This function is intended to be used for parallelization. The first step is to independently expose a t-digest aggregation column with the appropriate compression factor on each of a set of sub-tables, using Aggregation.AggTDigest(java.lang.String...) and TableOperations.aggBy(io.deephaven.api.agg.Aggregation). Next, call TableOperations.groupBy(String...) to produce arrays of digests for each relevant bucket. Once the arrays are created, use this function to accumulate the arrays of digests within a TableOperations.update(java.lang.String...) statement. Finally, you may call the TDigest quantile function (or others) to produce the desired approximate percentile.

      Parameters:
      array - an array of TDigests
      Returns:
      the accumulated TDigest
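
      As a standalone illustration, the sketch below combines two digests built directly with the t-digest library and reads an overall percentile from the result; the sample values, the compression of 100, and the use of ObjectVectorDirect as the concrete vector type are assumptions:

        import com.tdunning.math.stats.TDigest;
        import io.deephaven.vector.ObjectVectorDirect;

        final TDigest a = TDigest.createDigest(100.0);
        final TDigest b = TDigest.createDigest(100.0);
        for (int i = 0; i < 10_000; i++) {
            a.add(i);           // samples from one partition
            b.add(10_000 + i);  // samples from another partition
        }
        // The combined digest's compression is one third of a's compression.
        final TDigest combined = ApproximatePercentile.accumulateDigests(new ObjectVectorDirect<>(a, b));
        final double p99 = combined.quantile(0.99); // approximate 99th percentile over both partitions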