Class KafkaPublishOptions

java.lang.Object
io.deephaven.kafka.KafkaPublishOptions

@Immutable public abstract class KafkaPublishOptions extends Object
The options to produce a Kafka stream from a Deephaven table.
  • Constructor Details

    • KafkaPublishOptions

      public KafkaPublishOptions()
  • Method Details

    • builder

      public static KafkaPublishOptions.Builder builder()
    • table

      public abstract Table table()
      The table used as a source of data to be sent to Kafka.
      Returns:
      the table
    • topic

      @Nullable public abstract String topic()
      The default Kafka topic to publish to. When null, topicColumn() must be set.
      Returns:
      the default Kafka topic
    • partition

      public abstract OptionalInt partition()
      The default Kafka partition to publish to.
      Returns:
      the default Kafka partition
    • config

      public abstract Properties config()
      The Kafka configuration properties.
      Returns:
      the Kafka configuration
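      Taken together, table(), topic(), and config() are enough to publish a table. A minimal sketch (assuming Immutables-style builder setters that mirror the accessors on this class, and the KafkaTools.produceFromTable entry point; the broker address, topic, and "Value" column are placeholders):

      ```java
      import java.util.Properties;

      import io.deephaven.engine.table.Table;
      import io.deephaven.kafka.KafkaPublishOptions;
      import io.deephaven.kafka.KafkaTools;

      public class PublishSketch {
          public static void publish(Table source) {
              // Standard Kafka producer configuration
              final Properties props = new Properties();
              props.put("bootstrap.servers", "localhost:9092");

              final KafkaPublishOptions options = KafkaPublishOptions.builder()
                      .table(source)          // source of rows to publish
                      .topic("example-topic") // default topic for every record
                      .config(props)
                      // record value taken from the "Value" column; key is ignored by default
                      .valueSpec(KafkaTools.Produce.simpleSpec("Value"))
                      .build();

              KafkaTools.produceFromTable(options);
          }
      }
      ```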
    • keySpec

      @Default public KafkaTools.Produce.KeyOrValueSpec keySpec()
      The conversion specification for Kafka record keys from table column data. Defaults to KafkaTools.Produce.ignoreSpec().
      Returns:
      the key spec
    • valueSpec

      @Default public KafkaTools.Produce.KeyOrValueSpec valueSpec()
      The conversion specification for Kafka record values from table column data. Defaults to KafkaTools.Produce.ignoreSpec().
      Returns:
      the value spec
    • lastBy

      @Default public boolean lastBy()
      Whether to publish only the last record for each unique key. If true, the publishing logic will internally perform a lastBy aggregation on table() grouped by the input columns of keySpec(). When true, keySpec() must not be KafkaTools.Produce.ignoreSpec(). Defaults to false.
      Returns:
      if the publishing should be done with a last-by table
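      For example, publishing only the latest row per key pairs lastBy(true) with a non-ignore keySpec. A hedged sketch (builder setter names are assumed to mirror the accessors above, as is typical for an Immutables builder; the "Symbol" and "Price" columns and topic name are placeholders):

      ```java
      import java.util.Properties;

      import io.deephaven.engine.table.Table;
      import io.deephaven.kafka.KafkaPublishOptions;
      import io.deephaven.kafka.KafkaTools;

      public class LastBySketch {
          public static KafkaPublishOptions lastQuotePerSymbol(Table quotes, Properties props) {
              return KafkaPublishOptions.builder()
                      .table(quotes)
                      .topic("latest-quotes")
                      .config(props)
                      // lastBy requires a real key spec: the aggregation groups by its input columns
                      .keySpec(KafkaTools.Produce.simpleSpec("Symbol"))
                      .valueSpec(KafkaTools.Produce.simpleSpec("Price"))
                      .lastBy(true) // publish only the latest row for each Symbol
                      .build();
          }
      }
      ```

      With this configuration, downstream consumers of the topic see at most one live record per Symbol rather than the full update stream.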
    • publishInitial

      @Default public boolean publishInitial()
      Whether the initial data in table() should be published. When false, table() must be refreshing. Defaults to true.
      Returns:
      if the initial table data should be published
    • topicColumn

      public abstract Optional<ColumnName> topicColumn()
      The topic column. When set, uses the given CharSequence-compatible column from table() as the first source for setting the Kafka record topic. When not present, or if the column value is null, topic() will be used.
      Returns:
      the topic column name
    • partitionColumn

      public abstract Optional<ColumnName> partitionColumn()
      The partition column. When set, uses the given int column from table() as the first source for setting the Kafka record partition. When not present, or if the column value is null, partition() will be used if present. If a valid partition number is specified, that partition will be used when sending the record. Otherwise, Kafka will choose a partition using a hash of the key if the key is present, or will assign a partition in a round-robin fashion if the key is not present.
      Returns:
      the partition column name
    • timestampColumn

      public abstract Optional<ColumnName> timestampColumn()
      The timestamp column. When set, uses the given Instant column from table() as the first source for setting the Kafka record timestamp. When not present, or if the column value is null, the producer will stamp the record with its current time. The timestamp eventually used by Kafka depends on the timestamp type configured for the topic. If the topic is configured to use CreateTime, the timestamp in the producer record will be used by the broker. If the topic is configured to use LogAppendTime, the timestamp in the producer record will be overwritten by the broker with the broker local time when it appends the message to its log.
      Returns:
      the timestamp column name
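      The three per-record columns above can be combined so that each row carries its own routing and timestamp, with topic() (and partition(), if set) serving as the fallback for null cells. A sketch (assuming ColumnName.of from io.deephaven.api and builder setters mirroring the accessors; all column and topic names are placeholders):

      ```java
      import java.util.Properties;

      import io.deephaven.api.ColumnName;
      import io.deephaven.engine.table.Table;
      import io.deephaven.kafka.KafkaPublishOptions;
      import io.deephaven.kafka.KafkaTools;

      public class RoutingSketch {
          public static KafkaPublishOptions routed(Table events, Properties props) {
              return KafkaPublishOptions.builder()
                      .table(events)
                      .topic("fallback-topic") // used when a row's Topic cell is null
                      .config(props)
                      .valueSpec(KafkaTools.Produce.simpleSpec("Payload"))
                      .topicColumn(ColumnName.of("Topic"))         // per-record topic (CharSequence-compatible column)
                      .partitionColumn(ColumnName.of("Partition")) // per-record partition (int column)
                      .timestampColumn(ColumnName.of("EventTime")) // per-record timestamp (Instant column)
                      .build();
          }
      }
      ```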