Package io.deephaven.kafka
Class KafkaPublishOptions
java.lang.Object
io.deephaven.kafka.KafkaPublishOptions
The options to produce a Kafka stream from a Deephaven table.
-
Nested Class Summary
Nested Classes
    static class KafkaPublishOptions.Builder
Constructor Summary
Constructors
    KafkaPublishOptions()
Method Summary
static KafkaPublishOptions.Builder builder()
abstract Properties config()
    The Kafka configuration properties.
abstract KafkaTools.Produce.KeyOrValueSpec keySpec()
    The conversion specification for Kafka record keys from table column data.
boolean lastBy()
    Whether to publish only the last record for each unique key.
abstract OptionalInt partition()
    The default Kafka partition to publish to.
abstract Optional<ColumnName> partitionColumn()
    The partition column.
boolean publishInitial()
    If the initial data in table() should be published.
abstract Table table()
    The table used as a source of data to be sent to Kafka.
abstract Optional<ColumnName> timestampColumn()
    The timestamp column.
abstract String topic()
    The default Kafka topic to publish to.
abstract Optional<ColumnName> topicColumn()
    The topic column.
abstract KafkaTools.Produce.KeyOrValueSpec valueSpec()
    The conversion specification for Kafka record values from table column data.
-
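Options are assembled with the builder and handed to the publishing entry point. A minimal sketch, assuming a source Table with "Sym" and "Price" columns, a local broker address, and KafkaTools.produceFromTable as the entry point (all names here are illustrative, not mandated by this class):

```java
import java.util.Properties;

import io.deephaven.engine.table.Table;
import io.deephaven.kafka.KafkaPublishOptions;
import io.deephaven.kafka.KafkaTools;

public class PublishSketch {
    public static void publish(Table source) {
        // Assumed broker address; see config() for the producer properties contract.
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");

        KafkaPublishOptions options = KafkaPublishOptions.builder()
                .table(source)                                     // required source table
                .topic("quotes")                                   // hypothetical default topic
                .config(props)
                .keySpec(KafkaTools.Produce.simpleSpec("Sym"))     // key from assumed "Sym" column
                .valueSpec(KafkaTools.Produce.simpleSpec("Price")) // value from assumed "Price" column
                .build();

        KafkaTools.produceFromTable(options);
    }
}
```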
Constructor Details
-
KafkaPublishOptions
public KafkaPublishOptions()
-
-
Method Details
-
builder
-
table
The table used as a source of data to be sent to Kafka.
Returns:
    the table
-
topic
The default Kafka topic to publish to. When null, topicColumn() must be set.
Returns:
    the default Kafka topic
See Also:
    topicColumn()
-
partition
The default Kafka partition to publish to.
Returns:
    the default Kafka partition
See Also:
    partitionColumn()
-
config
The Kafka configuration properties.
Returns:
    the Kafka configuration
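The configuration is a plain java.util.Properties holding standard Kafka producer settings. A small runnable sketch, with an assumed broker address and an illustrative acks setting (neither is prescribed by this class):

```java
import java.util.Properties;

public class ConfigExample {
    // Build producer properties for KafkaPublishOptions.config().
    // The broker address and acks value below are assumptions for illustration.
    public static Properties kafkaConfig() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("acks", "all");
        return props;
    }

    public static void main(String[] args) {
        Properties props = kafkaConfig();
        System.out.println(props.getProperty("bootstrap.servers"));
    }
}
```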
-
keySpec
The conversion specification for Kafka record keys from table column data. By default, is KafkaTools.Produce.ignoreSpec().
Returns:
    the key spec
-
valueSpec
The conversion specification for Kafka record values from table column data. By default, is KafkaTools.Produce.ignoreSpec().
Returns:
    the value spec
-
lastBy
@Default public boolean lastBy()
Whether to publish only the last record for each unique key. If true, the publishing logic will internally perform a lastBy aggregation on table() grouped by the input columns of keySpec(). When true, the keySpec must not be ignoreSpec. By default, is false.
Returns:
    if the publishing should be done with a last-by table
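With lastBy enabled, only the latest row per key reaches Kafka. A hedged sketch, assuming a hypothetical "Sym"-keyed source table (column and topic names are illustrative):

```java
import java.util.Properties;

import io.deephaven.engine.table.Table;
import io.deephaven.kafka.KafkaPublishOptions;
import io.deephaven.kafka.KafkaTools;

public class LastBySketch {
    public static KafkaPublishOptions lastByOptions(Table source, Properties props) {
        return KafkaPublishOptions.builder()
                .table(source)
                .topic("latest-quotes")                            // hypothetical topic
                .config(props)
                // lastBy(true) requires a non-ignore keySpec: the publisher groups by
                // the key columns ("Sym" here) and emits only the latest row per key.
                .keySpec(KafkaTools.Produce.simpleSpec("Sym"))
                .valueSpec(KafkaTools.Produce.simpleSpec("Price")) // assumed value column
                .lastBy(true)
                .build();
    }
}
```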
-
publishInitial
@Default public boolean publishInitial()
If the initial data in table() should be published. When false, table() must be refreshing. By default, is true.
Returns:
    if the initial table data should be published
-
topicColumn
The topic column. When set, uses the given CharSequence-compatible column from table() as the first source for setting the Kafka record topic. When not present, or if the column value is null, topic() will be used.
Returns:
    the topic column name
-
partitionColumn
The partition column. When set, uses the given int column from table() as the first source for setting the Kafka record partition. When not present, or if the column value is null, partition() will be used if present. If a valid partition number is specified, that partition will be used when sending the record. Otherwise, Kafka will choose a partition using a hash of the key if the key is present, or will assign a partition in a round-robin fashion if the key is not present.
Returns:
    the partition column name
-
timestampColumn
The timestamp column. When set, uses the given Instant column from table() as the first source for setting the Kafka record timestamp. When not present, or if the column value is null, the producer will stamp the record with its current time. The timestamp eventually used by Kafka depends on the timestamp type configured for the topic: if the topic is configured to use CreateTime, the timestamp in the producer record will be used by the broker; if the topic is configured to use LogAppendTime, the timestamp in the producer record will be overwritten by the broker with the broker-local time when it appends the message to its log.
Returns:
    the timestamp column name
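The three per-record column options can be combined so each row routes and stamps its own record, with topic() and partition() as fallbacks. A sketch assuming hypothetical "TopicCol" (String), "PartCol" (int), and "TsCol" (Instant) columns in the source table:

```java
import java.util.Properties;

import io.deephaven.api.ColumnName;
import io.deephaven.engine.table.Table;
import io.deephaven.kafka.KafkaPublishOptions;
import io.deephaven.kafka.KafkaTools;

public class RoutingSketch {
    public static KafkaPublishOptions routedOptions(Table source, Properties props) {
        return KafkaPublishOptions.builder()
                .table(source)
                .config(props)
                .topic("fallback-topic")                      // used when TopicCol is null
                .topicColumn(ColumnName.of("TopicCol"))       // per-row topic (CharSequence-compatible)
                .partitionColumn(ColumnName.of("PartCol"))    // per-row partition (int)
                .timestampColumn(ColumnName.of("TsCol"))      // per-row record timestamp (Instant)
                .valueSpec(KafkaTools.Produce.simpleSpec("Payload")) // assumed value column
                .build();
    }
}
```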
-