Class PublishToKafka<K,V>
- All Implemented Interfaces:
LogOutputAppendable, LivenessManager, LivenessNode, LivenessReferent, Serializable
- See Also:
-
Field Summary
-
Constructor Summary
PublishToKafka(Properties props, Table table, String defaultTopic, Integer defaultPartition, String[] keyColumns, org.apache.kafka.common.serialization.Serializer<K> kafkaKeySerializer, KeyOrValueSerializer<K> keyChunkSerializer, String[] valueColumns, org.apache.kafka.common.serialization.Serializer<V> kafkaValueSerializer, KeyOrValueSerializer<V> valueChunkSerializer, ColumnName topicColumn, ColumnName partitionColumn, ColumnName timestampColumn, boolean publishInitial)
Construct a publisher for table according to the Kafka props for the supplied topic.

PublishToKafka(Properties props, Table table, String topic, String[] keyColumns, org.apache.kafka.common.serialization.Serializer<K> kafkaKeySerializer, KeyOrValueSerializer<K> keyChunkSerializer, String[] valueColumns, org.apache.kafka.common.serialization.Serializer<V> kafkaValueSerializer, KeyOrValueSerializer<V> valueChunkSerializer, boolean publishInitial)
Deprecated, for removal: This API element is subject to removal in a future version.
Method Summary
protected void destroy()
Attempt to release (destructively when necessary) resources held by this object.

static Long timestampMillis(LongChunk<?> nanosChunk, int index)

Methods inherited from class io.deephaven.engine.liveness.LivenessArtifact
manageWithCurrentScope
Methods inherited from class io.deephaven.engine.liveness.ReferenceCountedLivenessNode
getWeakReference, initializeTransientFieldsForLiveness, onReferenceCountAtZero, tryManage, tryUnmanage, tryUnmanage
Methods inherited from class io.deephaven.engine.liveness.ReferenceCountedLivenessReferent
dropReference, tryRetainReference
Methods inherited from class io.deephaven.util.referencecounting.ReferenceCounted
append, decrementReferenceCount, forceReferenceCountToZero, getReferenceCountDebug, incrementReferenceCount, resetReferenceCount, toString, tryDecrementReferenceCount, tryIncrementReferenceCount
Methods inherited from class java.lang.Object
clone, equals, finalize, getClass, hashCode, notify, notifyAll, wait, wait, wait
Methods inherited from interface io.deephaven.engine.liveness.LivenessManager
manage
Methods inherited from interface io.deephaven.engine.liveness.LivenessNode
unmanage, unmanage
Methods inherited from interface io.deephaven.engine.liveness.LivenessReferent
dropReference, getReferentDescription, retainReference, tryRetainReference
-
Field Details
-
CHUNK_SIZE
public static final int CHUNK_SIZE
-
-
Constructor Details
-
PublishToKafka
@Deprecated(forRemoval=true)
public PublishToKafka(Properties props, Table table, String topic, String[] keyColumns, org.apache.kafka.common.serialization.Serializer<K> kafkaKeySerializer, KeyOrValueSerializer<K> keyChunkSerializer, String[] valueColumns, org.apache.kafka.common.serialization.Serializer<V> kafkaValueSerializer, KeyOrValueSerializer<V> valueChunkSerializer, boolean publishInitial)
Deprecated, for removal: This API element is subject to removal in a future version.
PublishToKafka
public PublishToKafka(Properties props, Table table, String defaultTopic, Integer defaultPartition, String[] keyColumns, org.apache.kafka.common.serialization.Serializer<K> kafkaKeySerializer, KeyOrValueSerializer<K> keyChunkSerializer, String[] valueColumns, org.apache.kafka.common.serialization.Serializer<V> kafkaValueSerializer, KeyOrValueSerializer<V> valueChunkSerializer, ColumnName topicColumn, ColumnName partitionColumn, ColumnName timestampColumn, boolean publishInitial)

Construct a publisher for table according to the Kafka props for the supplied topic. The new publisher will produce records for existing table data at construction.

If table is a dynamic, refreshing table (Table.isRefreshing()), the calling thread must block the update graph by holding either its exclusive lock or its shared lock. The publisher will install a listener in order to produce new records as updates become available. Callers must be sure to maintain a reference to the publisher and ensure that it remains live. The easiest way to do this may be to construct the publisher enclosed by a liveness scope with enforceStrongReachability specified as true, and release the scope when publication is no longer needed. For example:

    // To initiate publication:
    final LivenessScope publisherScope = new LivenessScope(true);
    try (final SafeCloseable ignored = LivenessScopeStack.open(publisherScope, false)) {
        new PublishToKafka(...);
    }
    // To cease publication:
    publisherScope.release();
- Parameters:
props - The Kafka Properties
table - The source Table
defaultTopic - The default destination topic
defaultPartition - The default destination partition
keyColumns - Optional array of string column names from table for the columns corresponding to Kafka's Key field
kafkaKeySerializer - The Kafka Serializer to use for keys
keyChunkSerializer - Optional KeyOrValueSerializer to consume table data and produce Kafka record keys in chunk-oriented fashion
valueColumns - Optional array of string column names from table for the columns corresponding to Kafka's Value field
kafkaValueSerializer - The Kafka Serializer to use for values
valueChunkSerializer - Optional KeyOrValueSerializer to consume table data and produce Kafka record values in chunk-oriented fashion
publishInitial - Whether the initial data in table should be published
topicColumn - The topic column. When set, uses the given CharSequence column from table as the first source for setting the Kafka record topic.
partitionColumn - The partition column. When set, uses the given int column from table as the first source for setting the Kafka record partition.
timestampColumn - The timestamp column. When set, uses the given Instant column from table as the first source for setting the Kafka record timestamp.
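The props argument is a standard Kafka producer configuration. Below is a minimal sketch of a configuration that could be handed to this constructor; the broker address and serializer class choices are illustrative assumptions, not requirements of the API:

```java
import java.util.Properties;

public class ProducerProps {
    // Build a minimal Kafka producer configuration. The broker address and
    // serializer classes below are assumptions chosen for illustration.
    public static Properties minimalProducerProps() {
        final Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092"); // assumed broker
        props.setProperty("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.setProperty("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        return props;
    }
}
```

Any additional producer tuning (acks, batching, compression) would be set on the same Properties object before constructing the publisher.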
-
-
Method Details
-
timestampMillis
-
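The summary above shows timestampMillis(LongChunk<?> nanosChunk, int index) returning a Long. A plausible reading, sketched here under two labeled assumptions: the chunk holds epoch-nanosecond values, and Deephaven's null sentinel for long is Long.MIN_VALUE (QueryConstants.NULL_LONG). A plain long[] stands in for LongChunk<?> so the sketch is self-contained:

```java
public class TimestampMillisSketch {
    // Assumption: Deephaven's null sentinel for long values (QueryConstants.NULL_LONG).
    static final long NULL_LONG = Long.MIN_VALUE;

    // Convert the epoch-nanoseconds value at `index` to epoch milliseconds,
    // mapping the null sentinel to a null Long. A long[] stands in for the
    // LongChunk<?> used by the real method signature.
    public static Long timestampMillis(final long[] nanos, final int index) {
        final long value = nanos[index];
        return value == NULL_LONG ? null : value / 1_000_000L;
    }
}
```

This is how a timestampColumn holding Instant values (stored internally as epoch nanos) would be narrowed to the millisecond precision Kafka records expect.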
destroy
protected void destroy()
Description copied from class: ReferenceCountedLivenessReferent
Attempt to release (destructively when necessary) resources held by this object. This may render the object unusable for subsequent operations. Implementations should be sure to call super.destroy(). This is intended to only ever be used as a side effect of decreasing the reference count to 0.
- Overrides:
destroy
in class ReferenceCountedLivenessReferent
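The contract above says destroy() runs only as a side effect of the reference count reaching zero. A self-contained sketch of that pattern (an illustration of the contract, not Deephaven's implementation) using an AtomicInteger:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class RefCounted {
    // Starts at 1: the creator holds the initial reference.
    private final AtomicInteger refCount = new AtomicInteger(1);
    private volatile boolean destroyed = false;

    // Take an additional reference; fails if the object is already dead.
    public void retainReference() {
        if (refCount.getAndIncrement() <= 0) {
            throw new IllegalStateException("already destroyed");
        }
    }

    // Drop a reference; destroy() runs exactly once, when the count hits zero.
    public void dropReference() {
        if (refCount.decrementAndGet() == 0) {
            destroy();
        }
    }

    // Release resources here; overrides would call super.destroy().
    protected void destroy() {
        destroyed = true;
    }

    public boolean isDestroyed() {
        return destroyed;
    }
}
```

This is why callers must keep the publisher live (e.g. via a LivenessScope): once the last reference is dropped, destroy() tears down the producer and publication stops.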
-
See Also: KafkaTools.produceFromTable(KafkaPublishOptions)