Class PublishToKafka<K,V>

All Implemented Interfaces:
LogOutputAppendable, LivenessManager, LivenessNode, LivenessReferent, Serializable

@InternalUseOnly public class PublishToKafka<K,V> extends LivenessArtifact
This class is an internal implementation detail for io.deephaven.kafka; it is not intended to be used directly by client code. It lives in a separate package as a means of code organization.
  • Field Details

    • CHUNK_SIZE

      public static final int CHUNK_SIZE
  • Constructor Details

    • PublishToKafka

      @Deprecated(forRemoval=true) public PublishToKafka(Properties props, Table table, String topic, String[] keyColumns, org.apache.kafka.common.serialization.Serializer<K> kafkaKeySerializer, KeyOrValueSerializer<K> keyChunkSerializer, String[] valueColumns, org.apache.kafka.common.serialization.Serializer<V> kafkaValueSerializer, KeyOrValueSerializer<V> valueChunkSerializer, boolean publishInitial)
      Deprecated, for removal: This API element is subject to removal in a future version.
    • PublishToKafka

      public PublishToKafka(Properties props, Table table, String defaultTopic, Integer defaultPartition, String[] keyColumns, org.apache.kafka.common.serialization.Serializer<K> kafkaKeySerializer, KeyOrValueSerializer<K> keyChunkSerializer, String[] valueColumns, org.apache.kafka.common.serialization.Serializer<V> kafkaValueSerializer, KeyOrValueSerializer<V> valueChunkSerializer, ColumnName topicColumn, ColumnName partitionColumn, ColumnName timestampColumn, boolean publishInitial)

      Construct a publisher for table according to the Kafka props for the supplied topic.

      If publishInitial is true, the new publisher will produce records for existing table data at construction.

      If table is a dynamic, refreshing table (Table.isRefreshing()), the calling thread must block the update graph by holding either its exclusive lock or its shared lock. The publisher will install a listener in order to produce new records as updates become available. Callers must be sure to maintain a reference to the publisher and ensure that it remains live. The easiest way to do this may be to construct the publisher enclosed by a liveness scope with enforceStrongReachability specified as true, and release the scope when publication is no longer needed. For example:

           // To initiate publication:
           final LivenessScope publisherScope = new LivenessScope(true);
           try (final SafeCloseable ignored = LivenessScopeStack.open(publisherScope, false)) {
               new PublishToKafka(...);
           }
           // To cease publication:
           publisherScope.release();
       
      Parameters:
      props - The Kafka Properties
      table - The source Table
      defaultTopic - The default destination topic
      defaultPartition - The default destination partition
      keyColumns - Optional array of string column names from table for the columns corresponding to Kafka's Key field
      kafkaKeySerializer - The Kafka Serializer to use for keys
      keyChunkSerializer - Optional KeyOrValueSerializer to consume table data and produce Kafka record keys in chunk-oriented fashion
      valueColumns - Optional array of string column names from table for the columns corresponding to Kafka's Value field
      kafkaValueSerializer - The Kafka Serializer to use for values
      valueChunkSerializer - Optional KeyOrValueSerializer to consume table data and produce Kafka record values in chunk-oriented fashion
      topicColumn - The topic column. When set, uses the given CharSequence column from table as the first source for setting the Kafka record topic.
      partitionColumn - The partition column. When set, uses the given int column from table as the first source for setting the Kafka record partition.
      timestampColumn - The timestamp column. When set, uses the given Instant column from table as the first source for setting the Kafka record timestamp.
      publishInitial - Whether the initial data in table should be published
  • Method Details

    • timestampMillis

      public static Long timestampMillis(LongChunk<?> nanosChunk, int index)
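      The signature above carries no description. Assuming it follows Deephaven's convention of using Long.MIN_VALUE (QueryConstants.NULL_LONG) as the null sentinel for long columns, a plausible sketch of the nanosecond-to-millisecond conversion is the following, with a plain long[] standing in for LongChunk:

      ```java
      // Sketch only: TimestampMillisSketch is a hypothetical stand-in, not Deephaven code.
      public class TimestampMillisSketch {
          // Deephaven's null sentinel for long values (QueryConstants.NULL_LONG)
          public static final long NULL_LONG = Long.MIN_VALUE;

          // Convert the epoch-nanos value at the given index to epoch-millis,
          // mapping the null sentinel to a null boxed Long.
          public static Long timestampMillis(long[] nanos, int index) {
              final long v = nanos[index];
              return v == NULL_LONG ? null : v / 1_000_000L;
          }
      }
      ```

      A null result would let the producer fall back to the broker-assigned record timestamp rather than publishing a bogus epoch value.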
    • destroy

      protected void destroy()
      Description copied from class: ReferenceCountedLivenessReferent
      Attempt to release (destructively when necessary) resources held by this object. This may render the object unusable for subsequent operations. Implementations should be sure to call super.destroy().

      This is intended to only ever be used as a side effect of decreasing the reference count to 0.

      Overrides:
      destroy in class ReferenceCountedLivenessReferent
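      The contract above (implementations must call super.destroy()) can be illustrated with a minimal sketch; BaseReferent and Publisher below are placeholder classes standing in for ReferenceCountedLivenessReferent and PublishToKafka, not the real Deephaven types:

      ```java
      // Placeholder for ReferenceCountedLivenessReferent: destroy() is invoked
      // only as a side effect of the reference count reaching zero.
      class BaseReferent {
          boolean baseDestroyed = false;

          protected void destroy() {
              baseDestroyed = true;
          }
      }

      // Placeholder for PublishToKafka: an overriding class must delegate to
      // super.destroy() before (or after) releasing its own resources.
      class Publisher extends BaseReferent {
          boolean resourcesReleased = false;

          @Override
          protected void destroy() {
              super.destroy();          // contract: always call super.destroy()
              resourcesReleased = true; // then release owned resources, e.g. the producer
          }
      }
      ```

      Skipping the super call would leave the base class's cleanup undone even though the object is no longer reachable, which is exactly what the contract guards against.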