Package io.deephaven.kafka.ingest
Class KafkaIngester
java.lang.Object
io.deephaven.kafka.ingest.KafkaIngester
An ingester that consumes an Apache Kafka topic and a subset of its partitions via one or more stream consumers.
This class is an internal implementation detail for io.deephaven.kafka; it is not intended to be used directly by client code. It lives in a separate package as a means of code organization.
-
Nested Class Summary
static class KafkaIngester.PartitionRange: A predicate for handling a range of partitions.
static class KafkaIngester.PartitionRoundRobin: A predicate for evenly distributing partitions among a set of ingesters.
static class KafkaIngester.SinglePartition: A predicate for handling a single partition.
-
Field Summary
static final IntPredicate ALL_PARTITIONS: Constant predicate that returns true for all partitions.
static final IntToLongFunction ALL_PARTITIONS_DONT_SEEK
static final IntToLongFunction ALL_PARTITIONS_SEEK_TO_BEGINNING
static final IntToLongFunction ALL_PARTITIONS_SEEK_TO_END
static final long DONT_SEEK
static final long SEEK_TO_BEGINNING
static final long SEEK_TO_END
-
Constructor Summary
KafkaIngester(@NotNull Logger log, @NotNull Properties props, @NotNull String topic, @NotNull IntPredicate partitionFilter, @NotNull Function<org.apache.kafka.common.TopicPartition, KafkaRecordConsumer> partitionToStreamConsumer, @NotNull KafkaTools.InitialOffsetLookup partitionToInitialSeekOffset, @NotNull org.apache.kafka.common.serialization.Deserializer<?> keyDeserializer, @NotNull org.apache.kafka.common.serialization.Deserializer<?> valueDeserializer, @Nullable KafkaTools.ConsumerLoopCallback consumerLoopCallback)
Creates a Kafka ingester for the given topic.
-
Method Summary
void shutdown()
void shutdownPartition(int partition)
void start(): Starts a consumer thread which replicates the consumed Kafka messages to Deephaven.
String toString()
-
Field Details
-
ALL_PARTITIONS
public static final IntPredicate ALL_PARTITIONS
Constant predicate that returns true for all partitions. This is the default: each and every partition that exists will be handled by the same ingester. Because Kafka consumers are inherently single-threaded, to scale beyond what a single consumer can handle, you must create multiple consumers, each with a subset of partitions, using KafkaIngester.PartitionRange, KafkaIngester.PartitionRoundRobin, KafkaIngester.SinglePartition, or a custom IntPredicate.
-
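As an illustration, here is a minimal sketch of how a partition subset might be expressed. The PartitionRange constructor arguments shown (first and last partition, inclusive) and the assignment of the nested classes to IntPredicate are assumptions; consult the nested class documentation for the exact signatures.

    import java.util.function.IntPredicate;
    import io.deephaven.kafka.ingest.KafkaIngester;

    public class PartitionFilters {
        // Handle only partitions 0 through 3 with one ingester
        // (assumed constructor: first and last partition, inclusive).
        static final IntPredicate RANGE = new KafkaIngester.PartitionRange(0, 3);

        // Any subset can also be expressed directly as a custom IntPredicate,
        // e.g. only even-numbered partitions for this ingester instance.
        static final IntPredicate EVEN_PARTITIONS = partition -> partition % 2 == 0;
    }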
SEEK_TO_BEGINNING
public static final long SEEK_TO_BEGINNING
-
DONT_SEEK
public static final long DONT_SEEK
-
SEEK_TO_END
public static final long SEEK_TO_END
-
ALL_PARTITIONS_SEEK_TO_BEGINNING
public static final IntToLongFunction ALL_PARTITIONS_SEEK_TO_BEGINNING
-
ALL_PARTITIONS_DONT_SEEK
public static final IntToLongFunction ALL_PARTITIONS_DONT_SEEK
-
ALL_PARTITIONS_SEEK_TO_END
public static final IntToLongFunction ALL_PARTITIONS_SEEK_TO_END
-
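For illustration, a sketch of writing per-partition initial offsets as an IntToLongFunction in terms of these constants. That ALL_PARTITIONS_SEEK_TO_BEGINNING behaves like the first function below is inferred from its name, the semantics attributed to DONT_SEEK are likewise assumed, and the offsets in the second function are hypothetical.

    import java.util.function.IntToLongFunction;
    import io.deephaven.kafka.ingest.KafkaIngester;

    public class SeekOffsets {
        // Start every partition from the earliest available offset; presumably what
        // KafkaIngester.ALL_PARTITIONS_SEEK_TO_BEGINNING does for all partitions.
        static final IntToLongFunction SEEK_ALL_TO_BEGINNING =
                partition -> KafkaIngester.SEEK_TO_BEGINNING;

        // Seek partitions 0 through 2 to hypothetical known offsets and leave the others
        // at the consumer's default position via DONT_SEEK (assumed semantics).
        static final IntToLongFunction RESUME_SOME =
                partition -> partition < 3 ? 1_000L * (partition + 1) : KafkaIngester.DONT_SEEK;
    }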
-
Constructor Details
-
KafkaIngester
public KafkaIngester(@NotNull Logger log,
                     @NotNull Properties props,
                     @NotNull String topic,
                     @NotNull IntPredicate partitionFilter,
                     @NotNull Function<org.apache.kafka.common.TopicPartition, KafkaRecordConsumer> partitionToStreamConsumer,
                     @NotNull KafkaTools.InitialOffsetLookup partitionToInitialSeekOffset,
                     @NotNull org.apache.kafka.common.serialization.Deserializer<?> keyDeserializer,
                     @NotNull org.apache.kafka.common.serialization.Deserializer<?> valueDeserializer,
                     @Nullable KafkaTools.ConsumerLoopCallback consumerLoopCallback)
Creates a Kafka ingester for the given topic.
Parameters:
log - A log for output
props - The properties used to create the KafkaConsumer
topic - The topic to replicate
partitionFilter - A predicate indicating which partitions we should replicate
partitionToStreamConsumer - A function implementing a mapping from partition to its consumer of records. The function will be invoked once per partition at construction; implementations should internally defer resource allocation until the first call to KafkaRecordConsumer.consume(long, List) or StreamFailureConsumer.acceptFailure(Throwable) if appropriate.
partitionToInitialSeekOffset - A function implementing a mapping from partition to its initial seek offset, or -1 if seek to beginning is intended.
keyDeserializer - The key deserializer, see KafkaConsumer(Properties, Deserializer, Deserializer)
valueDeserializer - The value deserializer, see KafkaConsumer(Properties, Deserializer, Deserializer)
consumerLoopCallback - The consumer loop callback
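Because KafkaIngester is an internal class, client code would normally go through io.deephaven.kafka instead. Still, a hedged sketch of how the constructor arguments fit together is shown below; the Logger and KafkaTools import paths, the topic name, and the ByteArrayDeserializer choice are assumptions, and the record-consumer factory and offset lookup are left as caller-supplied placeholders rather than concrete implementations.

    import java.util.Properties;
    import java.util.function.Function;

    import org.apache.kafka.common.TopicPartition;
    import org.apache.kafka.common.serialization.ByteArrayDeserializer;

    import io.deephaven.io.logger.Logger;
    import io.deephaven.kafka.KafkaTools;
    import io.deephaven.kafka.ingest.KafkaIngester;
    import io.deephaven.kafka.ingest.KafkaRecordConsumer;

    public class IngesterWiring {
        static KafkaIngester wire(
                Logger log,
                Properties consumerProperties,
                Function<TopicPartition, KafkaRecordConsumer> recordConsumerFactory,
                KafkaTools.InitialOffsetLookup offsetLookup) {
            final KafkaIngester ingester = new KafkaIngester(
                    log,                              // a log for output
                    consumerProperties,               // passed through to the underlying KafkaConsumer
                    "orders",                         // example topic name (assumption)
                    KafkaIngester.ALL_PARTITIONS,     // this ingester handles every partition
                    recordConsumerFactory,            // one KafkaRecordConsumer per assigned partition
                    offsetLookup,                     // initial seek offset per partition
                    new ByteArrayDeserializer(),      // key deserializer
                    new ByteArrayDeserializer(),      // value deserializer
                    null);                            // no consumer loop callback
            ingester.start();                         // starts the single consumer thread
            return ingester;
        }
    }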
-
-
Method Details
-
toString
public String toString()
-
start
public void start()
Starts a consumer thread which replicates the consumed Kafka messages to Deephaven. This method must not be called more than once on an ingester instance.
-
shutdown
public void shutdown()
-
shutdownPartition
public void shutdownPartition(int partition)
-
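A brief lifecycle sketch tying these methods together. The JVM shutdown hook is illustrative only, and the assumption that shutdown() stops the consumer thread follows from the method name rather than from this page.

    import io.deephaven.kafka.ingest.KafkaIngester;

    public class IngesterLifecycle {
        // start() must not be called more than once per ingester instance.
        static void run(KafkaIngester ingester) {
            ingester.start();
            // Stop consuming when the JVM exits; a real application would pick its own trigger,
            // and could call shutdownPartition(int) to stop a single partition (assumed semantics).
            Runtime.getRuntime().addShutdownHook(new Thread(ingester::shutdown));
        }
    }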