Package io.deephaven.kafka.ingest
Class MultiFieldChunkAdapter
java.lang.Object
io.deephaven.kafka.ingest.MultiFieldChunkAdapter
- All Implemented Interfaces:
KeyOrValueProcessor
- Direct Known Subclasses:
GenericRecordChunkAdapter, JsonNodeChunkAdapter
Constructor Summary
protected MultiFieldChunkAdapter(TableDefinition definition, IntFunction<ChunkType> chunkTypeForIndex, Map<String, String> fieldNamesToColumnNames, boolean allowNulls, FieldCopier.Factory fieldCopierFactory)
Method Summary
static int[] chunkOffsets(TableDefinition definition, Map<String, String> fieldNamesToColumnNames)
void handleChunk(ObjectChunk<Object, Values> inputChunk, WritableChunk<Values>[] publisherChunks)
After consuming a set of generic records for a batch that are not raw objects, we pass the keys or values to an appropriate handler.
Constructor Details
MultiFieldChunkAdapter
protected MultiFieldChunkAdapter(TableDefinition definition, IntFunction<ChunkType> chunkTypeForIndex, Map<String, String> fieldNamesToColumnNames, boolean allowNulls, FieldCopier.Factory fieldCopierFactory)
Method Details
chunkOffsets
public static int[] chunkOffsets(TableDefinition definition, Map<String, String> fieldNamesToColumnNames)
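No description is published for chunkOffsets, but from its signature and its role in handleChunk it presumably resolves each mapped field to the position of its target column within the table definition. The following is a minimal, self-contained sketch of that resolution; the plain list of column names is a hypothetical stand-in for TableDefinition, not the actual Deephaven API:

```java
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class ChunkOffsetsSketch {
    // Hypothetical stand-in for TableDefinition: just the ordered column names.
    // For each (fieldName -> columnName) mapping, record the column's position.
    static int[] chunkOffsets(List<String> columnNames, Map<String, String> fieldNamesToColumnNames) {
        final int[] offsets = new int[fieldNamesToColumnNames.size()];
        int i = 0;
        for (String columnName : fieldNamesToColumnNames.values()) {
            final int offset = columnNames.indexOf(columnName);
            if (offset < 0) {
                throw new IllegalArgumentException("Unknown column: " + columnName);
            }
            offsets[i++] = offset;
        }
        return offsets;
    }

    public static void main(String[] args) {
        final List<String> tableColumns = List.of("KafkaTimestamp", "Symbol", "Price");
        // LinkedHashMap preserves the field order used for the copiers.
        final Map<String, String> fieldToColumn = new LinkedHashMap<>();
        fieldToColumn.put("sym", "Symbol");
        fieldToColumn.put("px", "Price");
        System.out.println(Arrays.toString(chunkOffsets(tableColumns, fieldToColumn)));
        // Prints [1, 2]: each mapped field resolves to its column's position.
    }
}
```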
handleChunk
public void handleChunk(ObjectChunk<Object, Values> inputChunk, WritableChunk<Values>[] publisherChunks)
Description copied from interface: KeyOrValueProcessor
After consuming a set of generic records for a batch that are not raw objects, we pass the keys or values to an appropriate handler. The handler must know its data types and offsets within the publisher chunks, and "copy" the data from the inputChunk to the appropriate chunks for the stream publisher.
- Specified by:
handleChunk in interface KeyOrValueProcessor
- Parameters:
inputChunk - the chunk containing the keys or values as Kafka deserialized them from the consumer record
publisherChunks - the output chunks for this table that must be appended to
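The copy step described above can be illustrated with a simplified, self-contained sketch: one copier per mapped field pulls that field out of every input record and appends it to the output column selected by the precomputed offset. The FieldCopier interface, the forField helper, and the list-based chunks below are hypothetical stand-ins for Deephaven's chunk and FieldCopier types, not the actual API:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class HandleChunkSketch {
    // Hypothetical stand-in for FieldCopier: copies one field from every input
    // record into the destination column.
    interface FieldCopier {
        void copyField(List<Map<String, Object>> inputChunk, List<Object> publisherChunk);
    }

    // Build a copier that extracts the named field from map-shaped records.
    static FieldCopier forField(String fieldName) {
        return (inputChunk, publisherChunk) -> {
            for (Map<String, Object> record : inputChunk) {
                publisherChunk.add(record.get(fieldName));
            }
        };
    }

    // Simplified analogue of handleChunk: each copier appends its field to the
    // publisher chunk at the column offset computed up front (cf. chunkOffsets).
    static void handleChunk(List<Map<String, Object>> inputChunk,
                            List<List<Object>> publisherChunks,
                            FieldCopier[] copiers, int[] offsets) {
        for (int i = 0; i < copiers.length; i++) {
            copiers[i].copyField(inputChunk, publisherChunks.get(offsets[i]));
        }
    }

    public static void main(String[] args) {
        final List<Map<String, Object>> input = List.of(
                Map.of("sym", "AAPL", "px", 190.0),
                Map.of("sym", "MSFT", "px", 410.0));
        // Three output columns; fields "sym" and "px" land in columns 1 and 2.
        final List<List<Object>> publisherChunks = List.of(
                new ArrayList<>(), new ArrayList<>(), new ArrayList<>());
        handleChunk(input, publisherChunks,
                new FieldCopier[] {forField("sym"), forField("px")}, new int[] {1, 2});
        System.out.println(publisherChunks.get(1)); // Prints [AAPL, MSFT]
    }
}
```

Because the offsets are resolved once, the per-batch work is a flat loop over copiers, appending to output columns rather than assembling rows.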