consume_to_partitioned_table
The consume_to_partitioned_table method reads a Kafka stream into an in-memory partitioned table, with one constituent table per Kafka partition.
Syntax
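A sketch of the call signature, inferred from the parameter table below (parameter order and defaults are assumptions):

```
consume_to_partitioned_table(
    kafka_config: dict,
    topic: str,
    partitions: list[int] = None,
    offsets: dict[int, int] = None,
    key_spec: KeyValueSpec = None,
    value_spec: KeyValueSpec = None,
    table_type: TableType = None,
) -> PartitionedTable
```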
Parameters
| Parameter | Type | Description |
|---|---|---|
| kafka_config | dict | Configuration for the associated Kafka consumer and the resulting table. Once the table-specific properties are stripped, the remaining ones are used to call the constructor of the underlying Kafka consumer; pass any consumer-specific configuration here. |
| topic | str | The Kafka topic name. |
| partitions (optional) | list[int] | The Kafka partition numbers to subscribe to. If not given, all partitions are subscribed. |
| offsets (optional) | dict[int, int] | A mapping from partition number to the offset at which to start consuming that partition. If not given, consumption starts from the first message received after the call. |
| key_spec (optional) | KeyValueSpec | Specifies how to map the Key field in Kafka messages to table column(s). Any of the specifications produced by helpers such as simple_spec, json_spec, avro_spec, or protobuf_spec may be used, as well as KeyValueSpec.IGNORE. |
| value_spec (optional) | KeyValueSpec | Specifies how to map the Value field in Kafka messages to table column(s). Accepts the same specifications as key_spec. |
| table_type (optional) | TableType | The type of the resulting constituent tables: one of TableType.append(), TableType.blink(), or TableType.ring(capacity). |
Returns
An in-memory partitioned table.
Examples
In the following example, consume_to_partitioned_table reads the Kafka topic orders into a partitioned table. It uses the JSON format to parse the stream. Because we do not provide partitions or offsets values, the consumer subscribes to all partitions and starts from the first message received after the consume_to_partitioned_table call.
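A sketch of this example, assuming a broker reachable at redpanda:9092 and an orders topic whose JSON values carry Symbol, Side, Price, and Qty fields (the broker address, topic schema, and column names are illustrative):

```python
from deephaven import kafka_consumer as kc
from deephaven.stream.kafka.consumer import TableType, KeyValueSpec
import deephaven.dtypes as dht

# Consume the "orders" topic into a partitioned table. Keys are ignored;
# values are parsed as JSON into the listed columns. The broker address
# and column definitions below are illustrative assumptions.
pt = kc.consume_to_partitioned_table(
    {"bootstrap.servers": "redpanda:9092"},
    "orders",
    key_spec=KeyValueSpec.IGNORE,
    value_spec=kc.json_spec(
        [
            ("Symbol", dht.string),
            ("Side", dht.string),
            ("Price", dht.double),
            ("Qty", dht.int_),
        ]
    ),
    table_type=TableType.blink(),
)
```

Running this requires a live Deephaven server and a reachable Kafka broker, so it cannot be executed standalone.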
The following example, like the one above, reads the JSON-formatted Kafka topic orders into a partitioned table. This time, though, it uses a Jackson provider object processor specification.
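A sketch of the Jackson-based variant, under the same illustrative broker and schema assumptions; it assumes an object_processor_spec helper and a Jackson provider in deephaven.json.jackson:

```python
from deephaven import kafka_consumer as kc
from deephaven.stream.kafka.consumer import TableType, KeyValueSpec
from deephaven.json import jackson

# Same topic as above, but the JSON values are parsed by a Jackson
# provider wrapped in an object processor specification. The broker
# address and field types are illustrative assumptions.
pt = kc.consume_to_partitioned_table(
    {"bootstrap.servers": "redpanda:9092"},
    "orders",
    key_spec=KeyValueSpec.IGNORE,
    value_spec=kc.object_processor_spec(
        jackson.provider({"Symbol": str, "Side": str, "Price": float, "Qty": int})
    ),
    table_type=TableType.blink(),
)
```

As above, this requires a live Deephaven server and Kafka broker, so it cannot be executed standalone.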