Plugin Name: KafkaInput
Connects to a Kafka broker and subscribes to messages from the specified topic and partition.
Client ID string. Default is the hostname.
List of broker addresses.
How many times to retry a metadata request when a partition is in the middle of leader election. Default is 3.
How long to wait for leader election to finish between retries (in milliseconds). Default is 250.
How frequently the client will refresh the cluster metadata in the background (in milliseconds). Default is 600000 (10 minutes). Set to 0 to disable.
How many outstanding requests the broker is allowed to have before blocking attempts to send. Default is 4.
How long to wait for the initial connection to succeed before timing out and returning an error (in milliseconds). Default is 60000 (1 minute).
How long to wait for a response before timing out and returning an error (in milliseconds). Default is 60000 (1 minute).
How long to wait for a transmit to succeed before timing out and returning an error (in milliseconds). Default is 60000 (1 minute).
Kafka topic (must be set).
Kafka topic partition. Default is 0.
A string that uniquely identifies the group of consumer processes to which this consumer belongs. By setting the same group id, multiple processes indicate that they are all part of the same consumer group. Default is the client ID.
The default (maximum) amount of data to fetch from the broker in each request. The default is 32768 bytes.
The minimum amount of data to fetch in a request - the broker will wait until at least this many bytes are available. The default is 1, as 0 causes the consumer to spin when no messages are available.
The maximum permissible message size - messages larger than this will return MessageTooLarge. The default of 0 is treated as no limit.
The maximum amount of time the broker will wait for min_fetch_size bytes to become available before it returns fewer than that anyway. The default is 250ms, since 0 causes the consumer to spin when no events are available. 100-500ms is a reasonable range for most cases.
The method used to determine the offset at which to begin consuming messages. The valid values are:

- Manual: Heka will track the offset and resume from where it last left off (default).
- Newest: Heka will start reading from the most recent available offset.
- Oldest: Heka will start reading from the oldest available offset.
The number of events to buffer in the Events channel. Having this non-zero permits the consumer to continue fetching messages in the background while client code consumes events, greatly improving throughput. The default is 16.
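Taken together, the tuning options above might look like the following in a KafkaInput section. This is a sketch, not a tested configuration: only the `type`, `topic`, and `addrs` keys appear in the examples in this document; the remaining key names are assumptions inferred from the option descriptions above.

```toml
# Sketch only: key names other than type/topic/addrs are assumed
# from the descriptions above; verify against your Heka build's docs.
[KafkaInputTuned]
type = "KafkaInput"
topic = "heka"
addrs = ["localhost:9092"]
partition = 0            # default partition
min_fetch_size = 1       # broker waits for at least this many bytes
max_wait_time = 250      # ms; upper bound on that wait
event_buffer_size = 16   # events buffered while client code consumes
```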
Example 1: Read Fxa messages from partition 0.
[FxaKafkaInputTest]
type = "KafkaInput"
topic = "Fxa"
addrs = ["localhost:9092"]
Example 2: Send messages between two Heka instances via a Kafka broker.
# On the producing instance
[KafkaOutputExample]
type = "KafkaOutput"
message_matcher = "TRUE"
topic = "heka"
addrs = ["kafka-broker:9092"]
encoder = "ProtobufEncoder"
# On the consuming instance
[KafkaInputExample]
type = "KafkaInput"
topic = "heka"
addrs = ["kafka-broker:9092"]
splitter = "KafkaSplitter"
decoder = "ProtobufDecoder"

[KafkaSplitter]
type = "NullSplitter"
use_message_bytes = true
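A section like the consuming-instance example above can also be generated programmatically. The helper below is hypothetical (it is not part of Heka); it only encodes two rules stated in this document: `topic` must be set, and `partition` defaults to 0. The `partition` key name itself is an assumption based on the option descriptions above.

```python
# Hypothetical helper: render a minimal KafkaInput TOML section.
# Encodes the documented constraints: topic is required (no default),
# partition defaults to 0. Not part of Heka itself.

def kafka_input_section(name, topic, addrs, partition=0):
    """Return a [name] TOML table configuring a KafkaInput plugin."""
    if not topic:
        raise ValueError("topic must be set")
    addr_list = ", ".join('"%s"' % a for a in addrs)
    lines = [
        "[%s]" % name,
        'type = "KafkaInput"',
        'topic = "%s"' % topic,
        "addrs = [%s]" % addr_list,
        "partition = %d" % partition,
    ]
    return "\n".join(lines)

print(kafka_input_section("KafkaInputExample", "heka", ["kafka-broker:9092"]))
```

Calling it with an empty topic raises ValueError, mirroring the "must be set" requirement in the option reference above.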