Outputs

Common Output Parameters

There are some configuration options that are universally available to all Heka output plugins. These will be consumed by Heka itself when Heka initializes the plugin and do not need to be handled by the plugin-specific initialization code.

  • message_matcher (string, optional):

    Boolean expression, when evaluated to true, passes the message to the output for processing. Defaults to matching nothing. See: Message Matcher Syntax

  • message_signer (string, optional):

    The name of the message signer. If specified, only messages with this signer are passed to the output for processing.

  • ticker_interval (uint, optional):

    Frequency (in seconds) that a timer event will be sent to the output. Defaults to not sending timer events.

  • encoder (string, optional):

    New in version 0.6.

    Encoder to be used by the output. This should refer to the name of an encoder plugin section that is specified elsewhere in the TOML configuration. Messages can be encoded using the specified encoder by calling the OutputRunner’s Encode() method.

  • use_framing (bool, optional):

    New in version 0.6.

    Specifies whether or not Heka’s Stream Framing should be applied to the binary data returned from the OutputRunner’s Encode() method.

  • can_exit (bool, optional):

    New in version 0.7.

    Whether or not this plugin can exit without causing Heka to shut down. Defaults to false.
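
For illustration, a hypothetical output section combining several of these common parameters might look like the following minimal sketch (the section name, address, matcher, and values are placeholders, not settings from the original documentation; address itself is a TcpOutput-specific option):

[example_output]
type = "TcpOutput"
address = "remote-heka.example.com:5565"
message_matcher = "Type == 'example.log'"
ticker_interval = 60
encoder = "ProtobufEncoder"
use_framing = true
can_exit = true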

AMQPOutput

Connects to a remote AMQP broker (RabbitMQ) and sends messages to the specified queue. Messages are encoded using the configured encoder (the ProtobufEncoder by default) before being published. As AMQP is dynamically programmable, the broker topology needs to be specified.

Config:

  • url (string):

    An AMQP connection string formatted per the RabbitMQ URI Spec.

  • exchange (string):

    AMQP exchange name

  • exchange_type (string):

    AMQP exchange type (fanout, direct, topic, or headers).

  • exchange_durability (bool):

    Whether the exchange should be configured as a durable exchange. Defaults to non-durable.

  • exchange_auto_delete (bool):

    Whether the exchange is deleted when all queues have finished and there is no publishing. Defaults to auto-delete.

  • routing_key (string):

    The message routing key used to bind the queue to the exchange. Defaults to empty string.

  • persistent (bool):

    Whether published messages should be marked as persistent or transient. Defaults to non-persistent.

  • retries (RetryOptions, optional):

    A sub-section that specifies the settings to be used for restart behavior. See Configuring Restarting Behavior and the retries sketch following the example below.

New in version 0.6.

  • content_type (string):

    MIME content type of the payload used in the AMQP header. Defaults to “application/hekad”.

  • encoder (string, optional):

    Specifies which of the registered encoders should be used for converting Heka messages to binary data that is sent out over the AMQP connection. Defaults to the always available “ProtobufEncoder”.

  • use_framing (bool, optional):

    Specifies whether or not the encoded data sent out over the AMQP connection should be delimited by Heka’s Stream Framing. Defaults to true.

New in version 0.6.

  • tls (TlsConfig):

    An optional sub-section that specifies the settings to be used for any SSL/TLS encryption. This will only have any impact if the URL uses the AMQPS URI scheme. See Configuring TLS.

Example (that sends log lines from the logger):

[AMQPOutput]
url = "amqp://guest:guest@rabbitmq/"
exchange = "testout"
exchange_type = "fanout"
message_matcher = 'Logger == "TestWebserver"'
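
The retries sub-section referenced above is expressed as a nested TOML table beneath the output’s section. A minimal sketch, assuming the option names documented under Configuring Restarting Behavior (delay, max_delay, max_jitter, max_retries); the values shown are illustrative:

[AMQPOutput.retries]
delay = "250ms"
max_delay = "30s"
max_jitter = "500ms"
max_retries = -1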

CarbonOutput

CarbonOutput plugins parse the “stat metric” messages generated by a StatAccumulator and write the extracted counter, timer, and gauge data out to a graphite compatible carbon daemon. Output is written over a TCP or UDP socket using the plaintext protocol.

Config:

  • address (string):

    An IP address:port to which this plugin will write. (default: “localhost:2003”)

New in version 0.5.

  • protocol (string):

    “tcp” or “udp” (default: “tcp”)

  • tcp_keep_alive (bool):

    If set, keep the TCP connection open and reuse it until a failure occurs, then retry. (default: false)

Example:

[CarbonOutput]
message_matcher = "Type == 'heka.statmetric'"
address = "localhost:2003"
protocol = "udp"

DashboardOutput

Specialized output plugin that listens for certain Heka reporting message types and generates JSON data which is made available via HTTP for use in web based dashboards and health reports.

Config:

  • ticker_interval (uint):

    Specifies how often, in seconds, the dashboard files should be updated. Defaults to 5.

  • message_matcher (string):

    Defaults to “Type == ‘heka.all-report’ || Type == ‘heka.sandbox-output’ || Type == ‘heka.sandbox-terminated’”. Not recommended to change this unless you know what you’re doing.

  • address (string):

    An IP address:port on which we will serve output via HTTP. Defaults to “0.0.0.0:4352”.

  • working_directory (string):

    File system directory into which the plugin will write data files and from which it will serve HTTP. The Heka process must have read / write access to this directory. Relative paths will be evaluated relative to the Heka base directory. Defaults to $(BASE_DIR)/dashboard.

  • static_directory (string):

    File system directory where the Heka dashboard source code can be found. The Heka process must have read access to this directory. Relative paths will be evaluated relative to the Heka base directory. Defaults to ${SHARE_DIR}/dasher.

New in version 0.7.

  • headers (subsection, optional):

    It is possible to inject arbitrary HTTP headers into each outgoing response by adding a TOML subsection entitled “headers” to your DashboardOutput config section. All entries in the subsection must be a list of string values.

Example:

[DashboardOutput]
ticker_interval = 30

ElasticSearchOutput

Output plugin that uses HTTP or UDP to insert records into an ElasticSearch database. Note that it is up to the specified encoder to both serialize the message into a JSON structure and to prepend that with the appropriate ElasticSearch BulkAPI indexing JSON. Usually this output is used in conjunction with an ElasticSearch-specific encoder plugin, such as ESJsonEncoder, ESLogstashV0Encoder, or ESPayloadEncoder.

Config:

  • flush_interval (int):

    Interval at which accumulated messages should be bulk indexed into ElasticSearch, in milliseconds. Defaults to 1000 (i.e. one second).

  • flush_count (int):

    Number of messages that, if processed, will trigger them to be bulk indexed into ElasticSearch. Defaults to 10.

  • server (string):

    ElasticSearch server URL. Supports http://, https://, and udp:// URLs. Defaults to “http://localhost:9200”.

  • http_timeout (int):

    Time in milliseconds to wait for a response for each http post to ES. This may drop data as there is currently no retry. Default is 0 (no timeout).

Example:

[ElasticSearchOutput]
message_matcher = "Type == 'sync.log'"
server = "http://es-server:9200"
flush_interval = 5000
flush_count = 10
encoder = "ESJsonEncoder"

FileOutput

Writes message data out to a file system.

Config:

  • path (string):

    Full path to the output file.

  • perm (string, optional):

    File permission for writing. Must be a string representation of an octal integer. Defaults to “644”.

  • folder_perm (string, optional):

    Permissions to apply to any directories created to hold the output file if its parent directory doesn’t already exist. Must be a string representation of an octal integer. Defaults to “700”.

  • flush_interval (uint32, optional):

    Interval at which accumulated file data should be written to disk, in milliseconds (default 1000, i.e. 1 second). Set to 0 to disable.

  • flush_count (uint32, optional):

    Number of messages to accumulate until file data should be written to disk (default 1, minimum 1).

  • flush_operator (string, optional):

    Operator describing how the two parameters “flush_interval” and “flush_count” are combined. Allowed values are “AND” or “OR” (default is “AND”).

New in version 0.6.

  • use_framing (bool, optional):

    Specifies whether or not the data written to the file should be delimited by Heka’s Stream Framing. Defaults to true if a ProtobufEncoder is used, false otherwise.

Example:

[PayloadEncoder]
prefix_ts = true

[counter_file]
type = "FileOutput"
message_matcher = "Type == 'heka.counter-output'"
path = "/var/log/heka/counter-output.log"
perm = "666"
flush_count = 100
flush_operator = "OR"
encoder = "PayloadEncoder"

HttpOutput

New in version 0.6.

A very simple output plugin that uses HTTP GET, POST, or PUT requests to deliver data to an HTTP endpoint. When using POST or PUT request methods the encoded output will be uploaded as the request body. When using GET the encoded output will be ignored.

This output doesn’t support any request batching; each received message will generate an HTTP request. Batching can be achieved by use of a filter plugin that accumulates message data, periodically emitting a single message containing the batched, encoded HTTP request data in the payload. An HttpOutput can then be configured to capture these batch messages, using a PayloadEncoder to extract the message payload.

For now the HttpOutput only supports statically defined request parameters (URL, headers, auth, etc.). Future iterations will provide a mechanism for dynamically specifying these values on a per-message basis.

Config:

  • address (string):

    URL address of HTTP server to which requests should be sent. Must begin with “http://” or “https://”.

  • method (string, optional):

    HTTP request method to use, must be one of GET, POST, or PUT. Defaults to POST.

  • username (string, optional):

    If specified, HTTP Basic Auth will be used with the provided user name.

  • password (string, optional):

    If specified, HTTP Basic Auth will be used with the provided password.

  • headers (subsection, optional):

    It is possible to inject arbitrary HTTP headers into each outgoing request by adding a TOML subsection entitled “headers” to your HttpOutput config section. All entries in the subsection must be a list of string values; see the sketch following the example below.

  • tls (subsection, optional):

    A sub-section that specifies the settings to be used for any SSL/TLS encryption. This will only have any impact if an “https://” address is used. See Configuring TLS.

Example:

[PayloadEncoder]

[influxdb]
message_matcher = "Type == 'influx.formatted'"
address = "http://influxdb.example.com:8086/db/stats/series"
encoder = "PayloadEncoder"
username = "MyUserName"
password = "MyPassword"

IrcOutput

Connects to an Irc server and sends messages to the specified Irc channels. Output is encoded using the specified encoder and is expected to be properly truncated to fit within the bounds of an Irc message before the output receives it.

Config:

  • server (string):

    A host:port of the irc server that Heka will connect to for sending output.

  • nick (string):

    Irc nick used by Heka.

  • ident (string):

    The Irc identity Heka will use to log in.

  • password (string, optional):

    The password used to connect to the Irc server.

  • channels (list of strings):

    A list of Irc channels which every matching Heka message is sent to. If there is a space in the channel string, then the part after the space is expected to be a password for a protected irc channel.

  • timeout (uint, optional):

    The maximum amount of time (in seconds) to wait before timing out when connecting to, reading from, or writing to the Irc server. Defaults to 10.

  • tls (TlsConfig, optional):

    A sub-section that specifies the settings to be used for any SSL/TLS encryption. This will only have any impact if use_tls is set to true. See Configuring TLS.

  • queue_size (uint, optional):

    The maximum number of messages Heka will queue per Irc channel before discarding messages. A fallback queue of the same size is used if all per-channel queues are full. These queues are used when Heka is unable to send a message to an Irc channel, such as when it hasn’t joined or has been disconnected. Defaults to 100.

  • rejoin_on_kick (bool, optional):

    Set this if you want Heka to automatically re-join an Irc channel after being kicked. If not set, and Heka is kicked, it will not attempt to rejoin ever. Defaults to false.

  • ticker_interval (uint, optional):

    How often (in seconds) Heka should send a message from its queue to the Irc server. This is on a per-message basis, not per channel. Defaults to 2.

  • time_before_reconnect (uint, optional):

    How long to wait (in seconds) before reconnecting to the Irc server after being disconnected. Defaults to 3.

  • time_before_rejoin (uint, optional):

    How long to wait (in seconds) before attempting to rejoin an Irc channel which is full. Defaults to 3.

  • max_join_retries (uint, optional):

    The maximum number of times Heka will attempt to join an Irc channel before giving up. After these attempts are exhausted, Heka will no longer attempt to join the channel. Defaults to 3.

  • verbose_irc_logging (bool, optional):

    Enable this to see the raw internal message events Heka receives from the server. Defaults to false.

  • encoder (string):

    Specifies which of the registered encoders should be used for converting Heka messages into what is sent to the irc channels.

  • retries (RetryOptions, optional):

    A sub-section that specifies the settings to be used for restart behavior. See Configuring Restarting Behavior.

Example:

[IrcOutput]
message_matcher = 'Type == "alert"'
encoder = "PayloadEncoder"
server = "irc.mozilla.org:6667"
nick = "heka_bot"
ident "heka_ident"
channels = [ "#heka_bot_irc testkeypassword" ]
rejoin_on_kick = true
queue_size = 200
ticker_interval = 1

LogOutput

Logs messages to stdout using Go’s log package.

Config:

<none>

Example:

[counter_output]
type = "LogOutput"
message_matcher = "Type == 'heka.counter-output'"
encoder = "PayloadEncoder"

NagiosOutput

Specialized output plugin that listens for Nagios external command message types and delivers passive service check results to Nagios using either HTTP requests made to the Nagios cmd.cgi API or the send_nsca binary. The message payload must consist of a state followed by a colon and then the message, e.g., “OK:Service is functioning properly”. The valid states are: OK|WARNING|CRITICAL|UNKNOWN. Nagios must be configured with a service name that matches the Heka plugin instance name and the hostname where the plugin is running.

Config:

  • url (string, optional):

    An HTTP URL to the Nagios cmd.cgi. Defaults to http://localhost/nagios/cgi-bin/cmd.cgi.

  • username (string, optional):

    Username used to authenticate with the Nagios web interface. Defaults to empty string.

  • password (string, optional):

    Password used to authenticate with the Nagios web interface. Defaults to empty string.

  • response_header_timeout (uint, optional):

    Specifies the amount of time, in seconds, to wait for a server’s response headers after fully writing the request. Defaults to 2.

  • nagios_service_description (string, optional):

    Must match Nagios service’s service_description attribute. Defaults to the name of the output.

  • nagios_host (string, optional):

    Must match the hostname of the server in nagios. Defaults to the Hostname attribute of the message.

  • send_nsca_bin (string, optional):

    New in version 0.5.

    Path to the send_nsca binary, used to deliver results instead of sending HTTP requests. Not supplying this value means HTTP will be used, and any other send_nsca_* settings will be ignored.

  • send_nsca_args ([]string, optional):

    New in version 0.5.

    Arguments to use with send_nsca, usually at least the nagios hostname, e.g. [“-H”, “nagios.somehost.com”]. Defaults to an empty list.

  • send_nsca_timeout (int, optional):

    New in version 0.5.

    Timeout for the send_nsca command, in seconds. Defaults to 5.

  • use_tls (bool, optional):

    New in version 0.5.

    Specifies whether or not SSL/TLS encryption should be used for the TCP connections. Defaults to false.

  • tls (TlsConfig, optional):

    New in version 0.5.

    A sub-section that specifies the settings to be used for any SSL/TLS encryption. This will only have any impact if use_tls is set to true. See Configuring TLS.

Example configuration to output alerts from SandboxFilter plugins:

[NagiosOutput]
url = "http://localhost/nagios/cgi-bin/cmd.cgi"
username = "nagiosadmin"
password = "nagiospw"
message_matcher = "Type == 'heka.sandbox-output' && Fields[payload_type] == 'nagios-external-command' && Fields[payload_name] == 'PROCESS_SERVICE_CHECK_RESULT'"

Example Lua code to generate a Nagios alert:

inject_payload("nagios-external-command", "PROCESS_SERVICE_CHECK_RESULT", "OK:Alerts are working!")

SmtpOutput

New in version 0.5.

Outputs a Heka message in an email. The message subject is the plugin name and the message body is generated by the configured encoder. The primary purpose is for email alert notifications, e.g., to PagerDuty.

Config:

  • send_from (string)

    The email address of the sender. (default: “heka@localhost.localdomain”)

  • send_to (array of strings)

    An array of email addresses to which the output will be sent.

  • subject (string)

    Custom subject line of email. (default: “Heka [SmtpOutput]”)

  • host (string)

    SMTP host to send the email to (default: “127.0.0.1:25”)

  • auth (string)

    SMTP authentication type: “none”, “Plain”, “CRAMMD5” (default: “none”)

  • user (string, optional)

    SMTP user name

  • password (string, optional)

    SMTP user password

Example:

[FxaAlert]
type = "SmtpOutput"
message_matcher = "((Type == 'heka.sandbox-output' && Fields[payload_type] == 'alert') || Type == 'heka.sandbox-terminated') && Logger =~ /^Fxa/"
send_from = "heka@example.com"
send_to = ["alert@example.com"]
auth = "Plain"
user = "test"
password = "testpw"
host = "localhost:25"
encoder = "AlertEncoder"

TcpOutput

Output plugin that delivers Heka message data to a listening TCP connection. Can be used to deliver messages from a local running Heka agent to a remote Heka instance set up as an aggregator and/or router, or to any other arbitrary listening TCP server that knows how to process the encoded data.

Config:

  • address (string):

    An IP address:port to which we will send our output data.

  • use_tls (bool, optional):

    Specifies whether or not SSL/TLS encryption should be used for the TCP connections. Defaults to false.

New in version 0.5.

  • tls (TlsConfig, optional):

    A sub-section that specifies the settings to be used for any SSL/TLS encryption. This will only have any impact if use_tls is set to true. See Configuring TLS.

  • ticker_interval (uint, optional):

    Specifies how often, in seconds, the output queue files are rolled. Defaults to 300.

New in version 0.6.

  • local_address (string, optional):

    A local IP address to use as the source address for outgoing traffic to this destination. Cannot currently be combined with TLS connections.

  • encoder (string, optional):

    Specifies which of the registered encoders should be used for converting Heka messages to binary data that is sent out over the TCP connection. Defaults to the always available “ProtobufEncoder”.

  • use_framing (bool, optional):

    Specifies whether or not the encoded data sent out over the TCP connection should be delimited by Heka’s Stream Framing. Defaults to true if a ProtobufEncoder is used, false otherwise.

  • keep_alive (bool):

    Specifies whether or not TCP keepalive should be used for established TCP connections. Defaults to false.

  • keep_alive_period (int):

    Time duration in seconds that a TCP connection will be maintained before keepalive probes start being sent. Defaults to 7200 (i.e. 2 hours).

Example:

[aggregator_output]
type = "TcpOutput"
address = "heka-aggregator.mydomain.com:55"
local_address = "127.0.0.1"
message_matcher = "Type != 'logfile' && Type != 'heka.counter-output' && Type != 'heka.all-report'"

UdpOutput

New in version 0.7.

Output plugin that delivers Heka message data to a specified UDP or Unix datagram socket location.

Config:

  • net (string, optional):

    Network type to use for communication. Must be one of “udp”, “udp4”, “udp6”, or “unixgram”. The “unixgram” option is only available on systems that support Unix datagram sockets. Defaults to “udp”.

  • address (string):

    Address to which we will be sending the data. Must be IP:port for net types of “udp”, “udp4”, or “udp6”. Must be a path to a Unix datagram socket file for net type “unixgram”.

  • local_address (string, optional):

    Local address to use on the datagram packets being generated. Must be IP:port for net types of “udp”, “udp4”, or “udp6”. Must be a path to a Unix datagram socket file for net type “unixgram”.

  • encoder (string):

    Name of the registered encoder plugin that will extract and/or serialize data from the Heka message.

Example:

[PayloadEncoder]

[UdpOutput]
address = "myserver.example.com:34567"
encoder = "PayloadEncoder"

WhisperOutput

WhisperOutput plugins parse the “statmetric” messages generated by a StatAccumulator and write the extracted counter, timer, and gauge data out to a graphite compatible whisper database file tree structure.

Config:

  • base_path (string):

    Path to the base directory where the whisper file tree will be written. Absolute paths will be honored, relative paths will be calculated relative to the Heka base directory. Defaults to “whisper” (i.e. “$(BASE_DIR)/whisper”).

  • default_agg_method (int):

    Default aggregation method to use for each whisper output file. Supports the following values:

    0. Unknown aggregation method.
    1. Aggregate using averaging. (default)
    2. Aggregate using summation.
    3. Aggregate using last received value.
    4. Aggregate using maximum value.
    5. Aggregate using minimum value.
  • default_archive_info ([][]int):

    Default specification for new whisper db archives. Should be a sequence of 3-tuples, where each tuple describes a time interval’s storage policy: [<offset> <# of secs per datapoint> <# of datapoints>] (see whisper docs for more info). Defaults to:

    [ [0, 60, 1440], [0, 900, 8], [0, 3600, 168], [0, 43200, 1456]]
    

    The above defines four archive sections. The first uses 60 seconds for each of 1440 data points, which equals one day of retention. The second uses 15 minutes for each of 8 data points, for two hours of retention. The third uses one hour for each of 168 data points, or 7 days of retention. Finally, the fourth uses 12 hours for each of 1456 data points, representing two years of data.

  • folder_perm (string):

    Permission mask to be applied to folders created in the whisper database file tree. Must be a string representation of an octal integer. Defaults to “700”.

Example:

[WhisperOutput]
message_matcher = "Type == 'heka.statmetric'"
default_agg_method = 3
default_archive_info = [ [0, 30, 1440], [0, 900, 192], [0, 3600, 168], [0, 43200, 1456] ]
folder_perm = "755"