Configuring Fluent Bit
Datable supports ingest via JSON HTTP POST, a format supported by Fluent Bit and other log forwarders. See the Fluent Bit Getting Started documentation for options and resources.
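Before wiring up Fluent Bit, you can sanity-check connectivity by POSTing a single JSON log by hand. This is a sketch, not a documented Datable workflow: the host is a placeholder, and it assumes the endpoint accepts a single JSON object per request.

```shell
# Placeholder host: replace YOUR_DATABLE_HOST with your Datable hostname.
# Assumes the fluent endpoint accepts one JSON object per POST.
curl -X POST "https://YOUR_DATABLE_HOST.dtbl.io:5201/v1/logs/fluent" \
  -H "Content-Type: application/json" \
  --data '{"timestamp":"2024-05-01T12:00:00Z","log":"hello from curl"}'
```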
Configuring Fluent Bit to send Fluent format
Add the following snippet to your fluent-bit.conf file to match all log data flowing through Fluent Bit and send it to Datable without impacting any existing configurations:
```
[OUTPUT]
    Name             http
    Match            *
    URI              /v1/logs/fluent
    Host             YOUR_DATABLE_HOST.dtbl.io
    Port             5201
    tls              On
    Format           json
    Json_date_key    timestamp
    Json_date_format iso8601
    compress         gzip
```
Fluent Bit uses a `log` key to store the message payload. Datable assumes a `log` key and a `timestamp` key, where `log` can be a string or a complex JSON object.
| Fluent name | Datable name | Accepted values |
|---|---|---|
| `log` | `body` | string, json, null |
| `timestamp` | `timestamp` | ISO8601, null |
| any key (e.g. `foo`) | `attributes['foo']` | string, json, null |
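As an illustrative sketch of the mapping above (not Datable's actual implementation; the function name and record shapes are hypothetical):

```python
import json


def map_fluent_record(record):
    """Illustrative mapping of a Fluent Bit record to Datable field names.

    Per the table above: 'log' becomes 'body', 'timestamp' stays
    'timestamp', and every other key lands under attributes[key].
    """
    return {
        "body": record.get("log"),             # string, JSON object, or null
        "timestamp": record.get("timestamp"),  # ISO8601 string or null
        "attributes": {
            k: v for k, v in record.items() if k not in ("log", "timestamp")
        },
    }


record = {
    "log": {"level": "info", "msg": "started"},
    "timestamp": "2024-05-01T12:00:00Z",
    "foo": "bar",
}
print(json.dumps(map_fluent_record(record)))
```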
Full example
This example tails a file and sends logs to Datable every second.
```
[SERVICE]
    log_level info
    flush     1

[INPUT]
    Name             tail
    Path             /example/path/to/log/file.log
    Refresh_Interval 5
    Rotate_Wait      5
    Mem_Buf_Limit    5MB
    Skip_Long_Lines  On

[OUTPUT]
    Name             http
    Match            *
    URI              /v1/logs/fluent
    Host             YOUR_DATABLE_HOST.dtbl.io
    Port             5201
    tls              On
    Format           json
    Json_date_key    timestamp
    Json_date_format iso8601
    compress         gzip
```
Configuring Fluent Bit to send JSON format
The `/v1/logs/json` endpoint is set up to receive preprocessed JSON logs. If you already have structured logs and are using Fluent Bit to turn flattened stdout logs into JSON (e.g. AWS ECS Fargate sidecar logs), you'll have a parser configured like this:
```
[PARSER]
    Name            docker
    Format          json
    Time_Key        timestamp
    Time_Format     %Y-%m-%dT%H:%M:%S.%L
    Time_Keep       On
    Decode_Field_As json log
```
In this case, we assume a `timestamp` key and any number of other top-level keys:
| Fluent name | Datable name | Accepted values |
|---|---|---|
| `body` | `body` | string, json, null |
| `message` | `body` | string, json, null |
| `timestamp` | `timestamp` | ISO8601, null |
| any key (e.g. `foo`) | `attributes['foo']` | string, json, null |
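A sketch of this mapping in Python (hypothetical helper, not Datable's code). Note one assumption: the source doesn't say which wins when both `body` and `message` are present, so this sketch prefers `body`:

```python
def map_json_record(record):
    """Illustrative mapping for the /v1/logs/json endpoint.

    ASSUMPTION: 'body' is preferred over 'message' when both are present;
    the precedence is not specified by the source documentation.
    """
    known = ("body", "message", "timestamp")
    return {
        "body": record.get("body", record.get("message")),
        "timestamp": record.get("timestamp"),
        "attributes": {k: v for k, v in record.items() if k not in known},
    }


print(map_json_record({
    "message": "request handled",
    "timestamp": "2024-05-01T12:00:00Z",
    "service": "api",
}))
```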
Use the `/v1/logs/json` endpoint as shown below:
```
[OUTPUT]
    Name             http
    Match            *
    URI              /v1/logs/json
    Host             YOUR_DATABLE_HOST.dtbl.io
    Port             5201
    tls              On
    Format           json
    Json_date_key    timestamp
    Json_date_format iso8601
    compress         gzip
```
After configuration
After changing fluent-bit.conf, restart Fluent Bit. Logs should appear on the Datable Telemetry page within seconds. Logs generated before this configuration change have already been processed by Fluent Bit and will not be sent to Datable.
If you're experiencing problems, explore the Troubleshooting, Configuration, and Running a Logging Pipeline Locally sections of the Fluent Bit website. Start by setting a Standard Out (stdout) output to confirm that records are flowing.
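For local debugging, a minimal stdout output that matches everything will print each record Fluent Bit processes to the console, so you can confirm records are flowing before they reach Datable:

```
[OUTPUT]
    Name   stdout
    Match  *
    Format json_lines
```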