
[pull] master from fluent:master #8

Merged
merged 21 commits into from
Jun 8, 2024

Conversation

pull[bot]

@pull pull bot commented Jun 7, 2024

See Commits and Changes for more details.


Created by pull[bot]

Can you help keep this open source service alive? 💖 Please sponsor : )

Javex and others added 8 commits June 5, 2024 20:06
With the release of GCC 14.1, some previous warnings are now errors,
including the SSL_select_next_proto call in
tls_context_server_alpn_select_callback. To fix the build, add an
explicit type cast.

Signed-off-by: javex <[email protected]>
With the release of GCC 14.1, some previous warnings are now errors,
including the check_event_is_filtered call in
process_events. To fix the build, add an explicit type cast.

Signed-off-by: javex <[email protected]>
* dockerfile: update to bookworm base image

Signed-off-by: Patrick Stephens <[email protected]>

* dockerfile: switch to libssl3

Signed-off-by: Patrick Stephens <[email protected]>

* dockerfile: switch to libffi8

Signed-off-by: Patrick Stephens <[email protected]>

* dockerfile: add libcap2

Signed-off-by: Patrick Stephens <[email protected]>

* dockerfile: switch to libldap-2.5

Signed-off-by: Patrick Stephens <[email protected]>

---------

Signed-off-by: Patrick Stephens <[email protected]>
$ cmake -GNinja -B build/ && cmake --build build/

Results in this error:

    ninja: error: build.ninja:158: bad $-escape (literal $ must be written as $$)

Replacing the $(MAKE) command with make gives us this new error:

    ninja: error: 'backtrace-prefix/lib/libbacktrace.a', needed by 'bin/fluent-bit', missing and no known rule to make it

So fix that by properly defining the BUILD_BYPRODUCTS.
(Also see https://cmake.org/cmake/help/latest/module/ExternalProject.html#build-step-options)

Signed-off-by: Thomas Devoogdt <[email protected]>
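The fix described above can be sketched roughly as follows. This is not the exact fluent-bit CMakeLists: the project name, source URL, and layout are illustrative placeholders inferred from the error message; only the BUILD_COMMAND and BUILD_BYPRODUCTS options reflect the actual change.

```cmake
include(ExternalProject)

# Sketch only: declaring the archive as a byproduct tells Ninja which
# rule produces 'backtrace-prefix/lib/libbacktrace.a', so it no longer
# reports "missing and no known rule to make it".
ExternalProject_Add(backtrace
  URL https://example.com/libbacktrace.tar.gz  # placeholder source
  BUILD_COMMAND make                           # plain 'make', not $(MAKE)
  BUILD_BYPRODUCTS <INSTALL_DIR>/lib/libbacktrace.a
)
```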
cosmo0920 and others added 13 commits June 6, 2024 20:15
Signed-off-by: Eduardo Silva <[email protected]>
As of today, log records can support metadata content in addition to the record
content itself; however, for some cases this is not enough if we want to share
metadata across a group of records.

Today we serialize log records in the following pseudo-schema:

 [ [TIMESTAMP, { METADATA }], { RECORD_CONTENT}]

A common chunk contains records in the following way:

 [ [1717104810, {}], {"key": "some value", "number": 12} ]
 [ [1717104811, {"color": "blue" }], {"key": "some value", "number": 13} ]
 [ [1717104812, {"color": "red"  }], {"key": "some value", "number": 14} ]
 [ [1717104813, {"color": "green"}], {"key": "some value", "number": 15} ]
 [ [1717104814, {}], {"key": "some value", "number": 16} ]

In this patch, we are introducing the concept of `groups`, which are implemented
through a new type of record that marks the beginning and the end of the group. To
avoid breaking changes and preserve overall compatibility, we use the TIMESTAMP
field to mark those as special records.

 - start of a group:  [ [-1, { METADATA }], { RECORD_CONTENT } ]
 - end of a group  :  [ [-2, { METADATA }], { RECORD_CONTENT } ]

Here is an example where the middle records are grouped together:

 [ [1717104810, {}], {"key": "some value", "number": 12} ]
 [ [        -1, {"type": "colors"}], {"numbers": true}]
 [ [1717104811, {"color": "blue" }], {"key": "some value", "number": 13} ]
 [ [1717104812, {"color": "red"  }], {"key": "some value", "number": 14} ]
 [ [1717104813, {"color": "green"}], {"key": "some value", "number": 15} ]
 [ [        -2, {}], {}]
 [ [1717104814, {}], {"key": "some value", "number": 16} ]

Iterating the records does not introduce any problems; however, it is up to the
plugins (inputs, processors, filters and outputs) to interpret the group fields
or simply skip them. Note: in the next patch, the decoder offers a new API to
skip group definitions.

API usage
=========

To manipulate groups, the workflow is as follows:

 1. A group gets initialized (the header is opened)

 2. the user/dev can optionally add content in the METADATA or RECORD_CONTENT fields.

 3. the group header is finalized. Close the header, the group is still open.

 4. Append normal records.

 5. Group is finalized.

These are the new functions available for group handling:

 int flb_log_event_encoder_group_init(struct flb_log_event_encoder *context);
 int flb_log_event_encoder_group_header_end(struct flb_log_event_encoder *context);
 int flb_log_event_encoder_group_end(struct flb_log_event_encoder *context);
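As a rough illustration of the five-step workflow above, here is a self-contained toy model that tracks only the timestamp column of each record. The demo_* names are invented for this sketch; the real flb_log_event_encoder API serializes msgpack buffers and is not reproduced here.

```c
#include <stdint.h>
#include <stddef.h>

/* Toy model of the group workflow; only the timestamp column is tracked. */
struct demo_encoder {
    int64_t timestamps[64];
    size_t  count;
};

/* Step 1: open the group header (sentinel timestamp -1). */
static void demo_group_init(struct demo_encoder *enc)
{
    enc->timestamps[enc->count++] = -1;
}

/* Step 3: close the header; the group itself remains open.
 * In this toy model there is nothing to flush, so it is a no-op. */
static void demo_group_header_end(struct demo_encoder *enc)
{
    (void) enc;
}

/* Step 4: append a normal record with a real timestamp. */
static void demo_append_record(struct demo_encoder *enc, int64_t ts)
{
    enc->timestamps[enc->count++] = ts;
}

/* Step 5: finalize the group (sentinel timestamp -2). */
static void demo_group_end(struct demo_encoder *enc)
{
    enc->timestamps[enc->count++] = -2;
}
```

The resulting timestamp sequence (-1, normal records, -2) matches the grouped example shown earlier in this commit message.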

Signed-off-by: Eduardo Silva <[email protected]>
There are cases where the parser might interpret the incoming string buffer
differently (e.g. $); passing a fresh copy of the pattern works around
the problem.

There are no performance penalties, since this happens on record accessor
context creation, not when doing the lookups.

Note: the other workaround would be to tweak the parser, which is more work.

Signed-off-by: Eduardo Silva <[email protected]>
In a recent patch, we introduced the concept of groups. This patch implements
the following changes to the log event decoder:

- decoder now has a flag to read or skip group definitions (default: on)
- new API to retrieve the type of the record being read

new functions available:

 int flb_log_event_decoder_read_groups(struct flb_log_event_decoder *context,
                                       int read_groups)

 int flb_log_event_decoder_get_record_type(struct flb_log_event *event,
                                           int32_t *type)

record types supported:

 - FLB_LOG_EVENT_NORMAL
 - FLB_LOG_EVENT_GROUP_START
 - FLB_LOG_EVENT_GROUP_END

Signed-off-by: Eduardo Silva <[email protected]>
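To illustrate the read/skip behaviour described above, here is a self-contained sketch. The demo_* names are invented for this example, and the record types are derived from the sentinel timestamps (-1 for group start, -2 for group end) described in this series, not from the real decoder internals.

```c
#include <stdint.h>
#include <stddef.h>

/* Local stand-ins for the record type constants. */
#define LOG_EVENT_NORMAL       0
#define LOG_EVENT_GROUP_START  1
#define LOG_EVENT_GROUP_END    2

struct demo_decoder {
    const int64_t *timestamps;   /* flattened record timestamps */
    size_t         count;
    size_t         pos;
    int            read_groups;  /* mirrors the decoder's read-groups flag */
};

/* Return the type of the next record, or -1 when input is exhausted.
 * When read_groups is off, group markers are skipped transparently. */
static int demo_decoder_next(struct demo_decoder *dec, int64_t *ts)
{
    while (dec->pos < dec->count) {
        int64_t t = dec->timestamps[dec->pos++];
        int type = (t == -1) ? LOG_EVENT_GROUP_START :
                   (t == -2) ? LOG_EVENT_GROUP_END   : LOG_EVENT_NORMAL;

        if (!dec->read_groups && type != LOG_EVENT_NORMAL) {
            continue;  /* skip group definitions */
        }
        *ts = t;
        return type;
    }
    return -1;
}
```

With read_groups enabled (the default), the caller sees the group markers and can inspect their metadata; with it disabled, only normal records are returned.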
The following patch enhances log handling by using the new Fluent Bit
groups support for log events. This provides a smooth communication and
translation layer between in_opentelemetry and out_opentelemetry.

This patch also adds a new configuration property called `logs_metadata_key` which
defines which metadata key holds specific OTLP data per record.

The changes are supported for gRPC payloads; upcoming patches will add
support for JSON payloads.

Signed-off-by: Eduardo Silva <[email protected]>
…port

The following patch introduces a big change in how the plugin processes logs
in OTLP format. The high-level features are:

1. Full metadata support:
  - with the new changes in in_opentelemetry, there is no metadata loss

2. Proper encoding of non-OTLP records as OTLP
  - The plugin is flexible enough to encode any type of log record.

At a low level, we re-architect how resource logs, resource, span_logs, scope and
logs are handled. All of this has been implemented on top of the new Fluent Bit
groups support for logs.

Use cases tested:

 - OTel Collector (gRPC) --> Fluent Bit --> OTel Collector (gRPC)
 - dummy, tail, forward  --> Fluent Bit --> OTel Collector (gRPC)

Signed-off-by: Eduardo Silva <[email protected]>
@ThomasDevoogdt ThomasDevoogdt merged commit b8393bd into ThomasDevoogdt:master Jun 8, 2024

8 participants