[pull] master from fluent:master #11
Merged
Conversation
Signed-off-by: lecaros <[email protected]>
Signed-off-by: lecaros <[email protected]>
Only extract the SID when it is used for the normal data cases. For string inserts, we need not replace it with the actual data, because replacing it with the actual user account's domain and name breaks the relationship between the corresponding string-interpolated message and the string inserts element. Signed-off-by: Hiroshi Hatake <[email protected]>
* in_splunk: splunk_prot: Fix string in http response code 400. Signed-off-by: lecaros <[email protected]> --------- Signed-off-by: lecaros <[email protected]>
In forwarded events, the associated publisher metadata does not exist, so we can permit the associated metadata to be NULL. Signed-off-by: Hiroshi Hatake <[email protected]>
Signed-off-by: Hiroshi Hatake <[email protected]>
…opes The following patch extends the processor to allow modifying the resources and scopes of logs generated by an OpenTelemetry source. The following new contexts are supported:

- otel_resource_attributes: alter resource attributes
- otel_scope_name: manipulate the scope name
- otel_scope_version: manipulate the scope version
- otel_scope_attributes: alter the scope attributes

example:

----- fluent-bit.yaml -----
pipeline:
  inputs:
    - name: opentelemetry
      port: ${FLUENT_BIT_TEST_LISTENER_PORT}
      processors:
        logs:
          - name: content_modifier
            context: otel_resource_attributes
            action: upsert
            key: "new_attr"
            value: "my_val"
          - name: content_modifier
            context: otel_resource_attributes
            action: delete
            key: "service.name"
          - name: content_modifier
            context: otel_scope_attributes
            action: upsert
            key: "my_new_scope_attr"
            value: "123"
          - name: content_modifier
            context: otel_scope_name
            action: upsert
            value: "new scope name"
          - name: content_modifier
            context: otel_scope_version
            action: upsert
            value: "3.1.0"
  outputs:
    - name: stdout
      match: '*'
    - name: opentelemetry
      match: '*'
      host: 127.0.0.1
      port: ${TEST_SUITE_HTTP_PORT}
----- end of file -----

Signed-off-by: Eduardo Silva <[email protected]>
Signed-off-by: Christian Menges <[email protected]>
Signed-off-by: Eduardo Silva <[email protected]>
Signed-off-by: Eduardo Silva <[email protected]>
Signed-off-by: Eduardo Silva <[email protected]>
…ource as OTel When collecting data from all plugins except in_opentelemetry, the records come with a basic timestamp, metadata and content; there are cases where this collected information needs to be sent to an OpenTelemetry endpoint (a vendor or another OTel-compatible endpoint), and packaging it with proper OTel Log Resources and Scopes simplifies the data transformation. This processor creates the internal group with the basic OTel structure. Note that this only creates the envelope; for further processing it can be used in conjunction with the content_modifier processor.

usage example:

----- fluent-bit.yaml -----
pipeline:
  inputs:
    - name: dummy
      samples: 1
      processors:
        logs:
          - name: opentelemetry_envelope
          - name: content_modifier
            context: otel_resource_attributes
            action: upsert
            key: "aaa"
            value: "bbb"
  outputs:
    - name: stdout
      match: '*'
    - name: opentelemetry
      match: '*'
      host: 127.0.0.1
      port: 4318
      logs_uri: /v1/logs
----- end of file -----

Signed-off-by: Eduardo Silva <[email protected]>
Signed-off-by: Tim Birkett <[email protected]>
Signed-off-by: Tim Birkett <[email protected]>
Potentially breaking change, as it now requires the RBAC used by Fluent Bit to have 'watch'. Uses a Kubernetes watch instead of HTTP API polling to stream Kubernetes events from the kube API server. Signed-off-by: ryanohnemus <[email protected]>
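A minimal sketch of the ClusterRole rules this change implies, assuming the Kubernetes events input runs under a dedicated service account; the resource name below is illustrative and not taken from the patch:

----- clusterrole.yaml -----
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: fluent-bit-events            # hypothetical name
rules:
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["get", "list", "watch"]  # 'watch' is the newly required verb
----- end of file -----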
Signed-off-by: ryanohnemus <[email protected]>
Signed-off-by: ryanohnemus <[email protected]>
This patch adds a third value to `drop_single_key` - `raw`, which allows sending unquoted strings to Loki when using JSON as the `line_format`. While quotes would be expected for the output to be valid JSON, Loki's JSON parser does not support reading a plain quoted string, complaining that it cannot find a `}` character; instead, you need to use a combination of regexp and line_format expressions to unquote the log before running any other parsers over it. Adding a third value of `raw` ensures backwards compatibility for anyone already relying on the existing behaviour. Signed-off-by: Andrew Titmuss <[email protected]>
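A minimal sketch of an out_loki section using the new value, written in the YAML configuration form used elsewhere in this changelog; host, port and labels are placeholders:

----- fluent-bit.yaml -----
pipeline:
  outputs:
    - name: loki
      match: '*'
      host: 127.0.0.1
      port: 3100
      line_format: json
      drop_single_key: raw       # new value; 'on'/'off' keep the previous behaviour
      labels: job=fluent-bit     # illustrative label
----- end of file -----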
Signed-off-by: Eduardo Silva <[email protected]>
Signed-off-by: Eduardo Silva <[email protected]>
Signed-off-by: Eduardo Silva <[email protected]>
…ezone Signed-off-by: Eduardo Silva <[email protected]>
…ezone Signed-off-by: Eduardo Silva <[email protected]>
…m timezone Signed-off-by: Eduardo Silva <[email protected]>
Signed-off-by: Eduardo Silva <[email protected]>
This PR adds support for setting a static hostname in the Datadog output plugin. This field is analogous to the existing `dd_service` and `dd_source` configuration options that can be used to set a static value. If unset, the default behavior is backwards compatible: no explicit `hostname` field is set, but if the record has a field that Datadog detects as the hostname (such as `host` or `syslog.hostname`), it will be picked up. Closes: #8971 Signed-off-by: Jesse Szwedko <[email protected]>
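A hedged sketch of how the static hostname could sit next to the existing options; the exact name of the new property is not stated in this excerpt, so dd_hostname below is an assumption based on the dd_service/dd_source naming, and the values are placeholders:

----- fluent-bit.yaml -----
pipeline:
  outputs:
    - name: datadog
      match: '*'
      apikey: ${DD_API_KEY}
      dd_service: my-service      # existing static-value option
      dd_source: fluent-bit       # existing static-value option
      dd_hostname: my-host-01     # assumed name of the new static hostname option
----- end of file -----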
* Add logging in cases of opentelemetry metric+trace decode fail * Return error val in case of opentelemetry metric payload decode failure * Update opentelemetry HTTP server to return deserialisation error message in non-ok handling cases --------- Signed-off-by: Stewart Webb <[email protected]>
Signed-off-by: Hiroshi Hatake <[email protected]>
Signed-off-by: Hiroshi Hatake <[email protected]>
Signed-off-by: Hiroshi Hatake <[email protected]>
Signed-off-by: Hiroshi Hatake <[email protected]>
--------- Signed-off-by: Meet <[email protected]>
…8778) --------- Signed-off-by: Marcus Hufvudsson <[email protected]>
Signed-off-by: Eduardo Silva <[email protected]>
Signed-off-by: Eduardo Silva <[email protected]>
* metrics: Allocate metrics' titles dynamically. However, a limit of 1024 characters is still imposed to prevent wasteful memory consumption for title names. Signed-off-by: Hiroshi Hatake <[email protected]>
* in_winevtlog: Handle formatting and not-mapped errors properly --------- Signed-off-by: Hiroshi Hatake <[email protected]>
Fixes #8927. This does **not** remove the ability to send raw events, i.e. using `Splunk_Send_Raw On`, but rather sends them to the correct endpoint. Signed-off-by: Philip Meier <[email protected]>
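For reference, a minimal out_splunk sketch with raw mode enabled; host, port and token are placeholders:

----- fluent-bit.yaml -----
pipeline:
  outputs:
    - name: splunk
      match: '*'
      host: splunk.example.com
      port: 8088
      splunk_token: ${SPLUNK_HEC_TOKEN}
      splunk_send_raw: on    # raw events remain supported; they are now routed to the raw HEC endpoint
----- end of file -----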
When a client uses the HTTP/1.1 protocol version, we only set the content-length header in special cases; however, if the caller, such as in_splunk, in_http or another plugin, sets a response body, the header is not set, so the client has to assume the response of the request is either empty, or it just hangs and waits for some bytes. This patch forces a content-length header to be formatted when the request comes from an HTTP/1.1 session. This fixes issue #9010. Signed-off-by: Eduardo Silva <[email protected]>
Signed-off-by: Eduardo Silva <[email protected]>
This patch adds extra checks per protocol version and user configuration, based on the net_setup flag passed, so it can honor when to keep the connection persistent and when it must be closed. This fixes the issues associated with different setups for:

- http2: off
- http2: on
- net.keepalive: on
- net.keepalive: off

This PR is a continuation of the work that started while troubleshooting #9010. Signed-off-by: Eduardo Silva <[email protected]>
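A hedged sketch of one of the affected setups, assuming the properties are toggled on an HTTP-based listener input since the commit describes server-side behaviour; the port is illustrative and the exact placement of these properties depends on the plugin:

----- fluent-bit.yaml -----
pipeline:
  inputs:
    - name: http
      port: 9880
      http2: off            # also exercised with http2: on
      net.keepalive: on     # also exercised with net.keepalive: off
  outputs:
    - name: stdout
      match: '*'
----- end of file -----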
Signed-off-by: Eduardo Silva <[email protected]>
See Commits and Changes for more details.
Created by pull[bot]