Add support to log directly into Elasticsearch #7
Is that really a pattern we want to promote? While I see the convenience of being able to skip Filebeat, it has the downsides of:
Another thing is that we'd be missing the metadata Filebeat appends to the logs, for example the host, Docker, and Kubernetes metadata, as well as the index template provided by Filebeat. But I don't see the two libraries as competing. It seems like
That was exactly what I wanted: to combine the layout this library provides with the Elasticsearch appender. For additional safety, users can add further appenders, such as a rolling log file, but there is value in offering them the choice to bypass Filebeat and go to ES directly. What is needed to make it work is making sure the appender can receive the fields from the layout definition to push the data into ES. Hope it makes sense.
In theory, that should not require any additional effort. But I haven't tested the two in combination yet.
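The rolling-file fallback suggested above could look roughly like this in a log4j2 configuration. This is only a sketch, not a config from this thread: the appender name, file paths, and rollover policy are illustrative assumptions, with `EcsLayout` being the layout this library provides.

```xml
<!-- Sketch: EcsLayout feeding a standard RollingFile appender as a safety net.
     File names and the rollover policy are illustrative assumptions. -->
<Configuration>
  <Appenders>
    <RollingFile name="EcsFile"
                 fileName="logs/app.json"
                 filePattern="logs/app-%d{yyyy-MM-dd}.json">
      <EcsLayout serviceName="my-app" />
      <Policies>
        <TimeBasedTriggeringPolicy />
      </Policies>
    </RollingFile>
  </Appenders>
  <Loggers>
    <Root level="info">
      <AppenderRef ref="EcsFile" />
    </Root>
  </Loggers>
</Configuration>
```

A file written this way could still be picked up by Filebeat later, so the direct-to-ES path and the file path are not mutually exclusive.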
I've summarized some of the advantages in #8
Tried it and it didn't work. I don't think I fully grasp how appenders and layouts interact. This is the config I tried:
My dependencies:
The error I am getting:
That's an exception related to

I have not tried it out, but the configuration would look something like this:

```xml
<Elasticsearch name="elasticsearchAsyncBatch">
  <EcsLayout serviceName="my-app" />
  <IndexName indexName="log4j2" />
  <AsyncBatchDelivery>
    <IndexTemplate name="log4j2" path="classpath:indexTemplate.json" />
    <JestHttp serverUris="http://localhost:9200" />
  </AsyncBatchDelivery>
</Elasticsearch>
```

Each appender needs a layout, which formats a log event into a String. The appender is only responsible for, well, appending that string to a specific output, like a file or Elasticsearch.
We currently use logback for logging. As our applications are running on HP-UX, we cannot use Filebeat to read the logfiles, and we cannot use Logstash either (JNI native library). So we are currently between a rock and a hard place... The only way currently available would be to use the SyslogAppender, with Logstash on the Elastic Stack server acting as the syslog server. With this issue solved, it would be a lot easier for us...
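For reference, the syslog workaround described above might look roughly like this in logback, with a Logstash syslog input listening on the other end. This is a sketch: the host, port, facility, and pattern values are placeholder assumptions, not from this thread.

```xml
<!-- Sketch: logback SyslogAppender forwarding to a Logstash syslog input.
     syslogHost, port, and facility are illustrative assumptions. -->
<configuration>
  <appender name="SYSLOG" class="ch.qos.logback.classic.net.SyslogAppender">
    <syslogHost>logstash.example.com</syslogHost>
    <port>5514</port>
    <facility>USER</facility>
    <suffixPattern>%logger %msg</suffixPattern>
  </appender>
  <root level="INFO">
    <appender-ref ref="SYSLOG" />
  </root>
</configuration>
```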
The following is working for me.
However, removal of
Having this line attached issues an error:
As @felixbarny said -
@michaelhyatt @wolframhaussig If you'd like to use this layout, you can use it with
@Marx2 I've just tested
@xeraa Addressing some of your concerns:
FYI: we're planning to add support in the Elastic APM Java agent to automatically ship the application logs: elastic/apm#252 TL;DR: the agent would create a "shadow log file" for each of the application's log files. That file will then be tailed by the agent and sent to the APM Server, which does some processing, like adding metadata to the log events, and sends them on to Elasticsearch.
@felixbarny Thank you so much for your work! I know this might take a while but if you need a tester just send me a note and I would be happy to give it a try.
Closing this, as there are currently no plans to provide appenders that write to Elasticsearch. But there are the alternatives of using it with 3rd-party appenders, and there are plans to let APM agents ship the logs.
It should be possible to use appenders to log directly into Elasticsearch, without even going through Filebeat, using something like this one:
https://github.com/rfoltyns/log4j2-elasticsearch
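Wiring in that appender would also require the corresponding dependency on the classpath. As a sketch, the Maven coordinates below are my assumption based on that project's published artifacts; verify the exact group ID, artifact ID, and a current version against the log4j2-elasticsearch README before using them.

```xml
<!-- Sketch: coordinates are an assumption; confirm them in the
     log4j2-elasticsearch README (the Jest-based HTTP module). -->
<dependency>
  <groupId>org.appenders.log4j</groupId>
  <artifactId>log4j2-elasticsearch-jest</artifactId>
  <version><!-- pick the latest release --></version>
</dependency>
```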