Logstash Version: logstash-6.7.1-1.noarch (RPM)
If I put in ridiculous values for the authentication tokens on the S3 input and attempt to run Logstash, the entire pipeline fails. See the (truncated) logs below.
Apr 29 00:35:06 esx-devbox logstash: [2019-04-29T00:35:06,233][ERROR][logstash.pipeline ] Error registering plugin {:pipeline_id=>"main", :plugin=>"<LogStash::Inputs::S3 bucket=>\"asdfsadfsd\", access_key_id=>\"asdfasdfsdf\", backup_to_bucket=>\"sdfsdsdfdf\", codec=><LogStash::Codecs::CloudTrail id=>\"cloudtrail_a5a6ba3a-5237-4663-affc-35f57f5a9aaa\",
Apr 29 00:35:17 esx-devbox logstash: [2019-04-29T00:35:17,730][ERROR][logstash.pipeline ] Pipeline aborted due to error {:pipeline_id=>"main", :exception=>Aws::S3::Errors::Forbidden, :backtrace=>["/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/aws-sdk-core-2.11.236/lib/seahorse/client/plugins/raise_response_errors.rb:15:in"
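For reference, a minimal config sketch that reproduces this. The bucket and key values are the same junk placeholders visible in the log above; secret_access_key is not shown in the log, so the value here is an assumed placeholder (also intentionally invalid):

input {
  s3 {
    bucket            => "asdfsadfsd"
    access_key_id     => "asdfasdfsdf"
    secret_access_key => "asdfasdfsdf"   # placeholder, intentionally invalid
    backup_to_bucket  => "sdfsdsdfdf"
    codec             => cloudtrail
  }
}

With this in place, the Forbidden error raised while registering the S3 input takes down the whole "main" pipeline, not just this input.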
To me, it would make more sense to log these errors than to abort the entire pipeline. We have many other things running in our "main" pipeline and don't want a simple auth error on one input to bring the whole thing crashing down. What if AWS had a short outage?
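Roughly the shape of what I'm asking for, as a hypothetical Ruby sketch (this is not the plugin's actual code; validate_credentials! and @retry_interval are made-up names standing in for the S3 call that currently raises, and for a configurable backoff):

# Hypothetical sketch: catch the auth failure during registration,
# log it, and retry later instead of letting the exception propagate
# and abort the whole pipeline.
def register
  begin
    validate_credentials!   # stand-in for the S3 call that raises today
  rescue Aws::S3::Errors::Forbidden => e
    @logger.error("S3 input authentication failed; will retry",
                  :exception => e.class.name, :message => e.message)
    sleep(@retry_interval)
    retry
  end
end

The same treatment would also cover transient failures like a short AWS outage, since the input would keep retrying instead of killing the pipeline.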
Thanks for your time,
Nick