Odd logs, maybe potential bugs, in 3.7.0; trying to wrap my head around them, didn't see them on 2.8.x #13197
Thanks for your report; we have also already noticed it and are investigating.
We can easily reproduce it with a common case: an HTTP request sent to the HTTPS port triggers a 400 during the TLS handshake. That path does not initialize any nginx variable, and we cannot get the …
@oowl I think my case is similar to the HTTP call to an HTTPS endpoint, but mine was essentially a load balancer calling its health check endpoint on a port expecting client mTLS. The load balancer could not serve a client certificate, so nginx rejects the call early for no client cert passed. It's a similar early nginx termination of the transaction for not meeting TLS requirements, much like your HTTP protocol mismatch.
Will this be backported to 3.7.0 or end up in a 3.7.1? For now I will drop in some patch files with your suggested fixes and try it out. In looking at the fix I notice you opted to just move the …

@chronolaw I also saw a different error with respect to my original post, the …
It is likely related similarly, due to short-circuited transactions, but it prints with the globalpatches.lua file line.
So @chronolaw @oowl I dropped in the nginx template change to move the no-warning directive for the uninitialized vars, plus patched the reports.lua file, and that did fix one of the log issues I saw. Still getting this log though on both my server blocks:
Studying the code in globalpatches.lua, it seems Kong OOTB now limits request and response headers to 100 elements each? That's about on par with most load balancer defaults I have seen, from a safety standpoint. BUT the kicker here is I am fairly confident there were fewer than 100 request headers on that ping endpoint health check call... so is something about transactions cut short by nginx causing this behavior in logging plugins too? Does globalpatches just execute with every plugin? It's odd to me that the http-log plugin isn't printing this on those transactions, only my kong-kafka-log plugin: https://github.com/Optum/kong-kafka-log. Another one of those behaviors I haven't observed on 2.8.x.
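For context, the behavior described above, a global cap on how many headers get_headers() returns plus a notice when it truncates, could be implemented with a wrapper roughly like the sketch below. This is a minimal illustration, not Kong's actual globalpatches.lua code; the 100-element default and the log wording are assumptions taken from this comment, not from the source.

```lua
-- Minimal sketch (not Kong's actual code): wrap ngx.req.get_headers so callers
-- get at most DEFAULT_MAX_HEADERS entries, logging a notice on truncation.
local DEFAULT_MAX_HEADERS = 100  -- assumed default, per the discussion above

local orig_get_headers = ngx.req.get_headers

ngx.req.get_headers = function(max_headers, raw)
  if max_headers == nil or max_headers > DEFAULT_MAX_HEADERS then
    max_headers = DEFAULT_MAX_HEADERS
  end

  -- ngx.req.get_headers returns "truncated" as a second value when the
  -- request carried more headers than the requested limit.
  local headers, err = orig_get_headers(max_headers, raw)
  if err == "truncated" then
    ngx.log(ngx.NOTICE, "request headers truncated to ", max_headers, " entries")
  end

  return headers, err
end
```

Note that with a wrapper like this, any caller that passes a small explicit limit (for example `get_headers(1)`) will also trip the notice whenever the request carries more headers than that limit, which matches what is worked out later in this thread.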
Darn, nope; I tried a new route defined without the request-termination plugin and it still threw that log, so that plugin has nothing to do with it:
Gonna just disable the kong-kafka-log plugin altogether and see if that stops the globalpatches behavior when it runs too. Edit: yep, with that plugin disabled, when I hit the /F5/status proxy health check I don't see the globalpatches header-truncate logging issue. Maybe I can patch globalpatches.lua to give me a print statement of how many headers it thinks it sees, and re-enable the kong-kafka-log plugin?
With the current fix, I think we will not backport to the old versions.
Yeah, it means we won't notice this error anymore, but from my perspective it's not a problem in Kong; even when we do see this error log, there is still nothing obvious we can do about it.
@chronolaw @oowl Would you like a separate GitHub issue for the other log issue I mentioned, where Kong's core code interacts badly with one of my OSS plugins when it's enabled? I finally added some debug statements to globalpatches.lua and confirmed there is some weird behavior up in the 3.7.0 Kong-verse here, hah. The code change to help debug was this:
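The original snippet did not survive the copy; the debug change was along these lines. This is a hypothetical reconstruction, not the exact patch, and the wrapper placement and log wording are assumptions.

```lua
-- Hypothetical debug patch (illustrative only, not the exact change):
-- log how many distinct request header names get_headers() actually sees
-- before any truncation kicks in.
local orig_get_headers = ngx.req.get_headers

ngx.req.get_headers = function(max_headers, raw)
  local all = orig_get_headers(0, raw)   -- 0 disables the limit, used here just for counting
  local count = 0
  for _ in pairs(all) do
    count = count + 1
  end
  ngx.log(ngx.DEBUG, "get_headers called with max_headers=",
          tostring(max_headers), ", distinct header names seen=", count)

  return orig_get_headers(max_headers, raw)
end
```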
Output of the above to help debug was this:
So I think the real issue is why the heck was … Edit: Ohhhh, is it because of this? https://github.com/Optum/kong-kafka-log/blob/master/src/basic.lua#L146 This logic used to work fine using ngx.req.get_headers(1)["optum-cid-ext"] on 2.8.x... Does globalpatches now override that default nginx header-grabbing behavior and cause the issue? Did Kong ever note this in its changelog somewhere? I think that's the issue: Kong now overrides that ngx method globally, which changed the method's behavior :/ ... Edit edit: per the docs it seems that to grab one header no argument is needed at all now; maybe it even changed at the OpenResty layer. In any case I don't want it logging "truncating headers" every time, so I can update the plugin. Mystery solved.
^ I will say I don't agree with the notice log for the header truncation Kong does with globalpatches there either, when I just want to grab a singular header value as an optimization. But I removed the (1) and that made the log go away, even though I suppose the lookup is more intensive after doing so.
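For anyone hitting the same thing, the plugin-side change is essentially the sketch below, reconstructed from the description above. The before/after lines follow what is described in this thread; the `kong.request.get_header` alternative is an extra suggestion from the Kong PDK, not something mentioned in the thread.

```lua
-- Before (2.8.x-era code): asking for at most 1 header trips the
-- "truncated" notice under 3.7.0's patched get_headers.
local cid = ngx.req.get_headers(1)["optum-cid-ext"]

-- After: drop the explicit limit so no truncation notice is logged ...
local cid = ngx.req.get_headers()["optum-cid-ext"]

-- ... or use the PDK accessor, which fetches a single header by name.
local cid = kong.request.get_header("optum-cid-ext")
```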
#13201) When Kong receives abnormal traffic, it triggers a 400 response without initializing any nginx variable, which in turn triggers a reports.lua error that is unnecessary. E.g., sending HTTP traffic to the HTTPS port makes nginx finalize the current request with a 400 response during the TLS handshake, so it never calls any OpenResty HTTP processing phase and never initializes any nginx variable. Fix #13197 https://konghq.atlassian.net/browse/FTI-6025
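The shape of the failure: when nginx finalizes the request during the TLS handshake, variables such as request_uri are never set, so any code that assumes they exist fails. A defensive pattern looks roughly like the sketch below; it assumes the reporting code reads ngx.var fields, and the helper name is hypothetical. It is not the actual reports.lua diff.

```lua
-- Sketch of a defensive read (assumption: the reporting code consumes
-- ngx.var values that are never set when nginx rejects the request with
-- a 400 during the TLS handshake).
local function collect_request_stats()
  local uri = ngx.var.request_uri
  if uri == nil then
    -- The request was rejected before any HTTP processing phase ran;
    -- there is nothing meaningful to report, so bail out quietly.
    return nil
  end

  return {
    uri    = uri,
    status = tonumber(ngx.var.status) or 0,
  }
end
```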
Is there an existing issue for this?
Kong version ($ kong version)
3.7.0
Current Behavior
Logs like this are popping up in 1 of my 2 dev environments:
Expected Behavior
No logs like this ideally.
I see upstream_uri present all around in the template and in my kong_mtls block, which is basically a dup of the typical kong server block other than the fact that it also requires mTLS client comms on ingress. My first thought was maybe when y'all implemented this bit
it only works for the kong server block and not for custom templates that essentially duplicate it twice? But it's in the http block, so I imagined it would apply anything Kong-specific to both server blocks... and my kong_mtls block is identical to my kong server block with respect to where upstream_uri shows as present, so I am not sure why it throws that.
The other line seems to come from Kong's globalpatches.lua file here, and I never recall that line coming up on Kong 2.8.x with respect to running this plugin:
https://github.com/Kong/kong/blob/3.7.0/kong/globalpatches.lua#L147
Is it that when a plugin sees a header that's too big, or too many headers, it throws that log statement? Is there some new limiter on header sizes being applied that's perhaps snagging me here? Relevant plugin: https://github.com/Optum/kong-kafka-log
Worth noting the /F5/status calls are just GET requests with no bodies, essentially very plain health check calls. That endpoint runs the request-termination plugin on the route resource.

Steps To Reproduce
Hard to say, including my nginx custom template here for speculation:
Anything else?
No response