It might be caused by the headers: each distinct header name becomes a new mapping field. Converting the headers into a single string field before indexing would resolve this issue:
{"error":{"root_cause":[{"type":"remote_transport_exception","reason":"[node-1][10.132.22.96:9300][indices:data/write/update[s]]"}],"type":"illegal_argument_exception","reason":"Limit of total fields [1000] in index [ache-data] has been exceeded"}}
But please make sure it doesn't change the Kafka format.
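The flattening suggested above could be sketched as follows. This is a minimal illustration, not ACHE's actual serialization code; the document shape and field names (`url`, `headers`) are assumptions for the example:

```python
import json

def flatten_headers(doc):
    """Replace the nested 'headers' object with a single JSON string,
    so Elasticsearch maps one 'headers' field instead of one field
    per distinct header name (hypothetical document shape)."""
    if isinstance(doc.get("headers"), dict):
        doc["headers"] = json.dumps(doc["headers"], sort_keys=True)
    return doc

doc = {
    "url": "http://example.com",
    "headers": {"Content-Type": "text/html", "X-Custom": "1"},
}
flat = flatten_headers(doc)
```

With this change, crawling pages with unusual header names no longer grows the index mapping, at the cost of having to parse the string back into a map when the individual headers are needed.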
I think you mean to not change the ELASTIC format in KAFKA, and if so, that is not possible. I can't change one without changing the other since they are serialized using the same code.
We will need to fix this and change the format before the next release. For now, increasing the Elasticsearch `index.mapping.total_fields.limit` setting might work as a temporary workaround:
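For example, the limit could be raised on the affected index with the update index settings API. This assumes Elasticsearch is reachable at `localhost:9200`; the new limit of 2000 is an arbitrary example value:

```shell
# Raise the total-fields limit on the ache-data index (example value: 2000)
curl -X PUT "localhost:9200/ache-data/_settings" \
  -H 'Content-Type: application/json' \
  -d '{"index.mapping.total_fields.limit": 2000}'
```

Note this only postpones the problem: if the crawl keeps encountering new header names, the mapping will eventually hit the new limit as well.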