Add logs-apm-* and metric-apm-* read permissions for kibana_system role #101467
Conversation
Pinging @elastic/es-security (Team:Security)
```diff
@@ -127,7 +127,9 @@ static RoleDescriptor kibanaSystem(String name) {
     // APM telemetry queries APM indices in kibana task runner
     RoleDescriptor.IndicesPrivileges.builder().indices("apm-*").privileges("read", "read_cross_cluster").build(),
     RoleDescriptor.IndicesPrivileges.builder().indices("logs-apm.*").privileges("read", "read_cross_cluster").build(),
+    RoleDescriptor.IndicesPrivileges.builder().indices("logs-apm-*").privileges("read", "read_cross_cluster").build(),
```
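The hunk header (`-127,7 +127,9`) indicates two added lines, but only the `logs-apm-*` entry is visible in this excerpt. Given the PR title, the second addition presumably mirrors it for metrics; a sketch, assuming the same builder pattern:

```java
// Presumed companion entry (not visible in the excerpt above); inferred from
// the PR title, which also names metric-apm-* read permissions.
RoleDescriptor.IndicesPrivileges.builder().indices("metrics-apm-*").privileges("read", "read_cross_cluster").build(),
```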
Regarding `logs-apm.*` vs `logs-apm-*` and `metrics-apm.*` vs `metrics-apm-*`: do we need both? What is the difference between indices that have a `.` vs a `-`? And would `logs-apm*` and `metrics-apm*` work, or is this too broad?
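For illustration (not part of the review thread): a minimal, self-contained sketch of how the three candidate patterns behave on representative names. `logs-apm.error-default` is a standard APM data stream; the other two names are hypothetical, chosen to show the distinction.

```java
import java.util.List;

public class PatternDemo {
    // Emulates the simple single-wildcard matching used for index patterns:
    // '*' matches any run of characters, everything else is literal.
    static boolean globMatch(String pattern, String name) {
        String regex = java.util.regex.Pattern.quote(pattern).replace("*", "\\E.*\\Q");
        return name.matches(regex);
    }

    public static void main(String[] args) {
        List<String> names = List.of(
            "logs-apm.error-default",   // standard APM data stream ('.' dataset suffix)
            "logs-apm-custom-default",  // hypothetical user-configured name with '-'
            "logs-apm2-default"         // hypothetical unrelated name
        );
        for (String pattern : List.of("logs-apm.*", "logs-apm-*", "logs-apm*")) {
            for (String name : names) {
                System.out.printf("%-11s matches %-24s -> %b%n", pattern, name, globMatch(pattern, name));
            }
        }
        // Output shows the narrow patterns each match only one name, while the
        // bare "logs-apm*" matches all three, including the unrelated one.
    }
}
```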
It looks like we at least need to provide permissions to `metrics-apm-*` (see elastic/kibana#170031 (comment)).

@elastic/apm-server Perhaps you can chime in here. Is it reasonable to grant access to `metrics-apm-*` in addition to `metrics-apm.*`?
Since @sqren found out that these permissions are indeed required, and considering that these indices are configurable by the user, I would recommend giving the `kibana_system` role access to `metrics-apm*`, `traces-apm*`, and `logs-apm*`, as suggested earlier by @jeramysoucy.
I can modify the PR if that's something you would be fine with.
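For reference, a sketch of what that broader grant would look like, in the same builder style as the diff above (hypothetical; this variant was never merged, as the rest of the thread explains):

```java
// Hypothetical broader grant proposed in the comment above; follows the same
// pattern as the existing kibana_system entries. Not merged: the PR was
// ultimately superseded by elastic/kibana#173001.
RoleDescriptor.IndicesPrivileges.builder().indices("logs-apm*").privileges("read", "read_cross_cluster").build(),
RoleDescriptor.IndicesPrivileges.builder().indices("metrics-apm*").privileges("read", "read_cross_cluster").build(),
RoleDescriptor.IndicesPrivileges.builder().indices("traces-apm*").privileges("read", "read_cross_cluster").build(),
```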
IIUC, the issue here is related to using non-standard data stream names (elastic/kibana#170031 (comment)). If that's correct, then should we really be making changes to standard roles? I think customising one should probably require customising the other.
> IIUC, the issue here is related to using non-standard data stream names (elastic/kibana#170031 (comment)). If that's correct, then should we really be making changes to standard roles? I think customising one should probably require customising the other.
I second this. We like to be conservative with making any changes to the `kibana_system` role, especially concerning indices that are not explicitly internal or owned by Kibana, or if there is potential for granted access to overlap with user indices.
@axw given that the proposed PR adds permissions to read data streams of the standard structure that you mentioned, do you think it makes sense to merge it, or are there any other objections I haven't thought about?
The structure matches; my point was that the first and second parts (e.g. in the case of `metrics-apm-*`, I mean `metrics` and `apm`) are not intended to be changed. Changing them breaks things, because the privileges do not cater for that.

APM does not produce any data streams that match the patterns `metrics-apm-*` or `logs-apm-*`. There is a data stream that matches `traces-apm-*`, hence why that has a different pattern in `kibana_system`.
@axw in this case, what would be your suggestion on naming aliases if following this documentation? https://www.elastic.co/guide/en/kibana/current/apm-spaces.html

Changing it to something like `metrics-apm.production-default` would work with the current `kibana_system` privileges, but wouldn't that be somewhat coincidental, given that a typical APM data stream puts other predefined things in place of `production` (e.g. `metrics-apm.service_destination.1m-default`)?

I guess if following the data stream naming convention it would have to be something like `metrics-apm.environment-production` or `metrics-apm.alias-production`? Since `production` in this case is used more for access control (namespace).
> @axw in this case, what would be your suggestion on naming aliases if following this documentation? https://www.elastic.co/guide/en/kibana/current/apm-spaces.html
>
> Changing it to something like `metrics-apm.production-default` would work with the current `kibana_system` privileges, but wouldn't that be somewhat coincidental, given that a typical APM data stream puts other predefined things in place of `production` (e.g. `metrics-apm.service_destination.1m-default`)?
My recommendation would be to put the environment in the namespace, so e.g. you may have `metrics-apm.all-production`. Or `metrics-apm.alias-production` as you suggested.

Sorry, I see where you're coming from now -- that additional ".all" or ".alias" feels a bit arbitrary. OTOH it's probably a good idea to name the alias something that won't ever be likely to collide with a data stream name.

Anyway, per Dario above it sounds like the answer is that we have a bug in the UI, and if that were fixed it would make this change unnecessary. Then you should be able to call the alias whatever you like, assuming your user role has the necessary privileges.
Here is my attempt at following the trace of this index access, but keep in mind that it's my first look at Kibana code, so you might want to double-check it: elastic/kibana#170031 (comment)

I have opened elastic/kibana#173001. It should be able to replace this PR, and I'll close this as soon as the other one is approved.

@sqren this is great, thanks for looking into it!
…s metrics (#173001)

Closes: #170031
Replaces: elastic/elasticsearch#101467

**Problem**

We need to know if an agent config has been applied at the edge (by APM agents). This is determined by comparing the etag (hash) of the config with the etag applied at the edges. Previously the agent config itself contained this information (`config.applied_by_agent`), but when running with fleet this will instead be captured in `agent_config` metric documents. Currently the internal Kibana user retrieves the `agent_config` metric documents from the APM metric index (`metrics-apm-*` by default). This index is configurable by the end user, so it can be changed to something the internal user doesn't have access to. This is a problem.

**Solution**

This PR replaces the calls made by the internal client with calls made by the authenticated end user (via `APMEventClient`). This approach works for requests made from the browser/UI but doesn't work for background tasks made by fleet. To work around this, we only conditionally query the metric index if the `APMEventClient` is available. If `APMEventClient` is not available, `applied_by_agent` will be `undefined`.
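A minimal control-flow sketch of the workaround described above. This is illustrative only: the actual change is TypeScript in Kibana (elastic/kibana#173001), and the names below are hypothetical stand-ins.

```java
import java.util.Optional;

// Illustrative only: the real implementation is TypeScript in Kibana
// (elastic/kibana#173001). Names here are hypothetical.
class AgentConfigStatus {
    // An empty Optional models the `undefined` applied_by_agent value.
    static Optional<Boolean> appliedByAgent(ApmEventClient client, String etag) {
        if (client == null) {
            // Background/fleet task: no authenticated end user, so the
            // metric index is not queried and the flag stays undefined.
            return Optional.empty();
        }
        // Browser/UI request: query agent_config metric docs with the
        // end user's own privileges instead of the internal Kibana user.
        return Optional.of(client.agentConfigEtagSeen(etag));
    }

    // Hypothetical stand-in for Kibana's APMEventClient.
    interface ApmEventClient {
        boolean agentConfigEtagSeen(String etag);
    }
}
```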
Replaced by elastic/kibana#173001
(cherry picked from commit 58c7958)
Fixes elastic/kibana#170031