
Add logs-apm-* and metric-apm-* read permissions for kibana_system role #101467

Closed (wanted to merge 1 commit)

Conversation

@uvNikita

@uvNikita requested a review from a team as a code owner, October 27, 2023 14:39
@elasticsearchmachine added the labels needs:triage (Requires assignment of a team area label), v8.12.0, and external-contributor (Pull request authored by a developer outside the Elasticsearch team), Oct 27, 2023
@gbanasiak added the :Security/Authorization (Roles, Privileges, DLS/FLS, RBAC/ABAC) label, Oct 30, 2023
@elasticsearchmachine added the Team:Security (Meta label for security team) label and removed needs:triage, Oct 30, 2023
@elasticsearchmachine (Collaborator)

Pinging @elastic/es-security (Team:Security)

@jeramysoucy left a comment


Overall, I think read access to these indices should be fine. I am also curious about thoughts regarding @sqren's comment: is there a definitive reason why access to these indices is required to get the agent config?

```diff
@@ -127,7 +127,9 @@ static RoleDescriptor kibanaSystem(String name) {
     // APM telemetry queries APM indices in kibana task runner
     RoleDescriptor.IndicesPrivileges.builder().indices("apm-*").privileges("read", "read_cross_cluster").build(),
     RoleDescriptor.IndicesPrivileges.builder().indices("logs-apm.*").privileges("read", "read_cross_cluster").build(),
+    RoleDescriptor.IndicesPrivileges.builder().indices("logs-apm-*").privileges("read", "read_cross_cluster").build(),
```
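A quick, hedged sketch of the coverage difference between the dot and dash patterns discussed below; the glob helper and the index names are assumptions for illustration, not Elasticsearch's actual wildcard matcher:

```java
import java.util.List;
import java.util.regex.Pattern;

public class PatternCoverage {
    // Convert a simple index wildcard (only '*' is special) into a regex.
    static boolean matches(String pattern, String indexName) {
        String regex = Pattern.quote(pattern).replace("*", "\\E.*\\Q");
        return indexName.matches(regex);
    }

    public static void main(String[] args) {
        // Hypothetical index names: one standard (dot) and one user-configured (dash).
        List<String> names = List.of("logs-apm.error-default", "logs-apm-custom-default");
        for (String name : names) {
            System.out.printf("%-24s logs-apm.*=%b logs-apm-*=%b%n",
                name, matches("logs-apm.*", name), matches("logs-apm-*", name));
        }
    }
}
```

Neither pattern subsumes the other, which is why the PR adds the dash variant alongside the existing dot variant.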


Regarding logs-apm.* vs logs-apm-* and metrics-apm.* vs metrics-apm-*, do we need both? What is the difference between indices that have a . vs a -? And would logs-apm* and metrics-apm* work, or is this too broad?

@sorenlouv (Member), Nov 7, 2023


It looks like we at least need to provide permissions to metrics-apm-* (see elastic/kibana#170031 (comment)).

@elastic/apm-server Perhaps you can chime in here. Is it reasonable to grant access to metrics-apm-* in addition to metrics-apm.*?

@uvNikita (Author)


Since @sqren found out that these permissions are indeed required, and considering that these indices are configurable by the user, I would recommend giving the kibana_system role access to metrics-apm*, traces-apm*, and logs-apm*, as suggested earlier by @jeramysoucy.

I can modify the PR if that's something you would be fine with.

@axw (Member)


IIUC, the issue here is related to using non-standard data stream names (elastic/kibana#170031 (comment)). If that's correct, then should we really be making changes to standard roles? I think customising one should probably require customising the other.


> IIUC, the issue here is related to using non-standard data stream names (elastic/kibana#170031 (comment)). If that's correct, then should we really be making changes to standard roles? I think customising one should probably require customising the other.

I second this. We like to be conservative with any changes to the kibana_system role, especially for indices that are not explicitly internal to or owned by Kibana, or where the granted access could overlap with user indices.

@uvNikita (Author)


@axw given that the proposed PR adds permissions to read data streams of the standard structure you mentioned, do you think it makes sense to merge it, or are there other objections I haven't thought of?

@axw (Member)


The structure matches; my point was that the first and second parts (e.g. in the case of metrics-apm-*, metrics and apm) are not intended to be changed. Changing them breaks things, because the privileges do not cater for that.

APM does not produce any data streams that match the patterns metrics-apm-* or logs-apm-*. There is a data stream that matches traces-apm-*, which is why that one has a different pattern in kibana_system.
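For illustration, the standard data stream naming scheme `<type>-<dataset>-<namespace>` referred to here can be sketched as below; the parsing logic and example names are assumptions for this sketch, not the actual Elasticsearch implementation:

```java
public class DataStreamName {
    final String type;      // e.g. "metrics"
    final String dataset;   // e.g. "apm.service_destination.1m" (dots allowed; dashes are not standard)
    final String namespace; // e.g. "default"

    DataStreamName(String name) {
        int first = name.indexOf('-');
        int last = name.lastIndexOf('-');
        if (first < 0 || first == last) {
            throw new IllegalArgumentException("expected <type>-<dataset>-<namespace>: " + name);
        }
        this.type = name.substring(0, first);
        this.dataset = name.substring(first + 1, last);
        this.namespace = name.substring(last + 1);
    }

    public static void main(String[] args) {
        // Standard APM name: the dataset uses dots, so it matches metrics-apm.*
        DataStreamName standard = new DataStreamName("metrics-apm.service_destination.1m-default");
        // A dash after "apm" pushes the name outside metrics-apm.*, which is
        // why the existing kibana_system privileges do not cover it.
        DataStreamName dashed = new DataStreamName("metrics-apm-production");
        System.out.println(standard.dataset + " / " + standard.namespace);
        System.out.println(dashed.dataset + " / " + dashed.namespace);
    }
}
```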

@uvNikita (Author)


@axw in this case, what would be your suggestion on naming aliases if following this documentation? https://www.elastic.co/guide/en/kibana/current/apm-spaces.html

Changing it to something like metrics-apm.production-default would work with the current kibana_system privileges, but wouldn't that be misleading, given that a typical APM data stream puts other predefined components in that position (e.g. metrics-apm.service_destination.1m-default)?

I guess that, following the data stream naming convention, it would have to be something like metrics-apm.environment-production or metrics-apm.alias-production, since production in this case is used more for access control (the namespace).

Member


@uvNikita IMHO the fact that we use the kibana_system role for a user feature is the bug we need to fix. cc @smith

@axw (Member)


> @axw in this case, what would be your suggestion on naming aliases if following this documentation? https://www.elastic.co/guide/en/kibana/current/apm-spaces.html
>
> Changing it to something like metrics-apm.production-default would work with the current kibana_system privileges, but wouldn't that be misleading, given that a typical APM data stream puts other predefined components in that position (e.g. metrics-apm.service_destination.1m-default)?

My recommendation would be to put the environment in the namespace, so e.g. you may have metrics-apm.all-production. Or metrics-apm.alias-production as you suggested.

Sorry, I see where you're coming from now: that additional ".all" or ".alias" feels a bit arbitrary. OTOH it's probably a good idea to name the alias something that is unlikely to ever collide with a data stream name.

Anyway, per Dario above, it sounds like the answer is that we have a bug in the UI; if that were fixed, this change would be unnecessary. Then you should be able to call the alias whatever you like, assuming your user role has the necessary privileges.

@uvNikita (Author)

> I am also curious about thoughts regarding @sqren's comment. Is there a definitive reason why access to these indices is required to get the agent config?

Here is my attempt at following the trace of this index access, but keep in mind that it's my first look at the Kibana code, so you might want to double-check it: elastic/kibana#170031 (comment)

@sorenlouv (Member)

I have opened elastic/kibana#173001. It should replace this PR; I'll close this one as soon as the other is approved.

@uvNikita (Author)

@sqren this is great, thanks for looking into it!

sorenlouv added a commit to elastic/kibana that referenced this pull request Dec 11, 2023
…s metrics (#173001)

Closes: #170031
Replaces: elastic/elasticsearch#101467 


**Problem**
We need to know if an agent config has been applied at the edge (by APM
agents). This is determined by comparing the etag (hash) of the config,
with the etag applied at the edges.

Previously the agent config itself contained this information
(`config.applied_by_agent`) but when running with fleet this will
instead be captured in `agent_config` metric documents.

Currently the internal kibana user retrieves the `agent_config` metric
documents from the APM metric index (`metrics-apm-*` by default). This
index is configurable by the end-user so can be changed to something the
internal user doesn't have access to. This is a problem.

**Solution**

This PR replaces the calls made by the internal client with calls made
by the authenticated end user (via `APMEventClient`). This approach
works for requests made from the browser/UI but doesn't work for
background tasks made by fleet. To work around this, we only conditionally query the metric index when the `APMEventClient` is available. If it is not available, `applied_by_agent` will be `undefined`.
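The etag mechanism described above can be sketched as follows; the choice of SHA-256 and the class/method names are illustrative assumptions, not the actual APM implementation:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.HexFormat;

public class AgentConfigEtag {
    // Illustrative etag: hex-encoded SHA-256 of the serialized agent config.
    static String etag(String configJson) throws Exception {
        byte[] digest = MessageDigest.getInstance("SHA-256")
            .digest(configJson.getBytes(StandardCharsets.UTF_8));
        return HexFormat.of().formatHex(digest);
    }

    // The config counts as applied once the etag reported back by the agent
    // matches the etag of the config currently stored.
    static boolean appliedByAgent(String configJson, String reportedEtag) throws Exception {
        return reportedEtag != null && etag(configJson).equals(reportedEtag);
    }
}
```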
@sorenlouv (Member)

Replaced by elastic/kibana#173001

@sorenlouv sorenlouv closed this Dec 11, 2023
sorenlouv added a commit to sorenlouv/kibana that referenced this pull request Dec 11, 2023
…s metrics (elastic#173001)
(cherry picked from commit 58c7958)
@uvNikita uvNikita deleted the kibana_system/metrics-logs branch December 11, 2023 12:14
sorenlouv added a commit to kibanamachine/kibana that referenced this pull request Dec 11, 2023
…s metrics (elastic#173001)
(cherry picked from commit 58c7958)
sorenlouv added a commit to sorenlouv/kibana that referenced this pull request Dec 11, 2023
…s metrics (elastic#173001)
(cherry picked from commit 58c7958)
sorenlouv pushed a commit to sorenlouv/kibana that referenced this pull request May 1, 2024
…s metrics (elastic#173001)
(cherry picked from commit 58c7958)
Labels
external-contributor Pull request authored by a developer outside the Elasticsearch team :Security/Authorization Roles, Privileges, DLS/FLS, RBAC/ABAC Team:Security Meta label for security team v8.13.0
Development

Successfully merging this pull request may close these issues.

kibana_system role missing read permissions for metrics-apm-* and logs-apm-*
8 participants