
feat: dynamic matcher+template checks #181

Open

wants to merge 28 commits into main
Conversation

alex5517

Removes the hardcoded checks on instance and job.
Adds an option to provide matchers that are required to be present in all exprs;
if one or more of these matchers use variables, it also checks that the corresponding template variable exists.

Defaults to checking instance and job, so behavior stays close to what it was.

Also makes the matchers check fixable, implemented the same way as in #180.
I created this as two PRs in case this PR could not be merged.
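As a rough illustration of the required-matchers idea, a naive sketch (hypothetical names, and a plain substring check for illustration only; the linter itself works on parsed PromQL selectors):

```go
package main

import (
	"fmt"
	"strings"
)

// missingMatchers reports which required matchers do not appear verbatim in
// the expression. Naive substring check for illustration only — the real
// rule inspects parsed PromQL selectors.
func missingMatchers(expr string, required []string) []string {
	var missing []string
	for _, m := range required {
		if !strings.Contains(expr, m) {
			missing = append(missing, m)
		}
	}
	return missing
}

func main() {
	// The defaults described above: instance and job.
	required := []string{`instance=~"$instance"`, `job=~"$job"`}
	fmt.Println(missingMatchers(`up{job=~"$job"}`, required)) // reports the unmet instance matcher
}
```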

Collaborator

@rgeyer rgeyer left a comment


@alex5517 thank you for this! I've had something similar in mind for a long time, but apparently never created an issue, nor documented it anywhere.

I have a few comments below that I'd like to see implemented, or I'm happy to discuss with you.

These stem from my own experience in stubbing my toe by leaving existing users of the tool in a hard spot trying to integrate the new version of the linter into their workflow. This is used in enough places for CI that making breaking changes, particularly without versioning, can really disrupt downstream users.

docs/index.md Outdated
-c, --config string path to a configuration file
--fix automatically fix problems if possible
-h, --help help for lint
-m, --matcher stringArray   matcher required to be present in all selectors, e.g. 'instance=~"$instance"' or 'cluster="$cluster"', can be specified multiple times (default ["instance=~\"$instance\"","job=~\"$job\""])
Collaborator


I'd like to see this configured in the .lint config file rather than as a CLI flag. When linting several dashboards in a repo, or when embedding into something like mixtool, that makes this a bit more flexible and allows it to differ per dashboard project.

Something like:

---
settings:
  template-variable-matchers-rule:
    matchers:
    - instance=~""
    - instance=""
    - job=~""

docs/index.md (resolved)
docs/index.md (outdated, resolved)
@alex5517
Author

@rgeyer,

I made a quick commit with what I had, since I ran out of time...
But I wanted to get it out so you could see it and let me know whether the changes align with your thoughts.

@rgeyer
Collaborator

rgeyer commented Jul 11, 2024

@rgeyer,

I made a quick commit with what I had, since I ran out of time... But I wanted to get it out so you could see it and let me know whether the changes align with your thoughts.

Looking good so far!

I was suggesting that the new rule be enabled only if config exists for it, but it looks like you're beginning to implement an experimental flag, which is great. I wrote up an issue for this yesterday in #185. You certainly don't need to implement the experimental flag, nor the features described there, but it may be a good reference.

You said this was a quick commit. What do you think you have left to do? I didn't do a deep review because you indicated this was a work in progress.

Also, a nit, but a lot of files had their modes changed from 0644 to 0755, which is making the PR look larger than it is.

@alex5517 alex5517 requested a review from rgeyer July 12, 2024 10:24
@alex5517
Author

@rgeyer,

I made a few changes, and added a comment to #185
Let me know what you think.

The PR is also not ready to be merged... I noticed an issue that needs fixing: since I introduced a fixable error on matchers, I can see that I am missing some code to reverse the expanded Grafana variables...

Collaborator

@rgeyer rgeyer left a comment


Looking good! I know the PR isn't quite ready to merge, but I went over it again as-is and spotted a few things.

Thank you for being flexible and implementing the feedback!

lint/rule_target_required_matchers.go (outdated, resolved)
lint/rule_target_required_matchers.go (resolved)
lint/rule_template_required_variables.go (outdated, resolved)
@alex5517
Author

alex5517 commented Sep 2, 2024

@rgeyer

Sorry for the inactivity...

I had a look at the comments you left and implemented them.
I am still missing the reversal of Grafana variables, so that $__rate_interval does not get changed to 102d15h53m10s787ms.

@alex5517
Author

alex5517 commented Sep 2, 2024

Hi @rgeyer,

I have been trying to understand how the variable expansion should work, but I have not been able to fully get it, and currently I am not sure whether all of the expansions are working as expected.

To get some debug info I tried printing the expr at different stages: https://github.com/grafana/dashboard-linter/blob/main/lint/variables_test.go

The input expr is the string that is provided to func expandVariables(expr string, ...
The expanded expr is the string returned from func expandVariables...
The parsed expanded expr is the expanded expr parsed with parser.ParseExpr()

It might be that I am just missing something?

test desc: Should not replace variables in quoted strings
input expr: up{job=~"$job"}
expanded expr: up{job=~"$job"}
parsed expanded expr: up{job=~"$job"}

test desc: Should replace variables in metric name
input expr: up$var{job=~"$job"}
expanded expr: upvar{job=~"$job"}
parsed expanded expr: upvar{job=~"$job"}

test desc: Should replace global rate/range variables
input expr: rate(metric{}[$__rate_interval])
expanded expr: rate(metric{}[8869990787ms])
parsed expanded expr: rate(metric[102d15h53m10s787ms])

test desc: Should support ${...} syntax
input expr: rate(metric{}[${__rate_interval}])
expanded expr: rate(metric{}[8869990787ms])
parsed expanded expr: rate(metric[102d15h53m10s787ms])

test desc: Should support [[...]] syntax
input expr: rate(metric{}[[[__rate_interval]]])
expanded expr: rate(metric{}[8869990787ms])
parsed expanded expr: rate(metric[102d15h53m10s787ms])

test desc: Should support ${__user.id}
input expr: sum(http_requests_total{method="GET"} @ ${__user.id})
expanded expr: sum(http_requests_total{method="GET"} @ 42)
parsed expanded expr: sum(http_requests_total{method="GET"} @ 42.000)

test desc: Should support $__from/$__to
input expr: sum(http_requests_total{method="GET"} @ $__from)
expanded expr: sum(http_requests_total{method="GET"} @ 1594671549254)
parsed expanded expr: sum(http_requests_total{method="GET"} @ 1594671549254.000)

test desc: Should support $__from/$__to with formatting option (unix seconds)
input expr: sum(http_requests_total{method="GET"} @ ${__from:date:seconds}000)
expanded expr: sum(http_requests_total{method="GET"} @ 1594671549000)
parsed expanded expr: sum(http_requests_total{method="GET"} @ 1594671549000.000)

test desc: Should support $__from/$__to with formatting option (iso default)
input expr: sum(http_requests_total{method="GET"} @ ${__from:date})
expanded expr: sum(http_requests_total{method="GET"} @ 2020-07-13T20:19:09Z)
parsed expanded expr: sum(%!s(<nil>))

test desc: Should support $__from/$__to with formatting option (iso)
input expr: sum(http_requests_total{method="GET"} @ ${__from:date:iso})
expanded expr: sum(http_requests_total{method="GET"} @ 2020-07-13T20:19:09Z)
parsed expanded expr: sum(%!s(<nil>))

test desc: Should support ${variable:csv} syntax
input expr: max by(${variable:csv}) (rate(cpu{}[$__rate_interval]))
expanded expr: max by(variable,variable,variable) (rate(cpu{}[8869990787ms]))
parsed expanded expr: max by (variable, variable, variable) (rate(cpu[102d15h53m10s787ms]))

test desc: Should support ${variable:doublequote} syntax
input expr: max by(${variable:doublequote}) (rate(cpu{}[$__rate_interval]))
expanded expr: max by("variable","variable","variable") (rate(cpu{}[8869990787ms]))
parsed expanded expr: max by (, , ) (rate(cpu[102d15h53m10s787ms]))

test desc: Should support ${variable:glob} syntax
input expr: max by(${variable:glob}) (rate(cpu{}[$__rate_interval]))
expanded expr: max by({variable,variable,variable}) (rate(cpu{}[8869990787ms]))
parsed expanded expr: max by (, variable, variable) (rate(cpu[102d15h53m10s787ms]))

test desc: Should support ${variable:json} syntax
input expr: max by(${variable:json}) (rate(cpu{}[$__rate_interval]))
expanded expr: max by(["variable","variable","variable"]) (rate(cpu{}[8869990787ms]))
parsed expanded expr: nil

test desc: Should support ${variable:lucene} syntax
input expr: max by(${variable:lucene}) (rate(cpu{}[$__rate_interval]))
expanded expr: max by(("variable" OR "variable" OR "variable")) (rate(cpu{}[8869990787ms]))
parsed expanded expr: max(%!s(<nil>))

test desc: Should support ${variable:percentencode} syntax
input expr: max by(${variable:percentencode}) (rate(cpu{}[$__rate_interval]))
expanded expr: max by(variable%2Cvariable%2Cvariable) (rate(cpu{}[8869990787ms]))
parsed expanded expr: nil

test desc: Should support ${variable:pipe} syntax
input expr: max by(${variable:pipe}) (rate(cpu{}[$__rate_interval]))
expanded expr: max by(variable|variable|variable) (rate(cpu{}[8869990787ms]))
parsed expanded expr: nil

test desc: Should support ${variable:raw} syntax
input expr: max by(${variable:raw}) (rate(cpu{}[$__rate_interval]))
expanded expr: max by(variable,variable,variable) (rate(cpu{}[8869990787ms]))
parsed expanded expr: max by (variable, variable, variable) (rate(cpu[102d15h53m10s787ms]))

test desc: Should support ${variable:regex} syntax
input expr: max by(${variable:regex}) (rate(cpu{}[$__rate_interval]))
expanded expr: max by(variable|variable|variable) (rate(cpu{}[8869990787ms]))
parsed expanded expr: nil

test desc: Should support ${variable:singlequote} syntax
input expr: max by(${variable:singlequote}) (rate(cpu{}[$__rate_interval]))
expanded expr: max by('variable','variable','variable') (rate(cpu{}[8869990787ms]))
parsed expanded expr: max by (, , ) (rate(cpu[102d15h53m10s787ms]))

test desc: Should support ${variable:sqlstring} syntax
input expr: max by(${variable:sqlstring}) (rate(cpu{}[$__rate_interval]))
expanded expr: max by('variable','variable','variable') (rate(cpu{}[8869990787ms]))
parsed expanded expr: max by (, , ) (rate(cpu[102d15h53m10s787ms]))

test desc: Should support ${variable:text} syntax
input expr: max by(${variable:text}) (rate(cpu{}[$__rate_interval]))
expanded expr: max by(variable + variable + variable) (rate(cpu{}[8869990787ms]))
parsed expanded expr: max by (variable) (rate(cpu[102d15h53m10s787ms]))

test desc: Should support ${variable:queryparam} syntax
input expr: max by(${variable:queryparam}) (rate(cpu{}[$__rate_interval]))
expanded expr: max by(var-variable=variable&var-variable=variable&var-variable=variable) (rate(cpu{}[8869990787ms]))
parsed expanded expr: nil

test desc:
input expr: value
expanded expr: value
parsed expanded expr: value

test desc:
input expr: 4h
expanded expr: 4h
parsed expanded expr: nil

test desc:
input expr: 5m
expanded expr: 5m
parsed expanded expr: nil

test desc: Should replace variables present in the templating
input expr: max by($var) (rate(cpu{}[$interval:$resolution]))
expanded expr: max by(value) (rate(cpu{}[4h:5m]))
parsed expanded expr: max by (value) (rate(cpu[4h:5m]))

test desc:
input expr: $__auto_interval_interval
expanded expr: 10s
parsed expanded expr: nil

test desc: Should recursively replace variables
input expr: sum (rate(cpu{}[$interval]))
expanded expr: sum (rate(cpu{}[10s]))
parsed expanded expr: sum(rate(cpu[10s]))

test desc:
input expr: $__auto_interval
expanded expr: 10s
parsed expanded expr: nil

test desc: Should support plain $__auto_interval, generated by grafonnet-lib (https://github.com/grafana/grafonnet-lib/blob/master/grafonnet/template.libsonnet#L100)
input expr: sum (rate(cpu{}[$interval]))
expanded expr: sum (rate(cpu{}[10s]))
parsed expanded expr: sum(rate(cpu[10s]))

test desc:
input expr: $interval
expanded expr: interval
parsed expanded expr: interval

test desc: Should recursively replace variables, but not run into an infinite loop
input expr: sum (rate(cpu{}[$interval]))
expanded expr: sum (rate(cpu{}[interval]))
parsed expanded expr: nil

@alex5517 alex5517 requested a review from rgeyer September 4, 2024 13:56
@rgeyer
Collaborator

rgeyer commented Sep 4, 2024

@rgeyer

Sorry for the inactivity...

I had a look at the comments you left and implemented them. I am still missing the reversal of Grafana variables, so that $__rate_interval does not get changed to 102d15h53m10s787ms.

Not a problem! Welcome back from holiday. :)

I saw your commits and questions yesterday but didn't have a chance to review. Hopefully I will get through them today.

Thank you again for the contribution!

Collaborator

@rgeyer rgeyer left a comment


Looks good! See my responses to your question about the tests and variable expansion above.

@rgeyer
Collaborator

rgeyer commented Sep 4, 2024

Hi @rgeyer,

I have been trying to understand how the variable expansion should work, but I have not been able to fully get it, and currently I am not sure whether all of the expansions are working as expected.

To get some debug info I tried printing the expr at different stages: https://github.com/grafana/dashboard-linter/blob/main/lint/variables_test.go

The input expr is the string provided to func expandVariables(expr string, ... The expanded expr is the string returned from func expandVariables... The parsed expanded expr is the expanded expr parsed with parser.ParseExpr()

It might be that I am just missing something?

This is, admittedly, confusing.

The goal of variable expansion is to replace variables in a PromQL query that the PromQL parser cannot understand. In short, anywhere a variable appears in a query outside of double quotes, some value has to be substituted for the PromQL parser to grok it.

This is almost entirely a concession for the global Grafana variables, which are used to define time ranges and other things that would not be considered a plain string value inside double quotes.

The range variables are intentionally obscure, large time intervals, because some tests need to check that the variable was substituted.

__rate_interval is a great example: there is a rule which checks that __rate_interval was used for certain queries. The PromQL parser would have no idea what to do with rate(somemetric{}[$__rate_interval]) because $__rate_interval is not a valid time range. In Grafana, the PromQL is pre-parsed and the appropriate value (something like 5m) is put in place so that the Prometheus backend can actually execute the query. We use a totally obscure value here so that it is (statistically) impossible a dashboard would ever intentionally use that static value. The comments are lost to time, but I actually rolled dice n times and recorded the numbers to come up with the "random" numbers used here. See here, and here for reference.

@alex5517
Author

alex5517 commented Sep 4, 2024

Hi @rgeyer,

Thanks for the explanation, I also figured it was to get the PromQL parser to behave...

I think I will take another crack at it...

I want to get nil issues fixed, such as this one, where the PromQL parser is not able to parse the expr properly.

test desc: Should support $__from/$__to with formatting option (iso default)
input expr: sum(http_requests_total{method="GET"} @ ${__from:date})
expanded expr: sum(http_requests_total{method="GET"} @ 2020-07-13T20:19:09Z)
parsed expanded expr: sum(%!s(<nil>))

These are some thoughts I have about changes that could be made to the variable expansion.

  1. To make it easier to revert back to the Grafana global variables, I think I would want to change the magic numbers into the format they take after being parsed by the promql parser, i.e. for __rate_interval the magic number is 8869990787ms, and after it has been parsed by the promql parser it is 102d15h53m10s787ms.
  2. Don't quote me on this... but I was thinking it would be easier to just do a regexp string replacement on the whole expr string given to func expandVariables():
rateIntervalRegexp := regexp.MustCompile(`(\[\[|\$|\${)(__rate_interval)(\]\]|}|:.+}|)`)
expr = rateIntervalRegexp.ReplaceAllString(expr, rateIntervalMagicNumber)

Then reversing it would be replacing 102d15h53m10s787ms with ${__rate_interval}.

But again, these are just some unfinished thoughts that I wanted to share...
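A runnable version of that sketch might look like the following. The regex and names come from the comment above; the choice of the parser-normalized magic number and the normalization of every spelling back to the ${...} form on reversal are assumptions of this illustration, not the linter's actual code:

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// rateIntervalRegexp matches the $__rate_interval, ${__rate_interval},
// ${__rate_interval:fmt}, and [[__rate_interval]] spellings.
var rateIntervalRegexp = regexp.MustCompile(`(\[\[|\$|\${)(__rate_interval)(\]\]|}|:.+}|)`)

// The magic number in its parser-normalized form, as suggested in point 1.
const rateIntervalMagicNumber = "102d15h53m10s787ms"

func expandRateInterval(expr string) string {
	return rateIntervalRegexp.ReplaceAllString(expr, rateIntervalMagicNumber)
}

// revertRateInterval undoes the substitution; note it collapses every
// original spelling back to the ${__rate_interval} form.
func revertRateInterval(expr string) string {
	return strings.ReplaceAll(expr, rateIntervalMagicNumber, "${__rate_interval}")
}

func main() {
	expanded := expandRateInterval(`rate(metric{}[$__rate_interval])`)
	fmt.Println(expanded)                     // rate(metric{}[102d15h53m10s787ms])
	fmt.Println(revertRateInterval(expanded)) // rate(metric{}[${__rate_interval}])
}
```

One caveat visible even in this sketch: reversal is lossy about which spelling was originally used, which matches the "not 100% done" status discussed below.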

@alex5517
Author

alex5517 commented Sep 9, 2024

Hi @rgeyer,

I have been working a bit on the issue with reversing the variable expansion; it is not 100% done...

I have made some assumptions...

  1. That it was not required to actually expand variables with a format, i.e. ${variable:csv}, into CSV format as was done before.
  2. That __auto_interval did not need to be changed to 10s?

And I am also aware of a few shortcomings that are present in the code at the moment, but they should be fixable :)

  1. Support for "recursively replacing variables" such as https://github.com/grafana/dashboard-linter/blob/main/lint/variables_test.go#L175-L180
  2. I made it so that __range_s and a few others are parsed with model.ParseDuration, which makes them look like 211d12h44m22s63ms; this means they cannot be used for: sum(rate(foo[$__rate_interval])) * $__range_s (I have an idea for fixing this)
  3. Missing support for queries like: label_values(up{job=~\"$job\"}, namespace) (this did not work before, but now it is checked.)

I hope it is okay to request review/feedback despite the fact that it is still not finished...
287dffd

@alex5517 alex5517 requested a review from rgeyer September 9, 2024 08:33
@rgeyer
Collaborator

rgeyer commented Sep 9, 2024

I have made some assumptions...

  1. That it was not required to actually expand variables with a format, i.e. ${variable:csv}, into CSV format as was done before.

This is fine, we're not doing anything with the special formats. If we ultimately do, we can make the necessary changes then.

  2. That __auto_interval did not need to be changed to 10s?

Not specifically 10s, no, but it does need to be a time value so that the PromQL parser can process it.

I hope it is okay to request review/feedback despite the fact that it is still not finished... 287dffd

Not a problem at all, thank you for the continued iteration and clear communication.

@alex5517
Author

Still missing a few places where tests are now failing because PromQL parser errors are now being handled.

@alex5517
Author

alex5517 commented Sep 11, 2024

@rgeyer,

All tests are now passing...

But I do see an issue with the merging of oldData and newData when using autofix: https://github.com/grafana/dashboard-linter/blob/main/main.go#L98-L115

Instead of merging the newData, it appends it, so the result contains duplicate expressions. It must be that the conflate package is not able to merge them; based on my debugging, the data provided to it looks correct.

@rgeyer
Collaborator

rgeyer commented Nov 5, 2024

Instead of merging the newData, it appends it, so the result contains duplicate expressions. It must be that the conflate package is not able to merge them; based on my debugging, the data provided to it looks correct.

I believe that this is resolved by zeitlinger/conflate#1.

Do you have test data to try this with? I suppose I could knock together a pretty basic dashboard to test with.

I want to try to get this merged before main gets too far ahead.

Signed-off-by: Alexander Soelberg Heidarsson <[email protected]>
Signed-off-by: Alexander Soelberg Heidarsson <[email protected]>
@alex5517
Author

alex5517 commented Nov 7, 2024

@rgeyer,

I tried testing using the newest conflate: github.com/zeitlinger/conflate v0.0.0-20240927101413-c06be92f798f
It still does not merge correctly.
I tried testing with:
dashboard.json

{
  "editable": false,
  "links": [
     {
        "asDropdown": true,
        "includeVars": true,
        "keepTime": true,
        "tags": [
           "kubernetes-mixin"
        ],
        "targetBlank": false,
        "title": "Kubernetes",
        "type": "dashboards"
     }
  ],
  "panels": [
     {
        "datasource": {
           "type": "datasource",
           "uid": "-- Mixed --"
        },
        "fieldConfig": {
           "defaults": {
              "custom": {
                 "fillOpacity": 10,
                 "showPoints": "never",
                 "spanNulls": true
              }
           }
        },
        "gridPos": {
           "h": 7,
           "w": 24,
           "x": 0,
           "y": 0
        },
        "id": 1,
        "interval": "1m",
        "options": {
           "legend": {
              "asTable": true,
              "calcs": [
                 "lastNotNull"
              ],
              "displayMode": "table",
              "placement": "right",
              "showLegend": true
           },
           "tooltip": {
              "mode": "single"
           }
        },
        "pluginVersion": "v11.0.0",
        "targets": [
           {
              "datasource": {
                 "type": "prometheus",
                 "uid": "${datasource}"
              },
              "expr": "sum(\n    node_namespace_pod_container:container_cpu_usage_seconds_total:sum_irate{namespace=\"$namespace\"}\n  * on(namespace,pod)\n    group_left(workload, workload_type) namespace_workload_pod:kube_pod_owner:relabel{namespace=\"$namespace\", workload=\"$workload\", workload_type=~\"$type\"}\n) by (pod)\n",
              "legendFormat": "__auto"
           }
        ],
        "title": "CPU Usage",
        "type": "timeseries"
     }
  ],
  "refresh": "5m",
  "schemaVersion": 39,
  "tags": [
     "kubernetes-mixin"
  ],
  "templating": {
     "list": [
        {
           "current": {
              "selected": true,
              "text": "default",
              "value": "default"
           },
           "hide": 0,
           "label": "Data source",
           "name": "datasource",
           "query": "prometheus",
           "regex": "",
           "type": "datasource"
        },
        {
           "datasource": {
              "type": "prometheus",
              "uid": "${datasource}"
           },
           "hide": 0,
           "label": "cluster",
           "name": "cluster",
           "query": "label_values(kube_namespace_status_phase{job=\"kube-state-metrics\"}, cluster)",
           "refresh": 2,
           "sort": 1,
           "type": "query"
        },
        {
           "datasource": {
              "type": "prometheus",
              "uid": "${datasource}"
           },
           "hide": 0,
           "label": "namespace",
           "name": "namespace",
           "query": "label_values(kube_namespace_status_phase{job=\"kube-state-metrics\", cluster=\"$cluster\"}, namespace)",
           "refresh": 2,
           "sort": 1,
           "type": "query"
        },
        {
           "datasource": {
              "type": "prometheus",
              "uid": "${datasource}"
           },
           "hide": 0,
           "includeAll": true,
           "label": "workload_type",
           "name": "type",
           "query": "label_values(namespace_workload_pod:kube_pod_owner:relabel{namespace=\"$namespace\"}, workload_type)",
           "refresh": 2,
           "sort": 1,
           "type": "query"
        },
        {
           "datasource": {
              "type": "prometheus",
              "uid": "${datasource}"
           },
           "hide": 0,
           "label": "workload",
           "name": "workload",
           "query": "label_values(namespace_workload_pod:kube_pod_owner:relabel{namespace=\"$namespace\", workload_type=~\"$type\"}, workload)",
           "refresh": 2,
           "sort": 1,
           "type": "query"
        }
     ]
  },
  "time": {
     "from": "now-1h",
     "to": "now"
  },
  "timezone": "browser",
  "title": "Compute Resources / Workload",
  "uid": "a164a7f0339f99e89cea5cb47e9be617"
}

.lint


exclusions:
  template-job-rule:
  template-instance-rule:
  target-job-rule:
  target-instance-rule:
settings:
  target-required-matchers-rule:
    matchers:
      - cluster="$cluster"

go run main.go lint --experimental --fix dashboard.json

@alex5517
Author

alex5517 commented Nov 7, 2024

@rgeyer,

Also, I made a few commits; I only saw the ref to #208 afterwards...

But commit 09e53e8 fixes an issue I found where, if a template variable has matchers, it would not expand correctly. Example:

...

{
"current": {
   "selected": true,
   "text": "All",
   "value": "cluster=~\"$cluster\"\\, job=~\"($namespace)/((cortex|mimir|mimir-backend.*|mimir-read.*|querier.*|query-frontend.*|query-scheduler.*|ruler-querier.*|ruler-query-frontend.*|ruler-query-scheduler.*))\""
},
"hide": 0,
"includeAll": false,
"label": "Read path",
"multi": false,
"name": "read_path_matcher",
"options": [
   {
      "selected": true,
      "text": "All",
      "value": "cluster=~\"$cluster\"\\, job=~\"($namespace)/((cortex|mimir|mimir-backend.*|mimir-read.*|querier.*|query-frontend.*|query-scheduler.*|ruler-querier.*|ruler-query-frontend.*|ruler-query-scheduler.*))\""
   },
   {
      "selected": false,
      "text": "Main",
      "value": "cluster=~\"$cluster\"\\, job=~\"($namespace)/((cortex|mimir|mimir-backend.*|mimir-read.*|querier.*|query-frontend.*|query-scheduler.*))\""
   },
   {
      "selected": false,
      "text": "Remote ruler",
      "value": "cluster=~\"$cluster\"\\, job=~\"($namespace)/((ruler-querier.*|ruler-query-frontend.*|ruler-query-scheduler.*))\""
   }
]
...

As for c2e6171: it is a bit obscure that the user might have specified that cluster="$cluster" is required, and then it is changed to cluster=~"$cluster"...
But it is nice to have if you are linting a lot of dashboards where some use multi-value variables and some don't...
Let me know if you want to take it out, or if we should implement it some other way that is more obvious to the user...

Last but not least, I will try to find time to have a look at #208 during this week.
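The relaxation from c2e6171 could be sketched roughly as follows (the function name and scope are hypothetical illustrations, not the commit's actual code; it deliberately ignores != and !~ matchers):

```go
package main

import (
	"fmt"
	"strings"
)

// toRegexMatcher rewrites an equality matcher like cluster="$cluster" into
// the regex form cluster=~"$cluster", leaving matchers that already use =~
// untouched. Naive sketch: negative matchers (!=, !~) are not handled.
func toRegexMatcher(m string) string {
	if i := strings.Index(m, "="); i >= 0 && (i+1 >= len(m) || m[i+1] != '~') {
		return m[:i] + "=~" + m[i+1:]
	}
	return m
}

func main() {
	fmt.Println(toRegexMatcher(`cluster="$cluster"`))  // cluster=~"$cluster"
	fmt.Println(toRegexMatcher(`cluster=~"$cluster"`)) // cluster=~"$cluster"
}
```

This illustrates the trade-off discussed above: convenient for mixed multi-value dashboards, but surprising if the user asked for a strict equality matcher.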

@alex5517 alex5517 requested a review from rgeyer November 7, 2024 16:27
@rgeyer
Collaborator

rgeyer commented Nov 12, 2024

Hey @alex5517. First off, thank you for taking another look, and providing some details on the conflate issue. I wasn't able to reproduce it, so it seemed "fine" to me.

I didn't mean to create confusion by creating another PR. I had read your previous message as "things look good; if conflate is resolved, this is ready to merge", so I went about resolving the conflicts with main.

Seems there's more to work through, so I won't rush it, and will continue to collaborate with you.

@alex5517
Author

Hi @rgeyer,

Sounds good that you weren't able to reproduce it; perhaps it is something specific to my environment...

And the other PR is fine; the commits made after it were just not something I was aware of back then :)

Depending on your policies, perhaps it would be better to have #208 merged, since the things I have seen failing concern rules that are experimental anyway, and this PR might just get harder to merge over time...

A separate issue/PR could then be made for the other "bugs".

If you are fine with merging #208, I hope you will consider including the 2 new commits I made here: 09e53e8
c2e6171
