Enabled service to have multiple ports, and the serviceMonitor to monitor multiple endpoints, added relevant configuration examples as comments to values.yaml and updated the README file for the new chart fields #988

Conversation

stable/aws-for-fluent-bit/README.md
| `serviceMonitor.interval` | Set how frequently Prometheus should scrape | `30s` |
| `serviceMonitor.telemetryPath` | Set path to scrape metrics from | `/api/v1/metrics/prometheus` |
| `serviceMonitor.labels` | Set labels for the ServiceMonitor, use this to define your scrape label for Prometheus Operator | `[]` |
| `serviceMonitor.timeout` | Set timeout for scrape | `10s` |
| `serviceMonitor.relabelings` | Set relabel_configs as per [details](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config) | `[]` |
| `serviceMonitor.targetLabels` | Set of labels to transfer from the Kubernetes Service onto the target | `[]` |
| `serviceMonitor.metricRelabelings` | MetricRelabelConfigs to apply to samples before ingestion | `[]` |
| `serviceMonitor.extraEndpoints` | Extra endpoints on the fluen-bit service for the serviceMonitor to monitor | `[]` |
> the fluen-bit service

Typo
Good spot - fixing that now
LGTM. Thanks!
@tmakamure need to update the chart version to get CI to pass.
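(For context: the version bump lives in the chart's `Chart.yaml`. A minimal sketch of the change, where the version number is purely illustrative:)

```yaml
# stable/aws-for-fluent-bit/Chart.yaml (excerpt; version number is illustrative)
name: aws-for-fluent-bit
version: 0.1.24  # bump on every chart change so chart CI passes
```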
```yaml
{{- end }}
{{- with .Values.serviceMonitor.extraEndpoints }}
{{- toYaml . | nindent 2 }}
```
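(A sketch of what this `with`/`toYaml` block is meant to render, assuming a single `extraEndpoints` entry as in the values examples later in this thread; the first endpoint's fields stand in for whatever the template builds from the other `serviceMonitor.*` keys.)

```yaml
endpoints:
  - port: <service port name>  # placeholder: endpoint built from serviceMonitor.* values
    path: /api/v1/metrics/prometheus
    interval: 30s
  # each extraEndpoints entry is appended verbatim by toYaml:
  - port: metrics
    path: /metrics
    interval: 30s
    scrapeTimeout: 10s
    scheme: http
```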
Is this template correct?
I tried generating it, and it didn't work:
This was in `values.yaml`:
```yaml
metricRelabelings: []
extraEndpoints: []
  - port: metrics
    path: /metrics
    interval: 30s
    scrapeTimeout: 10s
    scheme: http
```
And this was the error output from template dry run:
```
Error: failed to parse stable/aws-for-fluent-bit/values.yaml: error converting YAML to JSON: yaml: line 387: did not find expected key
helm.go:84: [debug] error converting YAML to JSON: yaml: line 387: did not find expected key
failed to parse stable/aws-for-fluent-bit/values.yaml
helm.sh/helm/v3/pkg/cli/values.(*Options).MergeValues
	helm.sh/helm/v3/pkg/cli/values/options.go:55
main.runInstall
	helm.sh/helm/v3/cmd/helm/install.go:212
main.newTemplateCmd.func2
	helm.sh/helm/v3/cmd/helm/template.go:82
github.com/spf13/cobra.(*Command).execute
	github.com/spf13/cobra@…/command.go:872
github.com/spf13/cobra.(*Command).ExecuteC
	github.com/spf13/cobra@…/command.go:990
github.com/spf13/cobra.(*Command).Execute
	github.com/spf13/cobra@…/command.go:918
main.main
	helm.sh/helm/v3/cmd/helm/helm.go:83
runtime.main
	runtime/proc.go:250
runtime.goexit
	runtime/asm_amd64.s:1594
```
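(For reference, this is the parse failure YAML produces when a flow-style empty list `[]` is left in place while block-style entries are added beneath it, which is what the amended instructions below address. A minimal contrast:)

```yaml
# Invalid: the [] placeholder conflicts with the block-style entries below it
extraEndpoints: []
  - port: metrics
    path: /metrics

# Valid: drop the placeholder when populating the list
extraEndpoints:
  - port: metrics
    path: /metrics
```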
@PettitWesley I have amended the test instructions, see below: the template is correct.
@PettitWesley I have also updated the chart version
```
@@ -11,6 +11,14 @@ spec:
    port: {{ .Values.serviceMonitor.service.port }}
    protocol: TCP
    targetPort: {{ .Values.serviceMonitor.service.targetPort }}
  {{- if .Values.serviceMonitor.service.extraPorts }}
```
I think the template for the extraEndpoints below needs to be similar to this, explicitly modelling each field.
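(A sketch of what "explicitly model each field" could look like for `extraEndpoints`, using the field names from the values examples below; this illustrates the suggestion, not the chart's actual template.)

```yaml
{{- range .Values.serviceMonitor.extraEndpoints }}
  - port: {{ .port }}
    path: {{ .path }}
    interval: {{ .interval }}
    scrapeTimeout: {{ .scrapeTimeout }}
    scheme: {{ .scheme }}
{{- end }}
```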
@PettitWesley the template is correct, I double-checked it now. I have made the amendments to the instructions on how to do the test run (should have included that in the PR message too):
Changes in Values

- Before testing the chart, the `values.yaml` file needs to be updated as below.
  NB: note that we remove the `[]` on lines 371 and 387 and set the service monitor to enabled:
```yaml
serviceMonitor:
  service:
    type: ClusterIP
    port: 2020
    targetPort: 2020
    extraPorts: # removing the empty array placeholder [] because we are now populating the array
      - port: 2021
        targetPort: 2021
        protocol: TCP
        name: metrics
  ## When set true then use a ServiceMonitor to configure scraping
  enabled: true # enabling the serviceMonitor
  interval: 30s
  telemetryPath: /api/v1/metrics/prometheus
  labels:
    # app: example-application
    # teamname: example
  timeout: 10s
  relabelings: []
  targetLabels: []
  metricRelabelings: []
  extraEndpoints: # removing the empty array placeholder [] because now we are populating the array
    - port: metrics
      path: /metrics
      interval: 30s
      scrapeTimeout: 10s
      scheme: http
```
- In addition, the `serviceMonitor` object is a CRD created by the Prometheus community. If you are doing `helm template` locally using helm 3.x, please see below:
Templating locally

Helm does not automatically recognise CRDs; to be able to output the template including CRDs, we need to:
- Download the `serviceMonitor` CRD from the prometheus-operator repo: https://github.com/prometheus-operator/prometheus-operator/blob/main/example/prometheus-operator-crd-full/monitoring.coreos.com_servicemonitors.yaml
- Add this YAML file to our chart directory, either in the root together with `Chart.yaml` and `values.yaml`, or under the `templates` directory. This will help helm understand the CRD resource.
- Next, in the `serviceMonitor.yaml` template on line two, there is a condition: `{{- if and ( .Capabilities.APIVersions.Has "monitoring.coreos.com/v1" ) ( .Values.serviceMonitor.enabled ) }}`, which checks if the cluster API has the "monitoring.coreos.com/v1" capability. For a dry run, since we are not installing this on an actual cluster with the relevant API version installed, we may temporarily disable that check by changing the `and` to an `or`. (If you are test-installing on an actual cluster, please see the next heading. A CLI alternative that avoids editing the template is sketched after this list.)
- Once all this is set up, you can then run `helm template .` and inspect the output for `kind: Service` and `kind: ServiceMonitor`.
- Revert the condition in the `serviceMonitor.yaml` template back to `and`, AND delete the serviceMonitor CRD file from the chart directory.
- In `values.yaml`, comment out lines [372-375] and [388-392], place back the empty array placeholders `[]` on lines 371 and 387, and set `enabled: false` for the serviceMonitor such that the config looks like below:
```yaml
serviceMonitor:
  service:
    type: ClusterIP
    port: 2020
    targetPort: 2020
    extraPorts: []
    # - port: 2021
    #   targetPort: 2021
    #   protocol: TCP
    #   name: metrics
  ## When set true then use a ServiceMonitor to configure scraping
  enabled: false
  interval: 30s
  telemetryPath: /api/v1/metrics/prometheus
  labels:
    # app: example-application
    # teamname: example
  timeout: 10s
  relabelings: []
  targetLabels: []
  metricRelabelings: []
  extraEndpoints: []
  # - port: metrics
  #   path: /metrics
  #   interval: 30s
  #   scrapeTimeout: 10s
  #   scheme: http
```
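(A CLI alternative to the `and`/`or` edit above: helm 3's `--api-versions` flag injects extra capabilities for a dry run, so neither the template nor values.yaml needs temporary changes. A minimal sketch:)

```sh
# Render with the monitoring.coreos.com/v1 capability injected and the
# serviceMonitor enabled only for this run
helm template . \
  --api-versions monitoring.coreos.com/v1 \
  --set serviceMonitor.enabled=true
```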
Installing on test cluster

If you are testing by installing the helm chart on a cluster, you need to ensure that you first install the prometheus-operator CRDs on the cluster, especially the `serviceMonitor` in our case.
Example using minikube cluster
- Download the serviceMonitor CRD from the prometheus-operator repo: https://github.com/prometheus-operator/prometheus-operator/blob/main/example/prometheus-operator-crd-full/monitoring.coreos.com_servicemonitors.yaml
- Run `kubectl apply -f </path/to/serviceMonitor.yaml>` to install the CRD on the cluster (this is so that the condition in the serviceMonitor.yaml template passes).
- In the values.yaml of our chart:
  - Enable the serviceMonitor
  - Uncomment the lines [372-375] and lines [388-392]
  - Remove the empty array placeholders `[]` on lines 371 and 387
- The config for the serviceMonitor should now look like:
```yaml
serviceMonitor:
  service:
    type: ClusterIP
    port: 2020
    targetPort: 2020
    extraPorts:
      - port: 2021
        targetPort: 2021
        protocol: TCP
        name: metrics
  ## When set true then use a ServiceMonitor to configure scraping
  enabled: true
  interval: 30s
  telemetryPath: /api/v1/metrics/prometheus
  labels:
    # app: example-application
    # teamname: example
  timeout: 10s
  relabelings: []
  targetLabels: []
  metricRelabelings: []
  extraEndpoints:
    - port: metrics
      path: /metrics
      interval: 30s
      scrapeTimeout: 10s
      scheme: http
```
- Run `helm install aws-for-fluentbit .` from the chart directory. Helm will install the chart.
- To get the service monitor, run `kubectl get serviceMonitor -n <namespace where chart is deployed>` and you will see the service monitor.
- Run `kubectl describe serviceMonitor <serviceMonitor name from previous output> -n <the namespace where it is installed>` and inspect the output.
- To see the yaml output, run `kubectl get serviceMonitor <serviceMonitor name> -n <namespace of serviceMonitor> -o yaml` and inspect the output.
- Do the same for the `Service` resource (a condensed command sequence is sketched below).
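(Putting the minikube steps together, a condensed sequence might look like the following; the CRD filename, release name, and namespace are assumptions to adjust for your setup.)

```sh
# Install the ServiceMonitor CRD first (assumed local filename)
kubectl apply -f monitoring.coreos.com_servicemonitors.yaml

# Install the chart from the chart directory
helm install aws-for-fluentbit .

# Inspect the ServiceMonitor (default namespace assumed; take the name
# from the get output)
kubectl get servicemonitor -n default
kubectl describe servicemonitor <name> -n default
kubectl get servicemonitor <name> -n default -o yaml

# Do the same for the Service resource
kubectl get service -n default
```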
Thanks!
I ran the dry run again without setting any of the new values, which shows that the current state of the values.yaml and template won't break anyone who doesn't use these new fields (no regressions).
Enabled service to have multiple ports, and the serviceMonitor to monitor multiple endpoints on the service, added relevant configuration examples as comments to values.yaml and updated the README file for the new chart fields.
Issue
#987
Description of changes

- Updated the `serviceMonitor.yaml` template to allow for more service endpoints to be monitored
- Updated the `service.yaml` template to allow for more ports to be exposed on the fluent-bit service
- Updated the `values.yaml` file to add the new config keys, and comments on how to configure them
- Updated the `README` file of the chart to add configuration notes for the new config fields

Checklist
- Updated documentation (`README.md` for modified charts)
- Incremented the `version` in `Chart.yaml` for the modified chart(s)

Testing
After making the changes to the chart, I ran `helm template .` in the chart directory and inspected the output to ensure it is producing the desired effect. The full pre-requisites and testing procedure are as described in the review comment above.
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.