
EN_6_svc_policy

myf5 edited this page Dec 5, 2021 · 2 revisions

k8s Service Egress Policy Use Case

A namespace/project-level policy applies to every microservice under that namespace/project. In practice, a namespace/project hosts multiple relatively autonomous microservices, each with its own egress requirements: for example, microservice A needs to access the X API, microservice B needs to access the Y API, and microservice C needs no access to resources outside the cluster. In such a scenario, policy must be set per k8s Service.
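The per-service model pairs two custom resources: an ExternalService describing the destination, and a ServiceEgressRule binding a k8s Service to it. As a hedged sketch for the hypothetical microservice B above (all names and the Y API address are invented for illustration), a rule allowing only B to reach its API could look like:

```yaml
# Hypothetical sketch: names and addresses are invented for illustration.
kind: ExternalService
apiVersion: kubeovn.io/v1alpha1
metadata:
  name: y-api
  namespace: project-1
spec:
  addresses:
    - y-api.example.com
  ports:
    - name: tcp-443
      protocol: TCP
      port: "443"
---
apiVersion: kubeovn.io/v1alpha1
kind: ServiceEgressRule
metadata:
  name: svc-b-to-y-api
  namespace: project-1
spec:
  service: svc-b
  action: accept-decisively
  externalServices:
    - y-api
```

Microservice C simply gets no ServiceEgressRule, so it gains no egress access.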


Policy setting

  1. First, create the external service to be accessed. The ExternalService must be created in the corresponding namespace. It supports combinations of IP or domain name, protocol, and port, as well as bare IP addresses with no ports, which match any protocol:
kind: ExternalService
apiVersion: kubeovn.io/v1alpha1
metadata:
  name: ns600-linjing-io
  namespace: ns-600
spec:
  addresses:
    - linjing.io
  ports:
    - name: tcp-80
      protocol: TCP
      port: "80"
  2. Next, create a rule for the k8s Service myapp that allows it to access linjing.io, and enable logging for the rule (the global ConfigMap must be configured to allow local or remote logging):
apiVersion: kubeovn.io/v1alpha1
kind: ServiceEgressRule
metadata:
  name: myapp-to-linjing-io
  namespace: ns-600
spec:
  service: myapp
  action: accept-decisively
  externalServices:
    - ns600-linjing-io
  logging: true
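As noted in step 1, an ExternalService can also list a bare IP address with no ports, in which case any protocol to that address is matched. A hedged sketch of that variant (the resource name and IP are invented for illustration):

```yaml
# Hypothetical sketch: an IP-only ExternalService with no ports section,
# matching traffic of any protocol to 203.0.113.10.
kind: ExternalService
apiVersion: kubeovn.io/v1alpha1
metadata:
  name: ns600-ip-only
  namespace: ns-600
spec:
  addresses:
    - 203.0.113.10
```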

Verify

There are currently two workloads running under ns-600:

[root@ovnmaster ~]# kubectl get pod -n ns-600
NAME                     READY   STATUS    RESTARTS   AGE
myapp-648bc84478-d6sv2   1/1     Running   0          23d
tmp-shell-ns600          1/1     Running   1          176d

Enter the myapp container; linjing.io is reachable:

[root@ovnmaster ~]# kubectl exec -it myapp-648bc84478-d6sv2 -n ns-600 -- sh

~ # curl -I linjing.io
HTTP/1.1 200 OK
Date: Tue, 30 Nov 2021 01:49:04 GMT
Content-Type: text/html; charset=utf-8
Connection: keep-alive
last-modified: Sat, 18 Sep 2021 05:31:50 GMT
access-control-allow-origin: *
expires: Tue, 30 Nov 2021 01:59:04 GMT
cache-control: max-age=600
x-proxy-cache: MISS
x-github-request-id: 8EC0:6FAF:CC1A72:D7CD0A:61A5830F
via: 1.1 varnish
age: 0
x-served-by: cache-tyo11931-TYO
x-cache: MISS
x-cache-hits: 0
x-timer: S1638236944.975011,VS0,VE156
vary: Accept-Encoding
x-fastly-request-id: e9ccc4fdf08a4e83c6bc9bbec67515a5a4e8ce7f
CF-Cache-Status: DYNAMIC
Report-To: {"endpoints":[{"url":"https:\/\/a.nel.cloudflare.com\/report\/v3?s=kfd1PJRWufdfGilSz4krnmA955bnuSJz%2FUD1rMSfbchq44BUllZLzS7H9R5l6r7Lo%2Byc158ybPJRvG4EBPApU1WI59Q1JY9%2FtVfg5fwYr9GyjefoD2%2BMRnHWyPUQ"}],"group":"cf-nel","max_age":604800}
NEL: {"success_fraction":0,"report_to":"cf-nel","max_age":604800}
Server: cloudflare
CF-RAY: 6b606ac3cf7880bf-NRT

Enter the tmp-shell-ns600 container; linjing.io is not reachable (the request produces no response):

[root@ovnmaster ~]# kubectl exec -it tmp-shell-ns600 -n ns-600 -- sh
~ # curl -I linjing.io
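Since a blocked request simply hangs and curl prints nothing, it helps to bound the probe with a timeout and map the exit status to a verdict. A small hedged shell sketch (the pod name is taken from the output above; the helper itself is invented for illustration):

```shell
# Hypothetical helper: classify an egress probe as ALLOWED or BLOCKED
# based on the probe command's exit status. A real probe against the
# cluster would be something like:
#   kubectl exec -n ns-600 tmp-shell-ns600 -- curl -sI --max-time 5 linjing.io
check_egress() {
  if "$@" >/dev/null 2>&1; then
    echo "ALLOWED"
  else
    echo "BLOCKED"
  fi
}

# Stub demonstration (true/false stand in for the real probe command):
check_egress true    # prints ALLOWED
check_egress false   # prints BLOCKED
```

The `--max-time 5` bound keeps a dropped connection from hanging the probe indefinitely.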

Scale out myapp, enter the new pod myapp-648bc84478-qzjrl, and confirm that it can also access linjing.io:

[root@ovnmaster ~]# kubectl get pod -n ns-600
NAME                     READY   STATUS    RESTARTS   AGE
myapp-648bc84478-d6sv2   1/1     Running   0          23d
myapp-648bc84478-qzjrl   1/1     Running   0          6s
tmp-shell-ns600          1/1     Running   1          176d
[root@ovnmaster ~]# kubectl exec -it myapp-648bc84478-qzjrl -n ns-600 -- sh

~ # curl -I linjing.io
HTTP/1.1 200 OK
Date: Tue, 30 Nov 2021 01:54:49 GMT
Content-Type: text/html; charset=utf-8
Connection: keep-alive
last-modified: Sat, 18 Sep 2021 05:31:50 GMT
access-control-allow-origin: *
expires: Tue, 30 Nov 2021 01:59:04 GMT
cache-control: max-age=600
x-proxy-cache: MISS
x-github-request-id: 8EC0:6FAF:CC1A72:D7CD0A:61A5830F
via: 1.1 varnish
age: 0
x-served-by: cache-tyo11978-TYO
x-cache: MISS
x-cache-hits: 0
x-timer: S1638237289.160179,VS0,VE157
vary: Accept-Encoding
x-fastly-request-id: bb4452a1491ed4a1ed8ae199b510be4fe2e22c3e
CF-Cache-Status: DYNAMIC
Report-To: {"endpoints":[{"url":"https:\/\/a.nel.cloudflare.com\/report\/v3?s=P3AkBo1nLsiNTUsDw4L%2BafxG8FzLNnN2kr91EfHKOGVRxBS2IKLq6DhcEitSBoFqc7Zr7bavzh%2Bp%2BhqfTVvgNheCJYx9vBWHZKplRPv1YOEqTonpuiQnrtSKt6A2"}],"group":"cf-nel","max_age":604800}
NEL: {"success_fraction":0,"report_to":"cf-nel","max_age":604800}
Server: cloudflare
CF-RAY: 6b6073312a0d809b-NRT

Next Step

Tenants strict isolation mode