How to expose the cloud native monitoring data query address to the public network

2022-06-24 04:55:00 Nieweixing

Prometheus is now the mainstream solution for monitoring Kubernetes, and the major cloud vendors that offer managed Kubernetes services have also launched managed Prometheus services to make cluster monitoring easier. Tencent Cloud's offering is the Tencent cloud native monitoring service (Tencent Prometheus Service, TPS), hereinafter referred to as TPS. TPS deploys Prometheus to a backend elastic cluster with one click, after which it can monitor your TKE cluster.

The TPS backend is built on the Thanos framework. To make it easier to query monitoring metrics, it provides a Thanos Query web address for querying monitoring data, but this address is only reachable from the intranet. In many cases a PC client cannot reach the VPC intranet address directly, which makes querying data very inconvenient. This article describes how to expose the TPS data query address to the public network.

The approach is actually very simple: deploy an nginx reverse proxy in a TKE cluster that sits in the same VPC as the TPS instance, then expose nginx through a NodePort or public CLB type Service. The detailed configuration steps are as follows.

1. Find the data query address of the cloud native monitoring instance

The data query address of the cloud native monitoring instance can be found on the instance's basic information page; the Prometheus data query address is the Thanos Query address.
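Before configuring the proxy, it is worth verifying from a machine inside the same VPC that the intranet query address responds. A minimal check, assuming the address is http://10.0.0.234:9090 (the example address used in the nginx configuration below):

# Run from a node or pod inside the same VPC as the TPS instance
# 10.0.0.234:9090 is the example intranet Thanos Query address used in this article
curl -s "http://10.0.0.234:9090/-/healthy"
# Query a sample metric through the Prometheus-compatible HTTP API
curl -s "http://10.0.0.234:9090/api/v1/query?query=up" | head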

2. Create the nginx forwarding configuration

Create a ConfigMap containing the default.conf file used to forward requests to the TPS Thanos Query address. Note that proxy_pass must be set to your actual intranet query address. This ConfigMap will be mounted into the pod in the next step.

apiVersion: v1
data:
  default.conf: |-
    server {
        listen       80;
        listen  [::]:80;
        server_name  localhost;

        #access_log  /var/log/nginx/host.access.log  main;

        location / {
            proxy_pass http://10.0.0.234:9090/;
            proxy_set_header X-Real-IP $remote_addr;
            client_max_body_size    1000m;
        }

        #error_page  404              /404.html;

        # redirect server error pages to the static page /50x.html
  
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   /usr/share/nginx/html;
        }
    }
kind: ConfigMap
metadata:
  name: nginx-cm
  namespace: monitor
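A quick way to apply and verify the ConfigMap (the manifest file name nginx-cm.yaml below is just an example):

# Save the manifest above as nginx-cm.yaml, then apply it
kubectl apply -f nginx-cm.yaml
# Confirm the ConfigMap exists and contains default.conf
kubectl -n monitor get configmap nginx-cm -o yaml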

3. Create the nginx workload

Create an nginx workload, then mount the ConfigMap from the previous step into the container as the default.conf file.

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: tps-thanos-nginx
    qcloud-app: tps-thanos-nginx
  name: tps-thanos-nginx
  namespace: monitor
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: tps-thanos-nginx
      qcloud-app: tps-thanos-nginx
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
    type: RollingUpdate
  template:
    metadata:
      labels:
        k8s-app: tps-thanos-nginx
        qcloud-app: tps-thanos-nginx
    spec:
      containers:
      - image: nginx
        imagePullPolicy: Always
        name: tps-thanos-nginx
        resources:
          limits:
            cpu: 500m
            memory: 1Gi
          requests:
            cpu: 250m
            memory: 256Mi
        securityContext:
          privileged: false
        volumeMounts:
        - mountPath: /etc/nginx/conf.d/default.conf
          name: vol
          subPath: default.conf
      dnsPolicy: ClusterFirst
      imagePullSecrets:
      - name: qcloudregistrykey
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
      - configMap:
          defaultMode: 420
          name: nginx-cm
        name: vol
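To roll out the Deployment and confirm that nginx picked up the mounted configuration (the manifest file name is an example):

kubectl apply -f tps-thanos-nginx-deploy.yaml
kubectl -n monitor rollout status deployment/tps-thanos-nginx
kubectl -n monitor get pods -l k8s-app=tps-thanos-nginx
# Optionally check that the mounted default.conf is in place
kubectl -n monitor exec deploy/tps-thanos-nginx -- cat /etc/nginx/conf.d/default.conf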

4. Create a Service bound to the workload

Create a Service bound to the backend workload. If you do not want to pay for a public CLB, you can use a NodePort type Service instead and access it through a node's public IP and the NodePort.

apiVersion: v1
kind: Service
metadata:
  name: tps-thanos-nginx
  namespace: monitor
spec:
  externalTrafficPolicy: Cluster
  ports:
  - name: 80-80-tcp
    nodePort: 31642
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    k8s-app: tps-thanos-nginx
    qcloud-app: tps-thanos-nginx
  sessionAffinity: None
  type: LoadBalancer
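After applying the Service, the public entry point can be read from kubectl. A minimal sketch (the manifest file name is an example):

kubectl apply -f tps-thanos-nginx-svc.yaml
# For a LoadBalancer Service, EXTERNAL-IP is the public CLB VIP
# For a NodePort Service, note the node port after the colon (e.g. 80:31642/TCP)
kubectl -n monitor get svc tps-thanos-nginx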

5. Expose a domain name with Ingress

If nginx-ingress is deployed in the cluster, you can also use nginx-ingress to expose a domain name for access, or use the load-balancer type Ingress provided by TKE to expose the domain name.

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: ingress
  name: tps-thanos-ingress
  namespace: monitor
spec:
  rules:
  - host: tps-thanos.tke.niewx.cn
    http:
      paths:
      - backend:
          serviceName: tps-thanos-nginx
          servicePort: 80
        path: /
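Apply the Ingress and check that it has been assigned an address; the domain name must resolve to that address (the manifest file name is an example):

kubectl apply -f tps-thanos-ingress.yaml
kubectl -n monitor get ingress tps-thanos-ingress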

6. Access the query address from the public network

Enter the public CLB VIP in the browser to access the TPS Prometheus data query address.

Enter a node's public IP plus the NodePort in the browser to access the TPS Prometheus data query address.

Access the TPS Prometheus data query address in the browser through the domain name.

We can then query the collected Prometheus monitoring data through the browser UI.
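The same checks can be made from the command line. A minimal sketch, assuming a CLB VIP of 1.2.3.4 and a node public IP of 5.6.7.8 (both placeholders), the NodePort 31642 from the Service above, and the domain name from the Ingress above:

# Via the public CLB VIP (placeholder address)
curl -s "http://1.2.3.4/api/v1/query?query=up" | head
# Via a node public IP and the NodePort (placeholder address)
curl -s "http://5.6.7.8:31642/api/v1/query?query=up" | head
# Via the Ingress domain name
curl -s "http://tps-thanos.tke.niewx.cn/api/v1/query?query=up" | head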


Copyright notice
This article was written by [Nieweixing]. Please include a link to the original when reposting. Thanks.
https://yzsam.com/2021/08/20210831153756006O.html