I have the following configuration on Kubernetes:
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: {{ include "e-lucene-eva.fullname" . }}
  labels:
    app.kubernetes.io/name: {{ include "e-lucene-eva.name" . }}
    helm.sh/chart: {{ include "e-lucene-eva.chart" . }}
    app.kubernetes.io/instance: {{ .Release.Name }}
    app.kubernetes.io/managed-by: {{ .Release.Service }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app.kubernetes.io/name: {{ include "e-lucene-eva.name" . }}
      app.kubernetes.io/instance: {{ .Release.Name }}
  template:
    metadata:
      labels:
        app.kubernetes.io/name: {{ include "e-lucene-eva.name" . }}
        app.kubernetes.io/instance: {{ .Release.Name }}
    spec:
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          ports:
            - name: evaptr
              containerPort: 8089
          resources:
            limits:
              cpu: 150m
              memory: 1528Mi
            requests:
              cpu: 100m
              memory: 664Mi
---
apiVersion: v1
kind: Service
metadata:
  name: st-evabot-adm-backend-service
  namespace: st-evabot
spec:
  ports:
    - targetPort: bck-port
      port: 80
      protocol: TCP
  selector:
    app: evabot
    tier: backend
But when I run kubectl get endpoints:
NAME                    ENDPOINTS                                                    AGE
evabot-db-service       191.255.54.169:27017                                         19h
evabot-lucene-service   191.255.48.148:8089,191.255.54.169:8089,191.255.55.35:8089   13h
Why does one service have three IPs while the other has just one?
Another question: are these IPs used for connections from other Pods?

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  creationTimestamp: null
  generation: 1
  labels:
    app.kubernetes.io/instance: st-evabot-evabot-lucene
    app.kubernetes.io/managed-by: Tiller
    app.kubernetes.io/name: e-lucene-eva
    helm.sh/chart: e-lucene-eva-0.1.1
  name: st-evabot-evabot-lucene-e-lucene-eva
  selfLink: /apis/extensions/v1beta1/namespaces/st-evabot/deployments/st-evabot-evabot-lucene-e-lucene-eva
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app.kubernetes.io/instance: st-evabot-evabot-lucene
      app.kubernetes.io/name: e-lucene-eva
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app.kubernetes.io/instance: st-evabot-evabot-lucene
        app.kubernetes.io/name: e-lucene-eva
    spec:
      containers:
        - image: registry-dgt.eni.com/eni/evabot-lucene:1.0.4
          imagePullPolicy: IfNotPresent
          name: e-lucene-eva
          ports:
            - containerPort: 8089
              name: evaptr
              protocol: TCP
          resources:
            limits:
              cpu: 150m
              memory: 1528Mi
            requests:
              cpu: 100m
              memory: 664Mi
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status: {}

NAME                                       TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE   SELECTOR
evabot-adm-ui-service                      ClusterIP   191.255.46.131   <none>        3031/TCP       21h   app=evabot,tier=frontend
evabot-lucene-service                      ClusterIP   191.255.46.28    <none>        8089/TCP       2h    app=evabot
st-evabot-adm-backend-service              NodePort    191.255.45.200   <none>        80:31971/TCP   8m    app=evabot,tier=backend
st-evabot-db-service                       ClusterIP   191.255.47.200   <none>        27017/TCP      31m   app=mongo-proxy
st-evabot-evabot-db-mongodb-replicaset     ClusterIP   None             <none>        27017/TCP      3d    app=mongodb-replicaset,release=st-evabot-evabot-db
st-evabot-fe-evabot-web-apache-webserver   ClusterIP   191.255.45.37    <none>        80/TCP         1d    app=apache-webserver,release=st-evabot-fe-evabot-web
tiller-deploy                              ClusterIP   191.255.44.63    <none>        44134/TCP      6d    app=helm,name=tiller
Answer
I could not find either the app=evabot or the tier=frontend label in the provided Deployment manifest, so the services evabot-db-service and evabot-lucene-service are exposing different Deployments (Pods). However, you can easily retrieve information about all cluster components that carry the labels app=evabot or tier=frontend:
kubectl get all -l app=evabot
kubectl get all -l tier=frontend
In general, an Endpoint is the final destination of your request: the Pod IP address and port that a Service routes network traffic to once a connection is established. Endpoints can be discovered automatically by Kubernetes through label selectors, or managed manually.
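To illustrate the manually managed case, here is a hedged sketch (the name external-db, the IP, and the port are hypothetical, not taken from your cluster): a Service defined without a selector gets no auto-discovered Endpoints, so you supply an Endpoints object with the same name yourself.

```yaml
# Hypothetical Service without a selector; Kubernetes will NOT
# auto-create an Endpoints object for it.
apiVersion: v1
kind: Service
metadata:
  name: external-db
spec:
  ports:
    - port: 27017
      protocol: TCP
---
# Manually managed Endpoints with the SAME name as the Service;
# traffic sent to external-db:27017 is routed to these addresses.
apiVersion: v1
kind: Endpoints
metadata:
  name: external-db
subsets:
  - addresses:
      - ip: 10.0.0.5
    ports:
      - port: 27017
```

In your case the Endpoints were auto-discovered via selectors, so no such manual object exists.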
I assume that for the evabot-lucene-service Service in your example, three Pods were created by a ReplicaSet and are serving requests through their Endpoints:
evabot-lucene-service   191.255.48.148:8089,191.255.54.169:8089,191.255.55.35:8089
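This is consistent with a replica count of three somewhere behind that Service. As a sketch only (your question does not show the chart's values.yaml, so this value is an assumption), the Helm chart would produce three Pods, and therefore three Endpoint IPs, if it were installed with:

```yaml
# values.yaml (assumed, not shown in the question):
# .Values.replicaCount feeds the Deployment's spec.replicas,
# so three replicas yield three Pod IPs in the Endpoints list.
replicaCount: 3
```

Each Pod registers its own IP:port pair, which is exactly what kubectl get endpoints shows.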
For the evabot-db-service Service, only one Pod backs the Endpoint:
evabot-db-service 191.255.54.169:27017
In case of any doubt or question, write a comment below this answer.
Attribution
Source : Link , Question Author : T-student , Answer Author : Nick_Kh