Node Selector
- Select a node using the labels assigned to worker nodes
- Setting a node label
kubectl label nodes <노드 이름> <레이블 키>=<레이블 값>
kubectl label nodes node1.example.com gpu=true
kubectl get nodes -L gpu
# kubectl label node node{1,2}.example.com gpu=true
node/node1.example.com not labeled
node/node2.example.com not labeled
# kubectl get node -L gpu
NAME                 STATUS   ROLES           AGE   VERSION   GPU
master.example.com   Ready    control-plane   63d   v1.25.4
node1.example.com    Ready    <none>          63d   v1.25.4   true
node2.example.com    Ready    <none>          63d   v1.25.4   true
tensorflow-gpu.yaml creates the pod on a node whose gpu label is set to true
# cat tensorflow-gpu.yaml
apiVersion: v1
kind: Pod
metadata:
  name: tensorflow-gpu
spec:
  containers:
  - name: tensorflow
    image: tensorflow/tensorflow:nightly-jupyter
    ports:
    - containerPort: 8888
      protocol: TCP
  nodeSelector:
    gpu: "true"
# kubectl apply -f tensorflow-gpu.yaml
pod/tensorflow-gpu created
# kubectl get pods -o wide
NAME             READY   STATUS    RESTARTS   AGE     IP          NODE                NOMINATED NODE   READINESS GATES
tensorflow-gpu   1/1     Running   0          2m23s   10.44.0.1   node2.example.com   <none>           <none>
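Note that nodeSelector is an AND match: every key/value pair listed must be present on the node. A minimal sketch requiring two labels (the environment=prod label here is hypothetical, not part of the example cluster above):

```yaml
# Hypothetical sketch: this pod is only schedulable on nodes that
# carry BOTH gpu=true AND environment=prod (environment is made up here).
apiVersion: v1
kind: Pod
metadata:
  name: gpu-prod-pod
spec:
  containers:
  - name: app
    image: busybox
    args: ["sleep", "99999"]
  nodeSelector:
    gpu: "true"
    environment: "prod"
```

If no node carries both labels, the pod stays Pending until one does.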
Affinity & antiAffinity
- Directs the scheduler to place a pod only on a specific set of nodes
- nodeSelector : the pod is placed only on nodes carrying every label listed in the selector field
- nodeAffinity : steers a pod to run only on specific nodes
- nodeAffinity requirement types
1. Hard requirement : requiredDuringSchedulingIgnoredDuringExecution
→ the matching label must exist for the pod to run
2. Soft preference : preferredDuringSchedulingIgnoredDuringExecution
→ matching nodes are given a weight, and the pod runs where the total weight is highest
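Unlike nodeSelector's exact key/value pairs, nodeAffinity matchExpressions support the operators In, NotIn, Exists, DoesNotExist, Gt, and Lt. A hedged sketch (the cpu-count label is hypothetical):

```yaml
# Sketch only: requires a node whose (hypothetical) cpu-count label
# is numerically greater than 8, and which is NOT labeled gpu=false.
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - {key: cpu-count, operator: Gt, values: ["8"]}
        - {key: gpu, operator: NotIn, values: ["false"]}
```

Multiple matchExpressions inside one term are ANDed; multiple nodeSelectorTerms entries are ORed.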
# kubectl label node node2.example.com disktype=ssd
node/node2.example.com labeled
# kubectl get node -L gpu,disktype
NAME                 STATUS   ROLES           AGE   VERSION   GPU    DISKTYPE
master.example.com   Ready    control-plane   63d   v1.25.4
node1.example.com    Ready    <none>          63d   v1.25.4   true
node2.example.com    Ready    <none>          63d   v1.25.4   true   ssd
A yaml file that requires the disktype label to exist and prefers the node with the highest total weight
→ a node labeled both gpu=true and disktype=ssd gains weight 10
# cat tensorflow-gpu-ssd.yaml
apiVersion: v1
kind: Pod
metadata:
  name: tensorflow-gpu-ssd
spec:
  containers:
  - name: tensorflow
    image: tensorflow/tensorflow:nightly-jupyter
    ports:
    - containerPort: 8888
      protocol: TCP
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - {key: disktype, operator: Exists}
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 10
        preference:
          matchExpressions:
          - {key: gpu, operator: In, values: ["true"]}
          - {key: disktype, operator: In, values: ["ssd"]}
Applying this yaml file, you can confirm the pod runs on node2, where both disktype and gpu are set
# kubectl apply -f tensorflow-gpu-ssd.yaml
pod/tensorflow-gpu-ssd created
# kubectl get pods -o wide
NAME                 READY   STATUS    RESTARTS   AGE   IP          NODE                NOMINATED NODE   READINESS GATES
tensorflow-gpu-ssd   1/1     Running   0         15s   10.44.0.1   node2.example.com   <none>           <none>
Deleting the labels created on the nodes
# kubectl label node node{1,2}.example.com disktype-
label "disktype" not found.
node/node1.example.com not labeled
node/node2.example.com unlabeled
# kubectl label node node{1,2}.example.com gpu-
node/node1.example.com unlabeled
node/node2.example.com unlabeled
# kubectl get node -L gpu,disktype
NAME                 STATUS   ROLES           AGE   VERSION   GPU   DISKTYPE
master.example.com   Ready    control-plane   63d   v1.25.4
node1.example.com    Ready    <none>          63d   v1.25.4
node2.example.com    Ready    <none>          63d   v1.25.4
- Pod scheduling
1. podAffinity : place pods closer together
2. podAntiAffinity : place pods farther apart
- podAffinity requirement types
1. Hard requirement : requiredDuringSchedulingIgnoredDuringExecution
2. Soft preference : preferredDuringSchedulingIgnoredDuringExecution
- topologyKey
1. Another criterion, based on node labels, for configuring pod affinity and anti-affinity
2. When scheduling a pod, Kubernetes first selects candidate nodes by matching the pod labels, then
checks the topologyKey field to confirm each node is in the desired topology domain
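The topologyKey does not have to be kubernetes.io/hostname; any node label can serve as the topology domain. A sketch using the well-known topology.kubernetes.io/zone label, which co-locates a pod in the same zone (not necessarily the same node) as the app=backend pod — assuming the cluster nodes carry zone labels, which the example cluster above does not show:

```yaml
# Sketch: "same zone as backend" rather than "same node as backend".
affinity:
  podAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchLabels:
          app: backend
      topologyKey: topology.kubernetes.io/zone
```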
Run a pod labeled app=backend to use with the podAffinity examples
# kubectl run backend -l app=backend --image=busybox -- sleep 9999999
# kubectl get pods -o wide
NAME      READY   STATUS    RESTARTS   AGE   IP          NODE                NOMINATED NODE   READINESS GATES
backend   1/1     Running   0          94s   10.44.0.1   node2.example.com   <none>           <none>
A yaml file that, via podAffinity, creates pods on the node where the pod labeled app: backend is running
# cat pod-affinity.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 5
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      affinity:
        podAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: backend
            topologyKey: kubernetes.io/hostname
      containers:
      - name: main
        image: busybox
        args:
        - sleep
        - "99999"
# kubectl apply -f pod-affinity.yaml
deployment.apps/frontend created
# kubectl get pods -o wide
NAME                        READY   STATUS    RESTARTS   AGE     IP          NODE                NOMINATED NODE   READINESS GATES
backend                     1/1     Running   0          2m18s   10.44.0.1   node2.example.com   <none>           <none>
frontend-549897d58d-bjgrq   1/1     Running   0          34s     10.44.0.3   node2.example.com   <none>           <none>
frontend-549897d58d-mfxpm   1/1     Running   0          34s     10.44.0.4   node2.example.com   <none>           <none>
frontend-549897d58d-pgbpn   1/1     Running   0          34s     10.44.0.6   node2.example.com   <none>           <none>
frontend-549897d58d-t8f9c   1/1     Running   0          34s     10.44.0.5   node2.example.com   <none>           <none>
frontend-549897d58d-z75kh   1/1     Running   0          34s     10.44.0.2   node2.example.com   <none>           <none>
In contrast, podAntiAffinity places the pods away from the node where the pod labeled app: backend is running
# cat pod-antiaffinity.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 5
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: backend
            topologyKey: kubernetes.io/hostname
      containers:
      - name: main
        image: busybox
        args:
        - sleep
        - "99999"
# kubectl apply -f pod-antiaffinity.yaml
deployment.apps/frontend created
# kubectl get pods -o wide
NAME                        READY   STATUS    RESTARTS   AGE     IP          NODE                NOMINATED NODE   READINESS GATES
backend                     1/1     Running   0          9m12s   10.44.0.1   node2.example.com   <none>           <none>
frontend-55b5894b44-4fg75   1/1     Running   0          2m10s   10.36.0.3   node1.example.com   <none>           <none>
frontend-55b5894b44-9dnhz   1/1     Running   0          2m10s   10.36.0.4   node1.example.com   <none>           <none>
frontend-55b5894b44-bmfx6   1/1     Running   0          2m10s   10.36.0.2   node1.example.com   <none>           <none>
frontend-55b5894b44-jz7hh   1/1     Running   0          2m10s   10.36.0.6   node1.example.com   <none>           <none>
frontend-55b5894b44-w6hxl   1/1     Running   0          2m10s   10.36.0.5   node1.example.com   <none>           <none>
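A common related pattern is a soft podAntiAffinity against a Deployment's own label, so replicas prefer to spread across nodes but can still co-locate when nodes run out. A hedged sketch (not from the lecture; the weight value is an arbitrary choice):

```yaml
# Sketch: each frontend replica prefers a node not already running
# another frontend pod; because the rule is preferred (not required),
# pods still schedule when there are more replicas than nodes.
affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 100
      podAffinityTerm:
        labelSelector:
          matchLabels:
            app: frontend
        topologyKey: kubernetes.io/hostname
```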
[Reference]
- 따배쿠 Kubernetes lecture series on YouTube