Table of Contents
- Deployment
- Create Deployment
- kubectl apply/create
- Network Port Mapping and Update Deployment
- ReplicaSet
Before this article, you need to read:
- Try kubeadm
https://www.cnblogs.com/whuanle/p/14679590.html
https://www.whuanle.cn/archives/1230
- Deployment tutorial in CKAD certification
https://www.cnblogs.com/whuanle/p/14679922.html
https://www.whuanle.cn/archives/1231
Before this, you need to deploy a master node using kubeadm, and it is best to also deploy a worker node (with kubeadm join). In this article, we will deploy an Nginx instance and learn about Deployment configuration, network mapping, and ReplicaSets.
Deployment
Deployment is a self-healing mechanism provided by Kubernetes that keeps applications running despite container and machine failures.
When we deploy applications using Docker alone, we can use the --restart=always parameter to restart the application after it crashes, for example:
docker run -itd --restart=always -p 666:80 nginx:latest
However, this method can only restart the container and does not have the capability to recover from machine failures.
Kubernetes Deployment is a configuration that directs Kubernetes on how to create and update the application instances you deploy. After creating a Deployment, the Kubernetes master schedules the application to various nodes in the cluster. Kubernetes Deployment offers a unique approach to application management.
There are two ways to create Deployments: one is to create them directly using a command, and the other is through YAML. We will cover both methods later.
Create Deployment
Let’s deploy an Nginx application.
kubectl create deployment nginx --image=nginx:latest
By executing docker ps on the worker node, you can see:
root@instance-2:~# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
fe7433f906a0 nginx "/docker-entrypoint.…" 7 seconds ago Up 6 seconds k8s_nginx_nginx-55649fd747-wdrjj_default_ea41dcc4-94fe-47f9-a804-5b5b1df703e9_0
Get all deployments:
kubectl get deployments
NAME READY UP-TO-DATE AVAILABLE AGE
nginx 1/1 1 1 2m24s
Using kubectl describe deployment nginx provides more detailed information.
Using kubectl get events provides detailed event records from the creation of the Deployment to the start of the container.
Successfully assigned default/nginx-55649fd747-wdrjj to instance-2
Pulling image "nginx:latest"
Successfully pulled image "nginx:latest" in 8.917597859s
Created container nginx
Started container nginx
Created pod: nginx-55649fd747-wdrjj
Scaled up replica set nginx-55649fd747 to 1
We can also create a Deployment using a YAML file:
kubectl apply -f https://k8s.io/examples/controllers/nginx-deployment.yaml
Export YAML
Regardless of the deployment method, we can export a YAML file from an already created Deployment using -o yaml (or -o json for JSON export).
kubectl get deployment nginx -o yaml
# Save to file
# kubectl get deployment nginx -o yaml > mynginx.yaml
Then the terminal will print:
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  creationTimestamp: "2021-04-21T00:37:13Z"
  generation: 1
  labels:
    app: nginx
  name: nginx
  namespace: default
... ...
Let's export the YAML to a file named mynginx.yaml, then delete this Deployment.
kubectl delete deployment nginx
Then we can create a Deployment again using the exported mynginx.yaml.
kubectl apply -f mynginx.yaml
kubectl apply/create
When we create a Deployment, kubectl create and kubectl apply have the same effect, but apply also provides update functionality.
kubectl apply finds the differences among the previous configuration, the provided input, and the current state of the resource to determine how to modify it. The kubectl apply command compares the pushed version with the previous one and applies the changes you made, but does not automatically overwrite any properties you did not specify.
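A minimal sketch of that behavior, assuming the mynginx.yaml exported earlier: change only the image tag in the file and re-run apply, and only that field is updated.
# Edit mynginx.yaml, changing e.g. image: nginx:latest to image: nginx:1.21
kubectl apply -f mynginx.yaml
# Fields you did not touch keep their current values
kubectl describe deployment nginx | grep Image: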
Additionally, there are kubectl replace and kubectl edit. kubectl replace is a destructive update/replacement and can easily cause issues; kubectl edit allows you to update a Deployment in place.
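As a quick illustration of the two commands (the file name is the one exported earlier):
# Opens the live Deployment in your default editor; saving applies the update
kubectl edit deployment nginx
# Replaces the entire object with the file's content; unspecified fields are lost
kubectl replace -f mynginx.yaml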
According to the official Kubernetes documentation, you should always use kubectl apply or kubectl create --save-config to create resources.
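A minimal sketch of the create form with --save-config, which records the applied configuration in an annotation so that later kubectl apply calls can diff against it:
kubectl create deployment nginx --image=nginx:latest --save-config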
Here’s another note about the difference in creating Deployments.
When using create, the command format is:
kubectl create deployment {deployment name} --image={image name}
When using apply, the YAML file needs to specify some information:
kind: Deployment
... ...
metadata:
  name: nginx
... ...
spec:
  containers:
  - image: nginx:latest
Then execute kubectl apply -f xxx.yaml.
One way is kubectl create deployment; the other is kubectl apply -f, specifying kind: Deployment in the YAML.
Sometimes we may not know whether our creation command or YAML is correct. In that case, we can use --dry-run=client, which lets us preview without actually submitting anything.
kubectl create deployment testnginx --image=nginx:latest --dry-run=client
In some Kubernetes certification exams, we may not have time to write YAML step by step, yet we still need customization. In this case, we can use --dry-run=client -o yaml, which does not affect any Deployment and exports the YAML.
kubectl create deployment testnginx --image=nginx:latest --dry-run=client -o yaml
Other Kubernetes objects can also use this method. The format is kubectl {object} {parameters} --dry-run=client -o yaml.
Kubernetes objects/resources include Deployment, Job, Role, Namespace, etc.
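For example, previewing a Namespace the same way (the name testns is just an illustration):
kubectl create namespace testns --dry-run=client -o yaml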
Another point worth mentioning: when we use kubectl get xxx, it does not matter whether the trailing s is included. For example, kubectl get nodes and kubectl get node are the same.
However, from a semantic perspective, we usually use the plural when listing all objects, e.g. kubectl get nodes, and the singular when getting a specific object, e.g. kubectl get node nginx. Similarly with kubectl describe nodes and kubectl describe node nginx. In practice, it makes no difference whether the s is included or not.
Network Port Mapping and Update Deployment
With Docker, when we need to map ports we can use docker ... -p 6666:80. So how do we handle this when deploying containerized applications with a Deployment?
We can look at the https://k8s.io/examples/controllers/nginx-deployment.yaml file,
spec:
  containers:
  - name: nginx
    image: nginx:1.14.2
    ports:
    - containerPort: 80
Here we do not directly use this YAML file; we continue to use the previous YAML file. First, we deploy an Nginx.
kubectl apply -f mynginx.yaml
Then modify the mynginx.yaml file, find image: nginx:latest, and add the port mapping as shown below.
spec:
  containers:
  - image: nginx:latest
    imagePullPolicy: Always
    name: nginx
    ports:
    - containerPort: 80
      protocol: TCP
... ...
Note: the three added lines are:
ports:
- containerPort: 80
  protocol: TCP
Then delete the two lines:
resourceVersion: "109967"
uid: e66201e3-a740-4c1c-85f5-a849db40a0fd
These two fields pin the object to a specific resource version and UID; removing them lets us replace or update the existing nginx Deployment directly.
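After saving the edited file, re-apply it to push the change to the existing Deployment (same file as before):
kubectl apply -f mynginx.yaml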
Check deployment and pod:
kubectl get deployment,pod
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/nginx 1/1 1 1 5m44s
NAME READY STATUS RESTARTS AGE
pod/nginx-55649fd747-9vfrx 1/1 Running 0 5m44s
Then we create the service.
kubectl expose deployment nginx
Or specify the port:
kubectl expose deployment nginx --port=80 --target-port=8000
Check the service:
kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 24h
nginx ClusterIP 10.101.245.225 <none> 80/TCP 5m57s
Check the ports:
kubectl get ep
NAME ENDPOINTS AGE
kubernetes 10.170.0.2:6443 25h
nginx 192.168.56.24:80 17m
Check Pod information:
kubectl describe pod nginx | grep Node:
Node: instance-2/10.170.0.4
Because the pod deployed by the Deployment does not run on the master, it cannot be accessed from the master; we access it from the worker node where the pod runs.
Using kubectl get services or kubectl get ep to query the IP, we can access it directly on the worker node. For example, the IPs queried above can be accessed on the worker like this:
curl 192.168.56.24:80
curl 10.101.245.225:80
ReplicaSet
When we execute kubectl get deployments, the output is:
NAME READY UP-TO-DATE AVAILABLE AGE
nginx 1/1 1 1 38m
- NAME lists the names of the Deployments in the cluster.
- READY shows the number of available replicas of the application, in the form "ready count/desired count".
- UP-TO-DATE shows the number of replicas that have been updated to reach the desired state.
- AVAILABLE shows the number of replicas available to users.
- AGE shows how long the application has been running.
Replicas
According to the cloud-native twelve-factor methodology and its core principles, processes in containerized applications should be stateless, and any persistent data should be stored in backing services. Therefore, one image can start N Docker containers, mapped to ports 801, 802, 803..., all identical; no matter which container we access, the service ultimately provided is the same.
However, if we place these containers on different Nodes and use Kubernetes, we can distribute traffic among multiple instances, i.e., load balancing.
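As a rough Docker-only sketch of this idea (the host ports 801-803 are arbitrary):
# Three identical stateless containers from the same image
docker run -itd -p 801:80 nginx:latest
docker run -itd -p 802:80 nginx:latest
docker run -itd -p 803:80 nginx:latest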
In a Deployment, you can set the number of replicas using the .spec.replicas field in the YAML file or with the --replicas= command parameter.
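For example, in the exported YAML the field sits at the top level of spec (the value 3 here is just an illustration):
spec:
  replicas: 3
  ... ...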
ReplicaSet
“The purpose of a ReplicaSet is to maintain a stable set of Pod replicas that are running at any given time. Therefore, it is often used to ensure the availability of a specified number of identical Pods.”
Interested readers can check the documentation: https://kubernetes.io/zh/docs/concepts/workloads/controllers/replicaset/
Earlier, we created the nginx Deployment; next we will modify its replica count.
kubectl scale deployment nginx --replicas=3
Then wait a few seconds and execute kubectl get deployments to see the result.
NAME READY UP-TO-DATE AVAILABLE AGE
nginx 3/3 3 3 3h15m
Executing kubectl get pods -o wide outputs pod information.
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-581 1/1 Running 0 3h11m 192.168.56.24 instance-2 <none> <none>
nginx-582 1/1 Running 0 3m30s 192.168.56.25 instance-2 <none> <none>
nginx-583 1/1 Running 0 3m30s 192.168.56.26 instance-2 <none> <none>
# Note: The author deleted part of the names
You can see that all these pods were scheduled to the instance-2 node, because I only have one worker node. If I add more node servers, Kubernetes will automatically distribute the pods across different nodes. The output also shows that each pod has its own IP address.
When we use kubectl delete xxx to delete a pod, the Deployment automatically maintains the three replicas, so it will start a new pod.
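For example (the pod name below is illustrative; use one from your own kubectl get pods output):
kubectl delete pod nginx-55649fd747-9vfrx
# A few seconds later a replacement pod with a new suffix appears
kubectl get pods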
Executing kubectl get ep shows the endpoints exposed by the different pods.
NAME ENDPOINTS AGE
kubernetes 10.170.0.2:6443 28h
nginx 192.168.56.24:80,192.168.56.25:80,192.168.56.26:80 3h15m