Kubernetes Setup

Introduction

I. Core k8s components

master: the control plane

  • kube-apiserver: the brain of the whole cluster; it receives the user's task manifests, answers the periodic queries from every other component, and periodically reads from and writes to the etcd database
  • kube-scheduler: the scheduling component; it decides which worker node a user's task should run on
  • kube-controller-manager: it tracks the progress and completion of the kubelets' tasks and periodically reports back to kube-apiserver
  • kube-proxy: it maintains the network rules (iptables/IPVS) on each node so that Service traffic is routed to the right Pods

worker: the worker nodes

  • kubelet: responsible for calling the underlying container runtime (runc) to orchestrate containers
  • docker (other container runtimes such as containerd or CRI-O can play this role): responsible for starting containers

II. How the Kubernetes platform works

  1. The user connects to kube-apiserver and issues an instruction to create containers. The instruction should clearly describe the number of containers to create, their resource occupation, network type, storage type, environment variables, and ports (see the manifest sketch after this list).
  2. When kube-apiserver receives the request, it first authenticates the user and checks their permissions; once the request passes, apiserver writes the request's description into etcd.
  3. kube-scheduler periodically queries kube-apiserver, asking whether there are any tasks to schedule.
  4. When kube-apiserver receives kube-scheduler's request, it checks etcd for tasks that need scheduling and, if it finds any, returns them to kube-scheduler.
  5. Once kube-scheduler has a task, it makes another request to kube-apiserver for the current state of every kubelet, picks the worker best suited to run the task, binds that worker's information to the task, and returns the binding to kube-apiserver.
  6. kube-apiserver stores the binding in etcd.
  7. Each kubelet periodically contacts kube-apiserver to report a heartbeat and its current state (the node's CPU and memory capacity and utilization, the number of running containers, and so on), and kube-apiserver stores each kubelet's state in etcd. When a kubelet checks in, kube-apiserver also checks whether any task is pending for that kubelet; if so, it sends the task description to the kubelet.
  8. On receiving a task, the kubelet calls docker to run the containers according to the task description.
  9. When the kubelet has finished the tasks sent to its node, it reports back to kube-apiserver.
  10. kube-apiserver stores the report in etcd.
  11. kube-controller-manager periodically queries kube-apiserver, asking whether there are any tasks.
  12. kube-apiserver sends kube-controller-manager both the task descriptions the user submitted and the completion reports from the kubelets.
  13. kube-controller-manager compares the two and checks for anomalies; if there are none, the task is confirmed complete, otherwise the unfinished work is returned to kube-apiserver.
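
As a concrete illustration of the "task description" in step 1, here is a minimal sketch of a Pod manifest a user might submit; the name, image, and values are illustrative, not taken from the original:

cat > nginx-demo.yaml << EOF
apiVersion: v1
kind: Pod
metadata:
  name: nginx-demo             # illustrative name
spec:
  containers:
  - name: nginx
    image: nginx:latest        # which container image to run
    ports:
    - containerPort: 80        # the port the container listens on
    env:
    - name: DEMO_ENV           # an environment variable
      value: "demo"
    resources:
      limits:
        cpu: "250m"            # resource occupation, as described in step 1
        memory: "128Mi"
EOF
kubectl apply -f nginx-demo.yaml   # hand the description to kube-apiserver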

Installation

I. Preparation

Prepare two Linux virtual machines with their yum repositories and IP addresses configured. Unless a step says otherwise, run steps 1 through 10 on both machines.

1. Set the hostname (run the first command on the master, the second on the node)

hostnamectl set-hostname master
hostnamectl set-hostname node

2. Edit the hosts file

vi /etc/hosts

Add two resolution entries:

192.168.110.140 master
192.168.110.150 node

3. Flush the firewall rules

iptables -F
iptables -t nat -F
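
iptables -F only clears the rules currently loaded; if firewalld is running, the rules will come back. A hedged extra step, assuming a CentOS-style system with firewalld present:

systemctl disable --now firewalld   # stop firewalld and keep it off across reboots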

4. Disable SELinux

setenforce 0

Make the change permanent:

sed -i '/^SELINUX=/c\SELINUX=disabled' /etc/selinux/config

5. Disable the swap partition

Comment out the line that contains swap:

vi  /etc/fstab
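
If you prefer a non-interactive edit to vi, a one-liner like this should comment out the swap entry (a sketch; check /etc/fstab afterwards):

sed -i '/\sswap\s/ s/^#*/#/' /etc/fstab   # prefix the swap line with '#'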

Turn swap off for the running session:

swapoff -a

6. Adjust the kernel parameters

cat > /etc/sysctl.conf << EOF
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF

Then run sysctl -p:

[root@localhost ~]# sysctl -p
net.ipv4.ip_forward = 1
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-ip6tables: No such file or directory

Fix (the errors mean the br_netfilter kernel module is not loaded yet):

yum install -y iptables

modprobe br_netfilter

cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- br_netfilter
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF

modprobe br_netfilter
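
The modules file above is written but never executed in the original. To load all of the listed modules right away and confirm that sysctl -p now succeeds, something like the following should work; note that on kernels 4.19 and newer nf_conntrack_ipv4 has been renamed to nf_conntrack, so that one modprobe may fail harmlessly:

chmod +x /etc/sysconfig/modules/ipvs.modules   # make the module script executable
bash /etc/sysconfig/modules/ipvs.modules       # load br_netfilter and the ipvs modules now
sysctl -p                                      # re-apply; the bridge errors should be gone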

7. Install docker and switch to mirror sources

Install the necessary tools:

yum install -y  device-mapper-persistent-data lvm2 wget vim bash-completion
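
The title of this step mentions installing docker, but the original does not show the install command itself. On a CentOS system using the Aliyun mirror it would typically look like the following (an assumption about the author's environment; skip it if docker is already present):

yum install -y yum-utils                       # provides yum-config-manager
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum install -y docker-ce docker-ce-cli containerd.io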

Configure kubectl command completion:

echo 'source <(kubectl completion bash)' >> /etc/profile
source /etc/profile
exit

After logging back in, type kubectl describe and press Tab to confirm that completion works (kubectl itself is installed in step 10, so completion only takes effect from then on).

8. Edit the docker configuration file

cat > /etc/docker/daemon.json << EOF
{
    "exec-opts": ["native.cgroupdriver=systemd"],
    "registry-mirrors": [
    "https://dh.b52m.cn"
  ],
  "bip": "192.168.100.1/24",
  "default-address-pools": [
    {
      "base": "192.168.100.0/16",
      "size": 24
    }
  ]
}
EOF

Note: the default-address-pools base 192.168.100.0/16 actually spans 192.168.0.0–192.168.255.255, which overlaps the node addresses used in this guide (192.168.110.x); if that causes conflicts in your environment, pick a non-overlapping base such as 172.31.0.0/16.

Restart docker:

systemctl daemon-reload
systemctl restart docker
systemctl enable docker
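
To confirm that the new daemon.json took effect, you can check the cgroup driver docker now reports:

docker info 2>/dev/null | grep -i 'cgroup driver'   # expect: Cgroup Driver: systemd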

9. Edit the containerd configuration file

Generate containerd's default configuration file:

containerd config default > /etc/containerd/config.toml

Modify the default configuration. The pipeline below regenerates the defaults, swaps the pause image for the author's mirror, enables the systemd cgroup driver, adds registry mirrors for docker.io and registry.k8s.io, and overwrites the file generated above:

containerd config default | \
sed -e 's|registry\.k8s\.io/pause:[0-9.]\+|k8s\.b52m\.cn\/pause:3.9|g' \
    -e 's/SystemdCgroup = .*/SystemdCgroup = true/' \
    -e 's/\[plugins."io.containerd.grpc.v1.cri".registry.mirrors\]/&\n        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]\n          endpoint = ["https:\/\/dh\.b52m\.cn"]\n/' \
    -e 's/\[plugins."io.containerd.grpc.v1.cri".registry.mirrors\]/&\n        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."registry.k8s.io"]\n          endpoint = ["https:\/\/k8s\.b52m\.cn"]\n/' \
    -e '/^\s*$/d' | \
sudo tee /etc/containerd/config.toml

Restart containerd:

systemctl restart containerd
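
A quick sanity check that the pipeline wrote what was intended:

grep -E 'SystemdCgroup|b52m' /etc/containerd/config.toml   # expect SystemdCgroup = true plus the mirror endpoints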

10. Install kubeadm, kubelet, and kubectl

Add the kubernetes package repository:

cat <<EOF | tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.30/rpm/
enabled=1
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.30/rpm/repodata/repomd.xml.key
EOF

Install kubeadm, kubelet, and kubectl:

yum install -y kubelet-1.30.0 kubeadm-1.30.0 kubectl-1.30.0 --disableexcludes=kubernetes

Enable kubelet at boot and start it now (until kubeadm init or kubeadm join runs, kubelet restarts in a loop waiting for its configuration; that is expected):

systemctl enable kubelet --now

Point crictl at the containerd runtime:

crictl config runtime-endpoint unix:///var/run/containerd/containerd.sock
crictl config image-endpoint unix:///var/run/containerd/containerd.sock
systemctl restart containerd

Verify that images can be pulled:

crictl pull nginx:latest
crictl images

11. Initialize the master (master node only)

Check the kubeadm version:

kubeadm version

Then initialize the control plane:

kubeadm init --kubernetes-version=v1.30.0 --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=192.168.110.140 --image-repository=registry.aliyuncs.com/google_containers

Note: --apiserver-advertise-address must be set to your own master's IP.

  • --kubernetes-version: the kubernetes version to install
  • --image-repository: the registry to pull the control-plane images from
  • --pod-network-cidr: the Pod IP range; 10.244.0.0/16 is the range flannel conventionally uses

By default, kube-apiserver, kube-scheduler, kube-controller-manager, kube-proxy, and etcd are all installed as containers.


kubeadm init finishes with a message that the control plane initialized successfully, followed by a join command. Once you see it, run the commands below:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
export KUBECONFIG=/etc/kubernetes/admin.conf

If initialization fails, you can reset the state and initialize again:

kubeadm reset -f
rm -rf /etc/kubernetes/*
rm -rf /var/lib/kubelet/*
rm -rf /var/lib/etcd/*

Check the master node:

kubectl get nodes
At this point the master typically shows a NotReady status, because no Pod network plugin has been installed yet.

12. Join the node to the master

Generate the join command on the master (the token it prints is valid for 24 hours by default):

kubeadm token create --print-join-command

Paste the generated command into the node and run it:

kubeadm join 192.168.110.140:6443 --token q450zh.7jrd4zoykmivzcw6 --discovery-token-ca-cert-hash sha256:8dbee2041707e3b0c3a6baaeaaaba6fc0388223b6b2727dec857cf5f3a473bc0

Query from the master node:

kubectl get nodes

Download and load the network plugin (all nodes):

wget https://down.b52m.cn/d/test/release-v3.24.2.tgz -O release-v3.24.2.tgz

Unpack the archive:

mkdir /calico
mv release-v3.24.2.tgz /calico/
cd /calico/
tar -xvf release-v3.24.2.tgz 
cd release-v3.24.2/images

Import the image tarballs into containerd's k8s.io namespace:

ctr -n k8s.io images import calico-cni.tar 
ctr -n k8s.io images import calico-node.tar
ctr -n k8s.io images import calico-dikastes.tar
ctr -n k8s.io images import calico-pod2daemon.tar
ctr -n k8s.io images import calico-flannel-migration-controller.tar
ctr -n k8s.io images import calico-typha.tar
ctr -n k8s.io images import calico-kube-controllers.tar

Deploy the Calico network plugin (master node only):

cd ../manifests
kubectl apply -f calico.yaml

Check all the nodes:

kubectl get nodes
After a short wait, both nodes show a Ready status.
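
You can also watch the plugin's pods come up in the kube-system namespace; the nodes flip to Ready once the calico-node pods are Running (an extra check, not in the original):

kubectl get pods -n kube-system   # wait for calico-node and calico-kube-controllers to reach Running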

Managing namespaces

1. List all namespaces in the cluster

kubectl get ns

[root@master ~]# kubectl get ns
NAME              STATUS   AGE
default           Active   7d
kube-node-lease   Active   7d
kube-public       Active   7d
kube-system       Active   7d

  • NAME: the namespace's name
  • STATUS: the namespace's state
  • AGE: how long the namespace has been running

2. View the details of a namespace

kubectl describe ns $ns_name

[root@master ~]# kubectl describe ns default 
Name:         default
Labels:       kubernetes.io/metadata.name=default
Annotations:  <none>
Status:       Active

No resource quota.

No LimitRange resource.

  • No resource quota.: no resource quota has been set on this namespace
  • No LimitRange resource.: no LimitRange has been set on this namespace
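
For contrast, here is a minimal sketch of attaching a quota to a namespace, after which describe no longer reports "No resource quota." (the name and limits are illustrative):

cat > demo-quota.yaml << EOF
apiVersion: v1
kind: ResourceQuota
metadata:
  name: demo-quota          # illustrative name
  namespace: default
spec:
  hard:
    pods: "10"              # at most 10 Pods in the namespace
    requests.cpu: "2"       # total CPU requests capped at 2 cores
    requests.memory: 4Gi    # total memory requests capped at 4 GiB
EOF
kubectl apply -f demo-quota.yaml
kubectl describe ns default   # the quota now appears in the output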

3. View other resources inside a namespace

kubectl get $resource_name -n $ns_name

[root@master ~]# kubectl get pods -n default
NAME      READY   STATUS    RESTARTS      AGE
busybox   1/1     Running   2 (37m ago)   6h1m
centos    1/1     Running   1 (21m ago)   82m

4. Create a namespace

kubectl create ns $ns_name

[root@master ~]# kubectl create ns k8s-ns1
namespace/k8s-ns1 created
[root@master ~]# kubectl get ns
NAME              STATUS   AGE
default           Active   7d
k8s-ns1           Active   6s
kube-node-lease   Active   7d
kube-public       Active   7d
kube-system       Active   7d

5. Delete a namespace (this also deletes every resource inside it)

kubectl delete ns $ns_name

[root@master ~]# kubectl delete ns k8s-ns1
namespace "k8s-ns1" deleted
[root@master ~]# kubectl get ns
NAME              STATUS   AGE
default           Active   7d
kube-node-lease   Active   7d
kube-public       Active   7d
kube-system       Active   7d

Managing pods

1. List pods

kubectl get pods

[root@master ~]# kubectl get pods
NAME      READY   STATUS    RESTARTS      AGE
busybox   1/1     Running   2 (42m ago)   6h5m
centos    1/1     Running   1 (25m ago)   86m

2. Create a pod

kubectl run $podname --image=$image:tag

Use -- to pass the command the container should run on startup; here it simply sleeps after starting:

[root@master ~]# kubectl run busybox --image=busybox:1.28 -- sleep 3600
pod/busybox created
[root@master ~]# kubectl get pods
NAME      READY   STATUS    RESTARTS        AGE
busybox   1/1     Running   0               9s
centos    1/1     Running   2 (7m41s ago)   2d21h
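
To inspect the manifest kubectl run would generate without creating anything, add a client-side dry run (same flags as above):

kubectl run busybox --image=busybox:1.28 --dry-run=client -o yaml -- sleep 3600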

3. View the details of a pod

kubectl describe pod $podname -n $ns_name

[root@master ~]# kubectl describe pod busybox -n default
Name:             busybox
Namespace:        default
Priority:         0
Service Account:  default
Node:             node/192.168.110.150
Start Time:       Fri, 09 May 2025 13:58:36 +0800
Labels:           run=busybox
Annotations:      cni.projectcalico.org/containerID: 22527f331ca5ce4c38a9b26288e421e720574e80b1158ed7165277fc0d9ef757
                  cni.projectcalico.org/podIP: 10.244.167.137/32
                  cni.projectcalico.org/podIPs: 10.244.167.137/32
Status:           Running
IP:               10.244.167.137
IPs:
  IP:  10.244.167.137
Containers:
  busybox:
    Container ID:  containerd://470f7e09c4d9662c1941c1086992f9d58a8c86d2bc45814edafb0caab034df99
    Image:         busybox:1.28
    Image ID:      docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47
    Port:          <none>
    Host Port:     <none>
    Args:
      sleep
      3600
    State:          Running
      Started:      Fri, 09 May 2025 13:58:36 +0800
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gnjd4 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       True 
  ContainersReady             True 
  PodScheduled                True 
Volumes:
  kube-api-access-gnjd4:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  56s   default-scheduler  Successfully assigned default/busybox to node
  Normal  Pulled     56s   kubelet            Container image "busybox:1.28" already present on machine
  Normal  Created    56s   kubelet            Created container busybox
  Normal  Started    56s   kubelet            Started container busybox

4. Exec into a pod

kubectl exec -it $podname -n $ns_name -- /bin/bash

(busybox ships no bash, which is why the example below uses /bin/sh instead)

[root@master ~]# kubectl exec -it busybox -n default -- /bin/sh
/ # ls
bin   dev   etc   home  proc  root  sys   tmp   usr   var
/ # ping 192.168.110.150
PING 192.168.110.150 (192.168.110.150): 56 data bytes
64 bytes from 192.168.110.150: seq=0 ttl=64 time=0.048 ms
64 bytes from 192.168.110.150: seq=1 ttl=64 time=0.093 ms
64 bytes from 192.168.110.150: seq=2 ttl=64 time=0.067 ms
^C
--- 192.168.110.150 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.048/0.069/0.093 ms

5. Delete a pod

kubectl delete pod $podname

[root@master ~]# kubectl delete pod busybox
pod "busybox" deleted