[kubernetes] Building a cluster with v1.28.5 + containerd + helm

2023-12-21 21:47:32
0 Environment preparation
  • Nodes: 3 CentOS 7 virtual machines
  • Hardware: 2 GB of RAM or more, 2 CPUs or more, at least 30 GB of disk
  • Network: all nodes can reach each other, and each node has outbound internet access
1 Cluster plan
  • k8s-node1: 10.0.0.32
  • k8s-node2: 10.0.3.231
  • k8s-node3: 10.0.1.149
2 Set the hostnames
hostnamectl set-hostname k8s-node1  
hostnamectl set-hostname k8s-node2
hostnamectl set-hostname k8s-node3
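Each of the three commands above is run on its own node. As a quick check (a sketch that assumes key-based root SSH between the nodes), the hostnames can be verified in one pass:

# verify locally
hostnamectl status | grep "Static hostname"
# or check all three nodes over SSH (assumes root key-based access is already set up)
for h in 10.0.0.32 10.0.3.231 10.0.1.149; do ssh root@$h hostname; done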
3 Sync the hosts file

If DNS cannot resolve the hostnames, add the hostname-to-IP mappings to /etc/hosts on every machine:

cat >> /etc/hosts <<EOF
10.0.0.32 k8s-node1
10.0.3.231 k8s-node2
10.0.1.149 k8s-node3
EOF
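To confirm that name resolution works on every node, each entry can be pinged once (verification only):

for h in k8s-node1 k8s-node2 k8s-node3; do ping -c 1 $h; done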
4 Disable the firewall
systemctl stop firewalld && systemctl disable firewalld
5 Disable SELinux

Note: do not run this on ARM architectures; doing so can leave the node unable to obtain an IP address!

setenforce 0 && sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
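Afterwards, getenforce should report Permissive for the current boot, and the config file should show disabled for subsequent boots:

getenforce                           # expected: Permissive
grep ^SELINUX= /etc/selinux/config   # expected: SELINUX=disabled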
6 Disable the swap partition
swapoff -a && sed -ri 's/.*swap.*/#&/' /etc/fstab
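A quick check that swap is really off and stays off after a reboot:

swapon -s              # should print nothing (no active swap)
free -h                # the Swap line should show 0B
grep swap /etc/fstab   # the swap entry should now be commented out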
7 Sync the time
yum install ntpdate -y
ntpdate time.windows.com

7.1 Load the br_netfilter module and set kernel parameters
modprobe br_netfilter
echo "modprobe br_netfilter" >> /etc/profile
tee /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
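The file under /etc/sysctl.d is not applied until the kernel parameters are reloaded; after that the module and both settings can be checked:

# apply all sysctl settings, including /etc/sysctl.d/k8s.conf
sysctl --system
# confirm the module is loaded and both bridge parameters are 1
lsmod | grep br_netfilter
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables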
8 Install containerd
wget https://github.com/containerd/containerd/releases/download/v1.7.11/cri-containerd-1.7.11-linux-amd64.tar.gz
tar xf cri-containerd-1.7.11-linux-amd64.tar.gz -C /
mkdir /etc/containerd
containerd config default > /etc/containerd/config.toml
vim /etc/containerd/config.toml # edit the config file
  sandbox_image = "registry.aliyuncs.com/k8sxio/pause:3.9"
# enable and start on boot
systemctl enable --now containerd
# verify the version
containerd --version
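The cri-containerd bundle also includes the crictl client. A minimal /etc/crictl.yaml (a sketch; the timeout value is arbitrary) points it at containerd's CRI socket so the runtime can be inspected later:

cat > /etc/crictl.yaml << EOF
runtime-endpoint: unix:///var/run/containerd/containerd.sock
image-endpoint: unix:///var/run/containerd/containerd.sock
timeout: 10
EOF
crictl info | head   # should return CRI status information from containerd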

8.1 Install libseccomp
wget https://github.com/opencontainers/runc/releases/download/v1.1.5/libseccomp-2.5.4.tar.gz
tar xf libseccomp-2.5.4.tar.gz
cd libseccomp-2.5.4/
yum install gperf -y
./configure
make && make install
find / -name "libseccomp.so"
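With the default ./configure prefix the library is installed under /usr/local/lib. A small verification sketch (refreshing the linker cache is harmless and makes the new library visible if /usr/local/lib is in the linker search path):

ldconfig                              # refresh the shared-library cache
ls -l /usr/local/lib/libseccomp.so*   # the freshly built library
ldconfig -p | grep libseccomp         # libseccomp.so.2 should be resolvable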
8.2 Install runc
wget https://github.com/opencontainers/runc/releases/download/v1.1.9/runc.amd64
chmod +x runc.amd64
# find where the runc shipped with containerd was installed, then replace it
which runc
# replace the runc that was installed along with containerd
mv runc.amd64 /usr/local/sbin/runc
# run runc; if it prints its command help, it is working
runc
  • If running runc reports "runc: error while loading shared libraries: libseccomp.so.2: cannot open shared object file: No such file or directory", runc cannot find libseccomp; check that libseccomp is installed. With the installation above it is found by default.
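runc --version is another quick check that the replacement binary is the one in use; builds with seccomp support also report the libseccomp version:

which runc        # should point to /usr/local/sbin/runc
runc --version    # the output should include a libseccomp line, e.g. libseccomp: 2.5.x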
8.3 Install Docker
  • Configure a Docker registry mirror
mkdir /etc/docker/
cat <<EOF > /etc/docker/daemon.json
{
  "registry-mirrors": ["https://q3rmdln3.mirror.aliyuncs.com"],
  "insecure-registries":["http://192.168.100.20:5000"]
}
EOF
  • Install Docker
yum -y install yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum makecache fast

yum list docker-ce --showduplicates | sort -r
yum install docker-ce -y
systemctl enable docker && systemctl start docker
docker --version
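docker info can confirm that the mirror and insecure registry from daemon.json were picked up (verification only; the grep patterns are just for readability):

docker info | grep -A 2 "Registry Mirrors"
docker info | grep -A 2 "Insecure Registries"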
9 Add yum repositories
  • View the current repos
$ yum repolist
  • Back up the local repos
mkdir /etc/yum.repos.d/bak
mv /etc/yum.repos.d/CentOS-* /etc/yum.repos.d/bak/
curl -o /etc/yum.repos.d/Centos-7.repo http://mirrors.aliyun.com/repo/Centos-7.repo
  • Fetch the Aliyun yum repo config
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
  • Configure the Kubernetes repo
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[k8s]
name=k8s
enabled=1
gpgcheck=0
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
EOF
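Before installing, the repo can be checked along with the 1.28.x packages it offers (a sketch; the patch versions listed depend on the mirror):

yum repolist | grep k8s
yum list kubelet --showduplicates | grep 1.28 | tail -n 5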
  • Set the sandbox_image to the Aliyun google_containers mirror (all nodes)
# re-export the default config (config.toml does not exist by default); note this overwrites the manual edit made in step 8
containerd config default > /etc/containerd/config.toml
grep sandbox_image  /etc/containerd/config.toml
# containerd 1.7 defaults to registry.k8s.io/pause; older releases used k8s.gcr.io, so cover both
sudo sed -i -e "s#registry.k8s.io/pause#registry.aliyuncs.com/google_containers/pause#g" -e "s#k8s.gcr.io/pause#registry.aliyuncs.com/google_containers/pause#g" /etc/containerd/config.toml
grep sandbox_image  /etc/containerd/config.toml
  • Configure the containerd cgroup driver to systemd (all nodes)
    • Since v1.24.0, Kubernetes no longer uses dockershim and instead uses containerd as the container runtime endpoint, so containerd is required. It was installed automatically when Docker was installed above; Docker acts only as a client here, and containerd remains the container engine.
sed -i 's#SystemdCgroup = false#SystemdCgroup = true#g' /etc/containerd/config.toml
# after applying all the changes, restart containerd
systemctl restart containerd
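After the restart, confirm that both changes are present in the active config and that containerd came back up:

grep -E "SystemdCgroup|sandbox_image" /etc/containerd/config.toml
systemctl is-active containerd   # expected: active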
  • Refresh the yum cache
yum clean all # clear all yum caches
yum makecache # rebuild the yum cache
11 Install the Kubernetes components
# install the latest version
$ yum install -y kubelet kubeadm kubectl

# or install a specific version
# yum install -y kubelet-1.26.0 kubectl-1.26.0 kubeadm-1.26.0

# enable and start kubelet
$ sudo systemctl enable kubelet && sudo systemctl start kubelet && sudo systemctl status kubelet
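The installed versions can be checked right away. Until kubeadm init runs, kubelet will keep restarting because it has no configuration yet; that is expected:

kubeadm version -o short
kubelet --version
kubectl version --client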
12 Initialize the cluster
  • Note: cluster initialization is performed only on the master node!
kubeadm init \
  --apiserver-advertise-address=10.0.0.32 \
  --image-repository registry.aliyuncs.com/google_containers \
  --service-cidr=10.96.0.0/16 \
  --pod-network-cidr=10.244.0.0/16 \
  --v=5
12.1 Cluster config file
rm -rf $HOME/.kube
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
export KUBECONFIG=/root/.kube/config
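With the kubeconfig in place, kubectl on the master node should be able to reach the API server (the node will show NotReady until the network plugin is installed in step 13):

kubectl cluster-info
kubectl get nodes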
12.2 Join the worker nodes to the cluster
kubeadm join 10.0.0.32:6443 --token sorvas.aogvsfw5ok3n7agc \
        --discovery-token-ca-cert-hash sha256:fa4449876b266e9767a47deee6ba1eec0dc3532f62a1c9dffcd543639cbf696c \
        --ignore-preflight-errors=all \
        --cri-socket unix:///var/run/containerd/containerd.sock
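By default the bootstrap token expires after 24 hours. If it has expired, a fresh join command can be printed on the master node:

kubeadm token create --print-join-command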
13 Configure the cluster network
  • Method 0 (the helm CLI must already be installed; see the install sketch after the commands below)
# Needs manual creation of namespace to avoid helm error
kubectl create ns kube-flannel
kubectl label --overwrite ns kube-flannel pod-security.kubernetes.io/enforce=privileged

helm repo add flannel https://flannel-io.github.io/flannel/
helm install flannel --set podCidr="10.244.0.0/16" --namespace kube-flannel flannel/flannel
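The helm CLI itself is not installed by any of the earlier steps. A minimal install sketch from the official release tarball (the version here, v3.13.3, is only an example):

wget https://get.helm.sh/helm-v3.13.3-linux-amd64.tar.gz
tar xf helm-v3.13.3-linux-amd64.tar.gz
mv linux-amd64/helm /usr/local/bin/helm
helm version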

The final result:

[root@k8s-node1 k8s]# kubectl get pod -A
NAMESPACE      NAME                                READY   STATUS    RESTARTS   AGE
kube-flannel   kube-flannel-ds-ffvvm               1/1     Running   0          18m
kube-flannel   kube-flannel-ds-g4n6k               1/1     Running   0          18m
kube-flannel   kube-flannel-ds-l2f4b               1/1     Running   0          18m
kube-system    coredns-66f779496c-lrzwk            1/1     Running   0          20m
kube-system    coredns-66f779496c-mtdx5            1/1     Running   0          20m
kube-system    etcd-k8s-node1                      1/1     Running   8          20m
kube-system    kube-apiserver-k8s-node1            1/1     Running   5          20m
kube-system    kube-controller-manager-k8s-node1   1/1     Running   2          20m
kube-system    kube-proxy-m7z2m                    1/1     Running   0          19m
kube-system    kube-proxy-mv8p8                    1/1     Running   0          19m
kube-system    kube-proxy-zvfdg                    1/1     Running   0          20m
kube-system    kube-scheduler-k8s-node1            1/1     Running   6          20m
[root@k8s-node1 k8s]# kubectl get node -owide
NAME        STATUS   ROLES           AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION                 CONTAINER-RUNTIME
k8s-node1   Ready    control-plane   21m   v1.28.2   10.0.0.32     <none>        CentOS Linux 7 (Core)   3.10.0-1160.102.1.el7.x86_64   containerd://1.7.11
k8s-node2   Ready    <none>          20m   v1.28.2   10.0.3.231    <none>        CentOS Linux 7 (Core)   3.10.0-1160.102.1.el7.x86_64   containerd://1.7.11
k8s-node3   Ready    <none>          20m   v1.28.2   10.0.1.149    <none>        CentOS Linux 7 (Core)   3.10.0-1160.102.1.el7.x86_64   containerd://1.7.11
[root@k8s-node1 k8s]# 

Article source: https://blog.csdn.net/weixin_38805083/article/details/135139767