Contents
I. Preparation
II. Configuration
1. Set the hostnames
2. Edit the hosts file
3. Disable the firewall and SELinux
4. Disable swap
5. Configure kernel network parameters
6. Load required kernel modules
7. Passwordless SSH login
8. Install Kubernetes and Docker
9. Check the image versions the cluster needs
10. Initialize the master node
11. Configure the nodes
12. Pull and deploy an Nginx image
I. Preparation
The environment is based on Red Hat 8.5.
1. Prepare three virtual machines with the following IP addresses:
master:192.168.10.129
node1:192.168.10.134
node2:192.168.10.136
You can also configure a single machine, clone it into the other two, and change their hostnames.
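As an optional sanity check (not part of the original steps), you can confirm that the three machines can reach each other before continuing, using the IPs listed above:

```shell
# Optional sanity check: ping each cluster IP once.
for ip in 192.168.10.129 192.168.10.134 192.168.10.136; do
    ping -c 1 -W 1 "$ip" && echo "$ip is reachable"
done
```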
II. Configuration

1. Set the hostnames
# On the master VM
[root@mgr1 ~]# hostnamectl set-hostname k8s-master
# On the node1 VM
[root@mgr2 ~]# hostnamectl set-hostname k8s-node1
# On the node2 VM
[root@mgr3 ~]# hostnamectl set-hostname k8s-node2

2. Edit the hosts file
[root@k8s-master ~]# vim /etc/hosts
192.168.10.129 k8s-master
192.168.10.134 k8s-node1
192.168.10.136 k8s-node2
# Add the same entries on node1 and node2

3. Disable the firewall and SELinux
# Run on all three nodes
[root@k8s-master ~]# setenforce 0
[root@k8s-node1 ~]# systemctl stop firewalld.service
[root@k8s-node1 ~]# systemctl disable firewalld.service

4. Disable swap
# Run on all three nodes: comment out the line containing swap
[root@k8s-master ~]# vim /etc/fstab
#/dev/mapper/rhel-swap none swap defaults 0 0

5. Configure kernel network parameters
# Run on all three nodes
[root@k8s-master ~]# vim /etc/sysctl.d/kubernetes.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
[root@k8s-master ~]# sysctl --system    # a bare "sysctl -p" only reads /etc/sysctl.conf, not /etc/sysctl.d/

6. Load required kernel modules
# Run on all three nodes
[root@k8s-master ~]# modprobe br_netfilter    # load the br_netfilter module
[root@k8s-master ~]# lsmod | grep br_netfilter

7. Passwordless SSH login
# Run on all three nodes
[root@k8s-master ~]# ssh-keygen
[root@k8s-master ~]# ssh-copy-id root@192.168.10.129
[root@k8s-master ~]# ssh-copy-id root@192.168.10.134
[root@k8s-master ~]# ssh-copy-id root@192.168.10.136

8. Install Kubernetes and Docker
# Run on all three nodes
# Configure the yum repositories
[root@k8s-master ~]# cd /etc/yum.repos.d/
[root@k8s-master yum.repos.d]# cat k8s.repo
[k8s]
name=k8s
baseurl=http://mirrors.ustc.edu.cn/kubernetes/yum/repos/kubernetes-el7-x86_64/
gpgcheck=0
[root@k8s-master yum.repos.d]# mount /dev/sr0 /mnt/
[root@k8s-master yum.repos.d]# wget -O /etc/yum.repos.d/docker-ce.repo https://download.docker.com/linux/centos/docker-ce.repo
[root@k8s-master yum.repos.d]# sed -i 's+download.docker.com+mirrors.tuna.tsinghua.edu.cn/docker-ce+' /etc/yum.repos.d/docker-ce.repo
# podman conflicts with docker-ce on RHEL 8, so remove it first.
# Pin kubelet/kubeadm/kubectl to 1.21.3 so they match the images pulled below.
[root@k8s-master yum.repos.d]# dnf remove -y podman
[root@k8s-master yum.repos.d]# dnf install -y iproute-tc yum-utils device-mapper-persistent-data lvm2 kubelet-1.21.3 kubeadm-1.21.3 kubectl-1.21.3 docker-ce
# Enable the services
[root@k8s-master ~]# systemctl enable kubelet
[root@k8s-master ~]# systemctl enable --now docker

Next, edit the Docker configuration file.
[root@k8s-master ~]# systemctl start docker
[root@k8s-master ~]# vim /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "registry-mirrors": ["https://8zs3633v.mirror.aliyuncs.com"]
}
[root@k8s-master ~]# systemctl restart docker
[root@k8s-master ~]# systemctl enable docker.service

9. Check the image versions the cluster needs
Note: run on all three nodes.
[root@k8s-master yum.repos.d]# kubeadm config images list
Then pull these images:
docker pull kittod/kube-apiserver:v1.21.3
docker pull kittod/kube-controller-manager:v1.21.3
docker pull kittod/kube-scheduler:v1.21.3
docker pull kittod/kube-proxy:v1.21.3
docker pull kittod/pause:3.4.1
docker pull kittod/etcd:3.4.13-0
docker pull kittod/coredns:v1.8.0
docker pull kittod/flannel:v0.14.0
After the pulls finish, retag the images (note the special coredns repository path), otherwise kubeadm will not find them later:
docker tag kittod/kube-apiserver:v1.21.3 k8s.gcr.io/kube-apiserver:v1.21.3
docker tag kittod/kube-controller-manager:v1.21.3 k8s.gcr.io/kube-controller-manager:v1.21.3
docker tag kittod/kube-scheduler:v1.21.3 k8s.gcr.io/kube-scheduler:v1.21.3
docker tag kittod/kube-proxy:v1.21.3 k8s.gcr.io/kube-proxy:v1.21.3
docker tag kittod/pause:3.4.1 k8s.gcr.io/pause:3.4.1
docker tag kittod/etcd:3.4.13-0 k8s.gcr.io/etcd:3.4.13-0
docker tag kittod/coredns:v1.8.0 k8s.gcr.io/coredns/coredns:v1.8.0
docker tag kittod/flannel:v0.14.0 quay.io/coreos/flannel:v0.14.0

10. Initialize the master node
kubeadm init \
--kubernetes-version=v1.21.3 \
--pod-network-cidr=10.244.0.0/16 \
--service-cidr=10.96.0.0/12 \
--apiserver-advertise-address=192.168.10.129
If you see this error:
[ERROR NumCPU]: the number of available CPUs 1 is less than the required 2
add --ignore-preflight-errors=all to the kubeadm init command (or give each VM at least 2 CPUs).
If it still fails, clean up and try again:
systemctl stop kubelet
rm -rf /etc/kubernetes/*
systemctl stop docker    # if stopping fails, reboot
docker container prune
docker ps -a             # no remaining containers means the cleanup is complete
rm -rf /var/lib/kubelet/
rm -rf /var/lib/etcd
On success, the output ends with the following:
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster. Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at: https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.10.129:6443 --token xwvae1.05gyglinbz62ui3i \
    --discovery-token-ca-cert-hash sha256:f2701ff3276b5c260900314f3871ba5593107809b62d741c05f452caad62ffa8
Follow those instructions:
[root@k8s-master ~]# mkdir -p $HOME/.kube
[root@k8s-master ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
[root@k8s-master ~]# export KUBECONFIG=/etc/kubernetes/admin.conf

11. Configure the nodes
# Run on both node machines
[root@k8s-node1 ~]# kubeadm join 192.168.10.129:6443 --token xwvae1.05gyglinbz62ui3i --discovery-token-ca-cert-hash sha256:f2701ff3276b5c260900314f3871ba5593107809b62d741c05f452caad62ffa8
If the join fails:
1. kubeadm reset -y
2. rm -rf /etc/kubernetes/kubelet.conf
   rm -rf /etc/kubernetes/pki/ca.crt
   systemctl restart kubelet
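One pitfall not covered above: the bootstrap token in the join command expires (after 24 hours by default). If a node joins later and the token is rejected, a fresh join command can be printed on the master:

```shell
# Creates a new bootstrap token and prints the full "kubeadm join" line,
# including the current CA certificate hash.
kubeadm token create --print-join-command
```

Run the printed command on the node instead of the original one.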
1) Check the node status on the master
[root@k8s-master ~]# kubectl get nodes
NAME         STATUS     ROLES                  AGE     VERSION
k8s-master   NotReady   control-plane,master   3m48s   v1.21.3
k8s-node1    NotReady   <none>                 3m9s    v1.21.3
k8s-node2    NotReady   <none>                 3m6s    v1.21.3
All nodes are still NotReady at this point.
2) Check the pod status of the cluster
[root@k8s-master ~]# kubectl get pod -n kube-system
NAME                                 READY   STATUS    RESTARTS   AGE
coredns-558bd4d5db-46dzt             0/1     Pending   0          4m54s
coredns-558bd4d5db-vpqgl             0/1     Pending   0          4m54s
etcd-k8s-master                      1/1     Running   0          5m7s
kube-apiserver-k8s-master            1/1     Running   0          5m7s
kube-controller-manager-k8s-master   1/1     Running   0          5m7s
kube-proxy-bjxgt                     1/1     Running   0          4m32s
kube-proxy-bmjnz                     1/1     Running   0          4m54s
kube-proxy-z6jzl                     1/1     Running   0          4m29s
kube-scheduler-k8s-master            1/1     Running   0          5m7s
3) Check the kubelet logs on the node
[root@k8s-master ~]# journalctl -f -u kubelet
-- Logs begin at Fri 2022-09-30 16:34:04 CST. --
Sep 30 17:57:25 k8s-master kubelet[23732]: E0930 17:57:25.097653 23732 kubelet.go:2211] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
Sep 30 17:57:28 k8s-master kubelet[23732]: I0930 17:57:28.035887 23732 cni.go:239] "Unable to update cni config" err="no networks found in /etc/cni/net.d"
Sep 30 17:57:30 k8s-master kubelet[23732]: E0930 17:57:30.104181 23732 kubelet.go:2211] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
The logs show an error:
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Fix: the runtime network is not ready because no CNI network plugin is installed yet, so install one.
# On the master
[root@k8s-master ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Check again:
[root@k8s-master ~]# kubectl get nodes
NAME         STATUS   ROLES                  AGE   VERSION
k8s-master   Ready    control-plane,master   45h   v1.21.3
k8s-node1    Ready    <none>                 45h   v1.21.3
k8s-node2    Ready    <none>                 45h   v1.21.3
The master and both nodes are now Ready.
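Optionally, you can also confirm that the flannel pods themselves are running. The namespace they land in varies between flannel manifest versions (kube-system in older manifests, kube-flannel in newer ones), so a sketch that searches all namespaces is safest:

```shell
# List flannel pods wherever they were deployed; one per node should be Running.
kubectl get pods -A -o wide | grep flannel
```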
12. Pull and deploy an Nginx image
Note: run only on the master.
docker pull nginx
# Retag it
docker tag nginx:latest kittod/nginx:1.21.5
Create the deployment:
kubectl create deployment nginx --image=kittod/nginx:1.21.5
Expose the port:
kubectl expose deployment nginx --port=80 --type=NodePort
Check the pod and the service:
[root@k8s-master ~]# kubectl get pods,service
NAME                         READY   STATUS    RESTARTS   AGE
pod/nginx-8675954f95-cvkvz   1/1     Running   0          2m20s

NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
service/kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP        46h
service/nginx        NodePort    10.98.220.104   <none>        80:30288/TCP   2m10s
Check the randomly assigned NodePort:
[root@k8s-master ~]# netstat -lntup | grep kube-proxy
tcp    0    0 127.0.0.1:10249    0.0.0.0:*    LISTEN    3834/kube-proxy
tcp    0    0 0.0.0.0:30288      0.0.0.0:*    LISTEN    3834/kube-proxy
tcp6   0    0 :::10256           :::*         LISTEN    3834/kube-proxy
Test the Nginx service:
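One way to test is with curl against any node's IP and the NodePort shown above (30288 in this run; the port is random, so use the value from kubectl get service):

```shell
# Request the page through the NodePort; any of the three node IPs works.
curl http://192.168.10.129:30288
# The Nginx welcome page in the response confirms the service is reachable.
```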
Done.