# Tutorial: Setting Up a K8s Lab
Target setup: CentOS 7, Kubernetes v1.26.3, Docker v23.0.1.

Host details:

| VM | Hostname | OS | Docker | K8s |
| --- | --- | --- | --- | --- |
| 10.0.21.215 | master | CentOS 7 (64-bit) | v23.0.1 | v1.26.3 |
| 10.0.21.216 | node1 | CentOS 7 (64-bit) | v23.0.1 | v1.26.3 |
| 10.0.21.217 | node2 | CentOS 7 (64-bit) | v23.0.1 | v1.26.3 |
| 10.0.21.218 | node3 | CentOS 7 (64-bit) | v23.0.1 | v1.26.3 |

Hostnames are set with `hostnamectl` (see the hostname step below).
- (Run on every node) The OS installation itself is not covered here. One thing to watch for after installing: <u>the machine cannot reach the internet</u>. By default the NIC is not brought up at boot, so you need to set `ONBOOT=yes`; in addition, a static IP is the better choice for k8s (the cluster nodes need to share node information, so their addresses should not change).
- First, under `/etc/sysconfig/network-scripts/` find the NIC config file <u>`ifcfg-<your NIC name>`</u> (mine is `ifcfg-ens192`).
- Then open it with `vi ifcfg-ens192` and edit the NIC config. This setup uses a static IP; the relevant options are (IPADDR shown is the master's; use each node's own address):

```bash
ONBOOT=yes
IPADDR=10.0.21.215
NETMASK=255.255.255.0
GATEWAY=10.0.21.254
DNS1=8.8.8.8
```
- Next, edit `/etc/sysconfig/network`:

```bash
NETWORKING=yes
GATEWAY=10.0.21.254
```
- Finally, restart the network service:

```bash
service network restart
```
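A quick sanity check after the restart (a minimal sketch; `ens192` and the DNS address are this lab's values, adjust to yours):

```bash
ip addr show ens192   # the static IP should appear under this interface
ping -c 3 8.8.8.8     # confirm outbound connectivity
```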
- (Run on every node) Now that the machine can reach the network, install the usual CentOS 7 tools (`gcc glibc gcc-c++ make net-tools screen vim lrzsz tree dos2unix lsof tcpdump bash-completion ntp`):

```bash
# Install wget
[root@localhost ~]# yum -y install wget
# Back up CentOS-Base.repo
[root@localhost ~]# mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup
# Download a new CentOS-Base.repo into /etc/yum.repos.d/
[root@localhost ~]# wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
# Rebuild the yum cache
[root@localhost ~]# yum makecache
[root@localhost ~]# yum update
# Install the required packages
[root@localhost ~]# yum -y install gcc glibc gcc-c++ make net-tools screen vim lrzsz tree dos2unix lsof tcpdump bash-completion ntp
```
- (Run per node, as shown below) Set the hostnames of the four machines (see the host table above) to `master`, `node1`, `node2`, and `node3`.
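For example (one line per machine; `hostnamectl set-hostname` takes effect immediately and persists across reboots):

```bash
hostnamectl set-hostname master   # on 10.0.21.215
hostnamectl set-hostname node1    # on 10.0.21.216
hostnamectl set-hostname node2    # on 10.0.21.217
hostnamectl set-hostname node3    # on 10.0.21.218
```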
- (Run on every node) Set up the `hosts` file (for background on what the hosts file does, see this CSDN post: https://blog.csdn.net/hongkaihua1987/article/details/111214343). On every machine:

```bash
cat >> /etc/hosts <<EOF
10.0.21.215 master
10.0.21.216 node1
10.0.21.217 node2
10.0.21.218 node3
EOF
```
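A quick way to confirm the entries resolve (hostnames as defined above):

```bash
# one ping per host; each should answer from its 10.0.21.x address
for h in master node1 node2 node3; do ping -c 1 -W 1 "$h"; done
```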
- (Run on every node) Disable the firewall and SELinux. <u>Permanently disabling is recommended (otherwise you get errors after a reboot)</u>. From here on, unless noted otherwise, run every command on all nodes:

```bash
systemctl stop firewalld
systemctl disable firewalld
# Disable selinux temporarily
setenforce 0
# Permanently disable selinux: also run the lines below
# to update the /etc/sysconfig/selinux settings
sed -i 's/SELINUX=permissive/SELINUX=disabled/' /etc/sysconfig/selinux
sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
```
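To confirm the SELinux state afterwards:

```bash
getenforce   # prints Permissive after setenforce 0, Disabled after a reboot
```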
- (Run on every node) Disable the swap partition:

```bash
swapoff -a            # temporary (until reboot)
vi /etc/fstab         # permanent: comment out the swap entry
...
#/dev/mapper/centos-swap swap swap defaults 0 0
free -h | grep Swap   # verify (all zeros means swap is off)
Swap: 0B 0B 0B
```
- (Run on every node) Tune the kernel parameters on every node:

```bash
cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
# do not use swap space
vm.swappiness = 0
EOF
sysctl --system
```
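Note: the `net.bridge.bridge-nf-call-*` keys only exist while the `br_netfilter` kernel module is loaded, so if `sysctl --system` reports them missing, load the module first (an extra step not shown in the original commands):

```bash
modprobe br_netfilter
# load it automatically on every boot
cat > /etc/modules-load.d/k8s.conf <<EOF
br_netfilter
EOF
sysctl --system
```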
- (Run on every node) Next, install Docker.
- Install the yum repository management tool:

```bash
yum makecache   # refresh the yum package index
yum -y install yum-utils
```
- Add Aliyun's docker-ce repository:

```bash
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# or: wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
# to remove it later, cd /etc/yum.repos.d/ and delete the corresponding docker repo file
```
- Install the required packages: yum-utils provides the yum-config-manager command, and the other two are dependencies of the devicemapper storage driver:

```bash
yum install -y yum-utils device-mapper-persistent-data lvm2
# This step may fail; if it doesn't, ignore the following.
# failure: repodata/repomd.xml from mirrors.aliyun.com_docker-ce_linux_centos_docker-ce: [Errno 256] No more mirrors to try.
# If you see that, run: yum-config-manager --save --setopt=mirrors.aliyun.com_docker-ce_linux_centos_docker-ce.skip_if_unavailable=true
```
- Add the official yum source as well:

```bash
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
```
- List every Docker version available in the repos, so you can pick a specific one to install:

```bash
yum list docker-ce --showduplicates | sort -r
```
- Install Docker:

```bash
yum install docker-ce docker-ce-cli containerd.io
# or pin a version: yum install -y docker-ce-3:20.10.9-3.el7
```
- Start Docker and enable it at boot:

```bash
systemctl start docker
systemctl enable docker
```
- Verify that Docker installed successfully:

```bash
docker version
docker info
```
- Configure registry mirrors:

```bash
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors" : [
    "https://jkfdsf2u.mirror.aliyuncs.com",
    "https://registry.docker-cn.com"
  ],
  "insecure-registries" : [
    "docker-registry.zjq.com"
  ],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "10"
  },
  "data-root": "/data/docker"
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
```
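To confirm the daemon picked up the new configuration:

```bash
docker info | grep -A 3 "Registry Mirrors"   # should list the mirrors from daemon.json
```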
- (Run on every node; only the init step later is master-only) Install `kubeadm`.
- Add the Aliyun `K8s` yum source:

```bash
cat >> /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes Repo
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
enabled=1
EOF
```
- Install the kubeadm, kubelet, and kubectl components:

```bash
yum list | grep kubeadm   # check the available versions
yum -y install kubelet-1.26.3-0 kubeadm-1.26.3-0 kubectl-1.26.3-0
systemctl enable kubelet && systemctl start kubelet
```
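A quick check that the pinned versions landed:

```bash
kubeadm version -o short   # expect v1.26.3
kubelet --version
kubectl version --client
```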
- (master node only) Initialize the Kubernetes Master:

```bash
kubeadm init --kubernetes-version=v1.26.3 \
  --apiserver-advertise-address=10.0.21.215 \
  --image-repository registry.aliyuncs.com/google_containers \
  --service-cidr=10.96.0.0/16 --pod-network-cidr=10.244.0.0/16 \
  --cri-socket unix:///var/run/cri-dockerd.sock \
  --ignore-preflight-errors=all
# You may hit "[ERROR CRI]: container runtime is not running"; see https://blog.csdn.net/qq_43580215/article/details/125153959
# For other errors, see the troubleshooting notes in section 11.
```

- `image-repository` sets the image source. kubeadm pulls its images from k8s.gcr.io by default, which is unreachable from mainland China, so point it at the Aliyun mirror instead.
- `apiserver-advertise-address` is the master's IP address, in this example 10.0.21.215.
- `pod-network-cidr` is the Pod network range. This example sets it to 10.244.0.0/16; change it as you like.
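After a successful init, kubeadm's own output also shows the standard way to point kubectl at the new cluster on the master:

```bash
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```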
- This produces the join command (don't run it yet; the worker nodes still lack `admin.conf`):

```bash
kubeadm join 10.0.21.215:6443 --token wy4b2n.k0mkkevck8devjm8 \
  --discovery-token-ca-cert-hash sha256:50ffa9744ca72782901e83341f6458164e1b7bc9032015176f5014a69207add4
```

Note: on the worker nodes you must append `--cri-socket unix:///var/run/cri-dockerd.sock`, because kubelet and cri-dockerd are not integrated, so the socket has to be given explicitly.
- The workers' kubectl also needs configuring: kubectl runs as kubernetes-admin and therefore needs the admin.conf file, which `kubeadm init` created under /etc/kubernetes on the master. The worker nodes don't have it, so copy admin.conf into the same directory on each of them.

```bash
# Copy the file (run on the master, using scp)
scp /etc/kubernetes/admin.conf root@10.0.21.216:/etc/kubernetes/
# then enter the password for 10.0.21.216
# Warning: Permanently added '10.0.21.216' (ECDSA) to the list of known hosts.
# root@10.0.21.216's password: *******
```

Do the same for 10.0.21.217 and 10.0.21.218.

```bash
# Once every node has admin.conf, run the following on each to set the environment variable:
echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
source ~/.bash_profile
```
- With all of the above in place, the workers can join the master:

```bash
# run on every worker node
kubeadm join 10.0.21.215:6443 --token wy4b2n.k0mkkevck8devjm8 \
  --discovery-token-ca-cert-hash sha256:50ffa9744ca72782901e83341f6458164e1b7bc9032015176f5014a69207add4 \
  --cri-socket unix:///var/run/cri-dockerd.sock
```
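Join tokens expire after 24 hours by default; if yours has lapsed, print a fresh join command on the master (and remember to append the `--cri-socket` flag on the workers):

```bash
kubeadm token create --print-join-command
```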
Before joining, `kubectl get nodes` lists only the master. After joining, the new nodes appear with status NotReady: the cluster network still needs to be configured.

# Section 11 is K8s troubleshooting; if you hit no errors, skip to section 12
- Troubleshooting:
- Error:

```bash
error execution phase preflight: [preflight] Some fatal errors occurred: [ERROR FileAvailabl
```

Fix: `kubeadm reset`
- Error:

```bash
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
```

Fix: install cri-dockerd (on all nodes):

```bash
# Download the latest rpm from https://github.com/Mirantis/cri-dockerd/releases and upload it to the server
rpm -ivh cri-dockerd-0.3.1-3.el7.x86_64.rpm
# Edit the ExecStart line in /usr/lib/systemd/system/cri-docker.service:
vim /usr/lib/systemd/system/cri-docker.service
ExecStart=/usr/bin/cri-dockerd --network-plugin=cni --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.7
systemctl daemon-reload
systemctl enable --now cri-docker
```
Then run the initialization again:

```bash
kubeadm init \
  --apiserver-advertise-address=10.0.21.215 \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.26.3 \
  --service-cidr=10.96.0.0/12 \
  --pod-network-cidr=10.244.0.0/16 \
  --cri-socket unix:///var/run/cri-dockerd.sock \
  --ignore-preflight-errors=all
```
# At this point the K8s installation itself is complete
# Next, the network and other initial configuration:
- The manifest normally comes from wget against https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml, but that address is unreachable from mainland China, so its content is reproduced in the appendix at the end of this article (to keep the main text short). The yml also referenced a registry that cannot be reached domestically (quay.io); it has been changed to a mirror that can (quay-mirror.qiniu.com). Create a new kube-flannel.yml file, paste in that content, then run:

```bash
kubectl apply -f kube-flannel.yml
```
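To watch the plugin come up and the nodes turn Ready:

```bash
kubectl get pods -n kube-flannel   # the flannel DaemonSet pods should reach Running
kubectl get nodes                  # every node should eventually report Ready
```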
# A small experiment to get a feel for k8s:

- A simple nginx deployment is enough.
- First, download the example yaml file:

```bash
wget https://raw.githubusercontent.com/kubernetes/website/master/content/en/examples/application/nginx-app.yaml
```
- If you `vim` the yaml file, you can see it contains a Service and a Deployment with three replicas.
- Then apply it:

```bash
kubectl apply -f nginx-app.yaml
```

- You can see that `kubectl` created a `service` and a `deployment`; the `deployment` has `replicas: 3`, so you end up with 3 pods.
- The service is then exposed on `10.0.21.215:31497`, `10.0.21.216:31497`, `10.0.21.217:31497`, and `10.0.21.218:31497`; open any of them in a browser and you should see the nginx welcome page.
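The NodePort assigned to the Service can be read back at any time (31497 was this run's value; yours will differ):

```bash
kubectl get svc                  # the nginx Service's PORT(S) column shows 80:<NodePort>/TCP
curl http://10.0.21.215:31497    # substitute your own NodePort
```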
# Appendix

- kube-flannel.yml
```yaml
---
kind: Namespace
apiVersion: v1
metadata:
  name: kube-flannel
  labels:
    pod-security.kubernetes.io/enforce: privileged
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
- apiGroups:
  - "networking.k8s.io"
  resources:
  - clustercidrs
  verbs:
  - list
  - watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-flannel
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-flannel
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-flannel
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  # Network below must match kubeadm's --pod-network-cidr (10.244.0.0/16 in this guide)
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-flannel
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni-plugin
        image: docker.io/flannel/flannel-cni-plugin:v1.1.2
        #image: docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.2
        command:
        - cp
        args:
        - -f
        - /flannel
        - /opt/cni/bin/flannel
        volumeMounts:
        - name: cni-plugin
          mountPath: /opt/cni/bin
      - name: install-cni
        image: docker.io/flannel/flannel:v0.21.4
        #image: docker.io/rancher/mirrored-flannelcni-flannel:v0.21.4
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: docker.io/flannel/flannel:v0.21.4
        #image: docker.io/rancher/mirrored-flannelcni-flannel:v0.21.4
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: EVENT_QUEUE_DEPTH
          value: "5000"
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
        - name: xtables-lock
          mountPath: /run/xtables.lock
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni-plugin
        hostPath:
          path: /opt/cni/bin
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
      - name: xtables-lock
        hostPath:
          path: /run/xtables.lock
          type: FileOrCreate
```