Build a Kubernetes cluster entirely by hand. This article walks through the whole process step by step, covering the environment, software versions, and detailed instructions.
Server Overview
- Kubernetes Version
- v1.22.15
- Node requirements
- Node count >= 3
- CPUs >= 2
- Memory >= 2G
- Set the time zone
- Some systems ship with the wrong time zone and need it changed
timedatectl set-timezone Asia/Shanghai
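An optional check that the change took effect:
timedatectl status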
- Environment
OS | IP Address | Role | CPU | Memory | Hostname |
---|---|---|---|---|---|
CentOS-7.9 | 192.168.200.11 | master | >=2 | >=2G | cluster1 |
CentOS-7.9 | 192.168.200.22 | master,worker | >=2 | >=2G | cluster2 |
CentOS-7.9 | 192.168.200.33 | worker | >=2 | >=2G | cluster3 |
- Use Vagrant to build the VM nodes
- Vagrant: latest
- VirtualBox: 7.0
- vagrant-vbguest: 0.21 (keeps host and guest directories in sync)
vagrant plugin install vagrant-vbguest --plugin-version 0.21
The Vagrantfile is as follows:
# -*- mode: ruby -*-
# vi: set ft=ruby :
nodes = [
  {
    :name => "cluster1",
    :eth1 => "192.168.200.11",
    :mem => "4096",
    :cpu => "2"
  },
  {
    :name => "cluster2",
    :eth1 => "192.168.200.22",
    :mem => "4096",
    :cpu => "2"
  },
  {
    :name => "cluster3",
    :eth1 => "192.168.200.33",
    :mem => "4096",
    :cpu => "2"
  },
]

Vagrant.configure("2") do |config|
  # Every Vagrant development environment requires a box.
  config.vm.box = "centos/7"
  nodes.each do |opts|
    config.vm.define opts[:name] do |node|
      node.vm.hostname = opts[:name]
      node.vm.provider "virtualbox" do |v|
        v.customize ["modifyvm", :id, "--memory", opts[:mem]]
        v.customize ["modifyvm", :id, "--cpus", opts[:cpu]]
      end
      #node.ssh.username = "root"
      #node.ssh.private_key_path = "/Users/jinpeng.d/.ssh/id_rsa"
      node.vm.synced_folder "../share", "/vagrant_data"
      node.vm.network :public_network, ip: opts[:eth1]
    end
  end
end
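With the Vagrantfile in place, the standard commands bring the three nodes up and get you a shell on one of them:
vagrant up
vagrant ssh cluster1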
System Settings (all nodes)
- All operations require root privileges
- Hostnames are resolved via /etc/hosts (configured in the IP-mapping step below)
- Install dependencies
yum update -y
yum install -y socat conntrack ipvsadm ipset jq sysstat curl iptables libseccomp yum-utils
- Disable the firewall, selinux, and swap; reset iptables
# 1. Disable selinux
setenforce 0
sed -i '/SELINUX/s/enforcing/disabled/' /etc/selinux/config
# 2. Disable the firewall
systemctl stop firewalld && systemctl disable firewalld
# 3. Reset iptables rules
iptables -F && iptables -X && iptables -F -t nat && iptables -X -t nat && iptables -P FORWARD ACCEPT
# 4. Disable swap
vi /etc/fstab
# Comment out the swap entry to disable it permanently
#/swapfile none swap defaults 0 0
# Disable it temporarily
swapoff -a
# Do both: the temporary change takes effect immediately without a reboot, and the permanent one keeps swap off after a reboot
# 5. Stop dnsmasq (otherwise domain names cannot be resolved)
service dnsmasq stop && systemctl disable dnsmasq
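If you prefer not to edit /etc/fstab interactively with vi, a sed one-liner such as the following comments out the swap entry (a sketch; it assumes the swap line is not already commented and contains the word "swap"):
sed -ri 's/^([^#].*\bswap\b.*)$/#\1/' /etc/fstab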
- Kubernetes kernel parameter settings
cat > /etc/sysctl.d/kubernetes.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_nonlocal_bind = 1
net.ipv4.ip_forward = 1
vm.swappiness = 0
vm.overcommit_memory = 1
EOF
# Apply the settings
sysctl -p /etc/sysctl.d/kubernetes.conf
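The two net.bridge.* keys above only exist once the br_netfilter kernel module is loaded, which is not guaranteed on a fresh CentOS 7 box, so load it and make it persistent across reboots (an extra step, not in the original list):
modprobe br_netfilter
echo "br_netfilter" > /etc/modules-load.d/br_netfilter.conf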
- Configure passwordless SSH login
On one of the nodes (or on a separate machine), generate an ssh key pair and put the public key on every k8s node
# Generate a key pair, if you do not already have one
ssh-keygen -t rsa
# Show the public key
cat ~/.ssh/id_rsa.pub
# Configure it on every node
echo "<pubkey content>" >> ~/.ssh/authorized_keys
- Configure IP mapping (every node)
# Append (rather than overwrite) so the default localhost entries are preserved
cat >> /etc/hosts <<EOF
192.168.200.11 cluster1
192.168.200.22 cluster2
192.168.200.33 cluster3
EOF
- Download the k8s component packages
export VERSION=v1.22.15
# Download the master components
wget https://storage.googleapis.com/kubernetes-release/release/${VERSION}/bin/linux/amd64/kube-apiserver
wget https://storage.googleapis.com/kubernetes-release/release/${VERSION}/bin/linux/amd64/kube-controller-manager
wget https://storage.googleapis.com/kubernetes-release/release/${VERSION}/bin/linux/amd64/kube-scheduler
wget https://storage.googleapis.com/kubernetes-release/release/${VERSION}/bin/linux/amd64/kubectl
# Download the worker components
wget https://storage.googleapis.com/kubernetes-release/release/${VERSION}/bin/linux/amd64/kube-proxy
wget https://storage.googleapis.com/kubernetes-release/release/${VERSION}/bin/linux/amd64/kubelet
# Download etcd
wget https://github.com/etcd-io/etcd/releases/download/v3.4.10/etcd-v3.4.10-linux-amd64.tar.gz
tar -xvf etcd-v3.4.10-linux-amd64.tar.gz
mv etcd-v3.4.10-linux-amd64/etcd* .
rm -fr etcd-v3.4.10-linux-amd64*
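The binaries fetched with wget are not executable yet (the etcd binaries from the tarball already are); mark them before distributing, a step the original leaves implicit:
chmod +x kube-apiserver kube-controller-manager kube-scheduler kubectl kube-proxy kubelet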
- Distribute the packages
# Distribute the master components to the master nodes
MASTERS=(cluster1 cluster2)
for instance in ${MASTERS[@]}; do
scp kube-apiserver kube-controller-manager kube-scheduler kubectl root@${instance}:/usr/local/bin/
done
# Distribute the worker components to the worker nodes
WORKERS=(cluster2 cluster3)
for instance in ${WORKERS[@]}; do
scp kubelet kube-proxy root@${instance}:/usr/local/bin/
done
# Distribute the etcd components to the etcd nodes
ETCDS=(cluster1 cluster2 cluster3)
for instance in ${ETCDS[@]}; do
scp etcd etcdctl root@${instance}:/usr/local/bin/
done
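A quick sanity check that the binaries landed and run (kubectl only exists on the masters, kubelet on the workers):
ssh root@cluster1 "kubectl version --client"
ssh root@cluster3 "kubelet --version"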
Generate Certificates
- Preparation
- Install cfssl
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -O /usr/local/bin/cfssl
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -O /usr/local/bin/cfssljson
chmod +x /usr/local/bin/cfssl*
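Confirm the tools are on the PATH:
cfssl version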
- Kubernetes cluster components
- Server components: kube-apiserver, kube-controller-manager, kube-scheduler, kubectl, container runtime (containerd)
- Client components: kubelet, kube-proxy, container runtime (containerd)
- Generate the root certificate
The root certificate is shared by every node in the cluster: create one CA certificate, and all later certificates are signed by it. On a control machine that can log in to all nodes, create a pki directory to hold the certificates
- Create the root certificate config files
cat > ca-config.json <<EOF
{
"signing": {
"default": {
"expiry": "876000h"
},
"profiles": {
"kubernetes": {
"usages": ["signing", "key encipherment", "server auth", "client auth"],
"expiry": "876000h"
}
}
}
}
EOF
cat > ca-csr.json <<EOF
{
"CN": "Kubernetes",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "US",
"L": "Portland",
"O": "Kubernetes",
"OU": "CA",
"ST": "Oregon"
}
]
}
EOF
- Generate the certificate and private key
Generate the certificate and its private key: ca.pem is the certificate, ca-key.pem is the private key
cfssl gencert -initca ca-csr.json | cfssljson -bare ca
Output files:
ca.pem
ca.csr
ca-key.pem
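The generated CA can be inspected with cfssl itself (an optional check):
cfssl certinfo -cert ca.pem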
- admin client certificate
Config file for the admin client certificate. Note that O is set to system:masters, the group Kubernetes recognizes as cluster administrators
cat > admin-csr.json <<EOF
{
"CN": "admin",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "BeiJing",
"L": "BeiJing",
"O": "system:masters",
"OU": "seven"
}
]
}
EOF
- Generate the admin client certificate and private key (signed by the root certificate and key, driven by the config file)
cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
admin-csr.json | cfssljson -bare admin
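This writes admin.pem, admin-key.pem, and admin.csr; the subject can be double-checked with openssl (optional):
openssl x509 -in admin.pem -noout -subject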