This article walks through building a Kubernetes cluster from scratch, step by step: environment preparation, software version selection, detailed procedures, and configuration notes.
Server requirements
Kubernetes version
- Version: v1.22.15
Node requirements
- Nodes: at least 3
- CPU: at least 2 cores
- Memory: at least 2 GB
Set the time zone
Some systems may have the wrong time zone configured; adjust it with:
timedatectl set-timezone Asia/Shanghai
Environment overview
OS | IP address | Role | CPU | Memory | Hostname |
---|---|---|---|---|---|
CentOS-7.9 | 192.168.200.11 | master | >=2 | >=2G | cluster1 |
CentOS-7.9 | 192.168.200.22 | master,worker | >=2 | >=2G | cluster2 |
CentOS-7.9 | 192.168.200.33 | worker | >=2 | >=2G | cluster3 |
Provisioning the VM nodes with Vagrant
- Vagrant: latest version
- VirtualBox: 7.0
- vagrant-vbguest: 0.21 (used to mount synced folders between host and guest)
vagrant plugin install vagrant-vbguest --plugin-version 0.21
Vagrantfile configuration
# -*- mode: ruby -*-
# vi: set ft=ruby :
nodes = [
  {
    :name => "cluster1",
    :eth1 => "192.168.200.11",
    :mem => "4096",
    :cpu => "2"
  },
  {
    :name => "cluster2",
    :eth1 => "192.168.200.22",
    :mem => "4096",
    :cpu => "2"
  },
  {
    :name => "cluster3",
    :eth1 => "192.168.200.33",
    :mem => "4096",
    :cpu => "2"
  },
]

Vagrant.configure("2") do |config|
  config.vm.box = "centos/7"
  nodes.each do |opts|
    config.vm.define opts[:name] do |config|
      config.vm.hostname = opts[:name]
      config.vm.provider "virtualbox" do |v|
        v.customize ["modifyvm", :id, "--memory", opts[:mem]]
        v.customize ["modifyvm", :id, "--cpus", opts[:cpu]]
      end
      config.vm.synced_folder "../share", "/vagrant_data"
      config.vm.network :public_network, ip: opts[:eth1]
    end
  end
end
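With the Vagrantfile in place, the three nodes can be created and accessed as follows (assuming the commands are run from the directory containing the Vagrantfile):
# Create and boot all three VMs defined above
vagrant up
# Log in to an individual node, e.g. cluster1
vagrant ssh cluster1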
System setup (all nodes)
Root privileges are required.
- Set the hostname (/etc/hosts)
- Install dependency packages
yum update -y
yum install -y socat conntrack ipvsadm ipset jq sysstat curl iptables libseccomp yum-utils
- Disable the firewall, SELinux, and swap, and reset iptables
# Disable SELinux
setenforce 0
sed -i '/SELINUX/s/enforcing/disabled/' /etc/selinux/config
# Disable the firewall
systemctl stop firewalld && systemctl disable firewalld
# Reset iptables rules
iptables -F && iptables -X && iptables -F -t nat && iptables -X -t nat && iptables -P FORWARD ACCEPT
# Disable swap
vi /etc/fstab
# To disable permanently, comment out the swap entry
#/swapfile none swap defaults 0 0
# To disable for the current session
swapoff -a
# Stop dnsmasq
service dnsmasq stop && systemctl disable dnsmasq
- Kernel parameter settings for Kubernetes
cat > /etc/sysctl.d/kubernetes.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_nonlocal_bind = 1
net.ipv4.ip_forward = 1
vm.swappiness = 0
vm.overcommit_memory = 1
EOF
sysctl -p /etc/sysctl.d/kubernetes.conf
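If sysctl complains that the net.bridge.* keys do not exist, the br_netfilter kernel module is not loaded yet. A minimal sketch to load it now and on every boot (the file name under /etc/modules-load.d/ is arbitrary):
# Load the module now
modprobe br_netfilter
# Ensure it is loaded automatically after a reboot
cat > /etc/modules-load.d/br_netfilter.conf <<EOF
br_netfilter
EOF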
- Configure passwordless SSH login
# Generate a key pair if one does not already exist
ssh-keygen -t rsa
# Show the public key
cat ~/.ssh/id_rsa.pub
# On every node, append the public key
echo "<pubkey content>" >> ~/.ssh/authorized_keys
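Where ssh-copy-id is available, appending the key on each node can be done in one command per target; a convenience sketch using the node IPs from the table above:
for ip in 192.168.200.11 192.168.200.22 192.168.200.33; do
  ssh-copy-id root@${ip}
done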
- Configure IP-to-hostname mappings (on every node)
cat >> /etc/hosts <<EOF
192.168.200.11 cluster1
192.168.200.22 cluster2
192.168.200.33 cluster3
EOF
- Download the Kubernetes component binaries
export VERSION=v1.22.15
# Download the master node components
wget https://storage.googleapis.com/kubernetes-release/release/${VERSION}/bin/linux/amd64/kube-apiserver
wget https://storage.googleapis.com/kubernetes-release/release/${VERSION}/bin/linux/amd64/kube-controller-manager
wget https://storage.googleapis.com/kubernetes-release/release/${VERSION}/bin/linux/amd64/kube-scheduler
wget https://storage.googleapis.com/kubernetes-release/release/${VERSION}/bin/linux/amd64/kubectl
# Download the worker node components
wget https://storage.googleapis.com/kubernetes-release/release/${VERSION}/bin/linux/amd64/kube-proxy
wget https://storage.googleapis.com/kubernetes-release/release/${VERSION}/bin/linux/amd64/kubelet
# Download etcd
wget https://github.com/etcd-io/etcd/releases/download/v3.4.10/etcd-v3.4.10-linux-amd64.tar.gz
tar -xvf etcd-v3.4.10-linux-amd64.tar.gz
mv etcd-v3.4.10-linux-amd64/etcd* .
rm -fr etcd-v3.4.10-linux-amd64*
- Distribute the binaries
# Copy the master components to the master nodes
MASTERS=(cluster1 cluster2)
for instance in ${MASTERS[@]}; do
scp kube-apiserver kube-controller-manager kube-scheduler kubectl root@${instance}:/usr/local/bin/
done
# Copy the worker components to the worker nodes
WORKERS=(cluster2 cluster3)
for instance in ${WORKERS[@]}; do
scp kubelet kube-proxy root@${instance}:/usr/local/bin/
done
# Copy etcd to the etcd nodes
ETCDS=(cluster1 cluster2 cluster3)
for instance in ${ETCDS[@]}; do
scp etcd etcdctl root@${instance}:/usr/local/bin/
done
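Note that wget does not set the execute bit on the downloaded files, so after distribution the binaries may not be runnable. A sketch to fix the permissions on every node (ALL_NODES is just a convenience variable introduced here):
ALL_NODES=(cluster1 cluster2 cluster3)
for instance in ${ALL_NODES[@]}; do
  ssh root@${instance} "chmod +x /usr/local/bin/kube* /usr/local/bin/etcd*"
done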
Generating certificates
Preparation
Install cfssl:
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -O /usr/local/bin/cfssl
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -O /usr/local/bin/cfssljson
chmod +x /usr/local/bin/cfssl*
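A quick sanity check that the tool is installed correctly:
cfssl version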
Generate the root CA certificate
Create the CA configuration files:
cat > ca-config.json <<EOF
{
"signing": {
"default": {
"expiry": "876000h"
},
"profiles": {
"kubernetes": {
"usages": ["signing", "key encipherment", "server auth", "client auth"],
"expiry": "876000h"
}
}
}
}
EOF
cat > ca-csr.json <<EOF
{
"CN": "Kubernetes",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "US",
"L": "Portland",
"O": "Kubernetes",
"OU": "CA",
"ST": "Oregon"
}
]
}
EOF
Generate the certificate and private key:
cfssl gencert -initca ca-csr.json | cfssljson -bare ca
Output files: ca.pem, ca-key.pem, and ca.csr.
Generate the other certificates
Follow similar steps to generate the admin, kubelet, kube-controller-manager, kube-proxy, kube-scheduler, kube-apiserver, Service Account, and proxy-client certificates; a worked example for the admin certificate follows.
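As one concrete illustration of those steps, the admin client certificate could be generated as follows. This is a sketch following the common Kubernetes the Hard Way convention: the CSR mirrors ca-csr.json, and O is set to system:masters because Kubernetes' default RBAC bindings grant that group cluster-admin rights.
cat > admin-csr.json <<EOF
{
  "CN": "admin",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "US",
      "L": "Portland",
      "O": "system:masters",
      "OU": "Kubernetes",
      "ST": "Oregon"
    }
  ]
}
EOF
cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  admin-csr.json | cfssljson -bare admin
# Output: admin.pem, admin-key.pem, admin.csr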
Deploying the etcd cluster
Set up the etcd certificate files:
mkdir -p /etc/etcd /var/lib/etcd
chmod 700 /var/lib/etcd
cp ca.pem kubernetes-key.pem kubernetes.pem /etc/etcd/
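If the certificates were generated on a single machine, copy them to each etcd node first so the commands above can find them; a minimal sketch:
for instance in cluster1 cluster2 cluster3; do
  scp ca.pem kubernetes.pem kubernetes-key.pem root@${instance}:~/
done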
Create the etcd.service unit file:
# ETCD_IP must be the current node's own IP; set it accordingly on each etcd node
ETCD_NAME=$(hostname -s)
ETCD_IP=192.168.200.11
ETCD_NAMES=(cluster1 cluster2 cluster3)
ETCD_IPS=(192.168.200.11 192.168.200.22 192.168.200.33)
cat <<EOF > /etc/systemd/system/etcd.service
[Unit]
Description=etcd
Documentation=https://github.com/coreos
[Service]
Type=notify
ExecStart=/usr/local/bin/etcd \\
--name ${ETCD_NAME} \\
--cert-file=/etc/etcd/kubernetes.pem \\
--key-file=/etc/etcd/kubernetes-key.pem \\
--peer-cert-file=/etc/etcd/kubernetes.pem \\
--peer-key-file=/etc/etcd/kubernetes-key.pem \\
--trusted-ca-file=/etc/etcd/ca.pem \\
--peer-trusted-ca-file=/etc/etcd/ca.pem \\
--peer-client-cert-auth \\
--client-cert-auth \\
--initial-advertise-peer-urls https://${ETCD_IP}:2380 \\
--listen-peer-urls https://${ETCD_IP}:2380 \\
--listen-client-urls https://${ETCD_IP}:2379,https://127.0.0.1:2379 \\
--advertise-client-urls https://${ETCD_IP}:2379 \\
--initial-cluster-token etcd-cluster-0 \\
--initial-cluster ${ETCD_NAMES[0]}=https://${ETCD_IPS[0]}:2380,${ETCD_NAMES[1]}=https://${ETCD_IPS[1]}:2380,${ETCD_NAMES[2]}=https://${ETCD_IPS[2]}:2380 \\
--initial-cluster-state new \\
--data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
Start the etcd cluster:
systemctl daemon-reload && systemctl enable etcd && systemctl start etcd
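Once etcd is running on all three nodes, membership and health can be verified from any of them using the certificates placed in /etc/etcd above:
ETCDCTL_API=3 etcdctl member list \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/etcd/ca.pem \
  --cert=/etc/etcd/kubernetes.pem \
  --key=/etc/etcd/kubernetes-key.pem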