Building a Kubernetes cluster entirely by hand. This article walks through the whole process step by step, covering the environment, the software versions, and each detailed step.
Server overview

- Kubernetes version: v1.22.15
- Node requirements:
  - at least 3 nodes
  - CPUs >= 2
  - Memory >= 2G
- Time zone: some systems ship with a mismatched time zone; fix it with:

```shell
timedatectl set-timezone Asia/Shanghai
```

- Environment:

OS | IP address | Node role | CPU | Memory | Hostname |
---|---|---|---|---|---|
CentOS-7.9 | 192.168.200.11 | master | >=2 | >=2G | cluster1 |
CentOS-7.9 | 192.168.200.22 | master,worker | >=2 | >=2G | cluster2 |
CentOS-7.9 | 192.168.200.33 | worker | >=2 | >=2G | cluster3 |
- The virtual machine nodes are built with Vagrant:
  - Vagrant: latest
  - VirtualBox: 7.0
  - vagrant-vbguest: 0.21 (mounts a synced folder between host and guest)

```shell
vagrant plugin install vagrant-vbguest --plugin-version 0.21
```

The `Vagrantfile` is as follows:
```ruby
# -*- mode: ruby -*-
# vi: set ft=ruby :

nodes = [
  {
    :name => "cluster1",
    :eth1 => "192.168.200.11",
    :mem => "4096",
    :cpu => "2"
  },
  {
    :name => "cluster2",
    :eth1 => "192.168.200.22",
    :mem => "4096",
    :cpu => "2"
  },
  {
    :name => "cluster3",
    :eth1 => "192.168.200.33",
    :mem => "4096",
    :cpu => "2"
  },
]

Vagrant.configure("2") do |config|
  # Every Vagrant development environment requires a box.
  config.vm.box = "centos/7"
  nodes.each do |opts|
    config.vm.define opts[:name] do |config|
      config.vm.hostname = opts[:name]
      config.vm.provider "virtualbox" do |v|
        v.customize ["modifyvm", :id, "--memory", opts[:mem]]
        v.customize ["modifyvm", :id, "--cpus", opts[:cpu]]
      end
      #config.ssh.username = "root"
      #config.ssh.private_key_path = "/Users/jinpeng.d/.ssh/id_rsa"
      config.vm.synced_folder "../share", "/vagrant_data"
      config.vm.network :public_network, ip: opts[:eth1]
    end
  end
end
```
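With the `Vagrantfile` in place, the nodes can be brought up and reached with the standard Vagrant workflow; a quick sketch:

```shell
# Create and boot all three nodes defined in the Vagrantfile
vagrant up
# Show the state of each node
vagrant status
# SSH into a node, e.g. the first master
vagrant ssh cluster1
```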
System setup (all nodes)

- All operations require `root` privileges.
- Set each node's `hostname` (and `/etc/hosts`).
- Install dependency packages:

```shell
yum update -y
yum install -y socat conntrack ipvsadm ipset jq sysstat curl iptables libseccomp yum-utils
```

- Disable the firewall, `selinux`, and `swap`, and reset `iptables`:

```shell
# 1. Disable selinux
setenforce 0
sed -i '/SELINUX/s/enforcing/disabled/' /etc/selinux/config
# 2. Disable the firewall
systemctl stop firewalld && systemctl disable firewalld
# 3. Reset iptables rules
iptables -F && iptables -X && iptables -F -t nat && iptables -X -t nat && iptables -P FORWARD ACCEPT
# 4. Disable swap
vi /etc/fstab
# Disable permanently by commenting out the swap entry:
#/swapfile none swap defaults 0 0
# Disable temporarily:
swapoff -a
# Do both: the temporary change takes effect immediately without a reboot,
# and the permanent one keeps swap off after a reboot
# 5. Stop dnsmasq (otherwise domain name resolution breaks)
service dnsmasq stop && systemctl disable dnsmasq
```
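Instead of hand-editing `/etc/fstab`, the swap entry can be commented out with `sed`. A sketch that previews the change on a copy first — run the same `sed -ri` against `/etc/fstab` itself to apply it:

```shell
# Work on a copy to preview; any non-comment line whose fstype is "swap" gets commented out
cp /etc/fstab /tmp/fstab.preview
sed -ri 's/^([^#].*[[:space:]]swap[[:space:]].*)$/#\1/' /tmp/fstab.preview
# Lines that would still mount swap show up here
grep -E '^[^#].*[[:space:]]swap[[:space:]]' /tmp/fstab.preview || echo "no active swap entries"
```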
- Kubernetes kernel parameters:

```shell
cat > /etc/sysctl.d/kubernetes.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_nonlocal_bind = 1
net.ipv4.ip_forward = 1
vm.swappiness = 0
vm.overcommit_memory = 1
EOF
# Apply the settings
sysctl -p /etc/sysctl.d/kubernetes.conf
```
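Note that the `net.bridge.*` keys only exist once the `br_netfilter` kernel module is loaded; if `sysctl -p` reports them as unknown, load the module first. A sketch:

```shell
# Load the bridge netfilter module so the net.bridge.* sysctls exist
modprobe br_netfilter
# Load it automatically on boot
echo "br_netfilter" > /etc/modules-load.d/br_netfilter.conf
# Re-apply the settings
sysctl -p /etc/sysctl.d/kubernetes.conf
```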
- Configure passwordless SSH login. Pick one of the nodes (or a separate machine), generate an `ssh` key pair there, and put the public key on every k8s node:

```shell
# Generate a key pair, if you don't already have one
ssh-keygen -t rsa
# Show the public key
cat ~/.ssh/id_rsa.pub
# On every node, append it to authorized_keys
echo "<pubkey content>" >> ~/.ssh/authorized_keys
```
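Instead of pasting the key by hand, `ssh-copy-id` appends it (and fixes the file permissions) in one step; a sketch using the host names from the table above:

```shell
# Copy the public key to every node; prompts once per host for the password
for host in cluster1 cluster2 cluster3; do
  ssh-copy-id -i ~/.ssh/id_rsa.pub root@${host}
done
```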
- Configure the `IP` host mappings (on every node):

```shell
# Append, so the existing localhost entries are preserved
cat >> /etc/hosts <<EOF
192.168.200.11 cluster1
192.168.200.22 cluster2
192.168.200.33 cluster3
EOF
```
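A quick check that the names now resolve on each node:

```shell
# Each name should print its static IP from /etc/hosts
for h in cluster1 cluster2 cluster3; do
  getent hosts ${h}
done
```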
- Download the `k8s` component binaries:

```shell
export VERSION=v1.22.15
# Master components
wget https://storage.googleapis.com/kubernetes-release/release/${VERSION}/bin/linux/amd64/kube-apiserver
wget https://storage.googleapis.com/kubernetes-release/release/${VERSION}/bin/linux/amd64/kube-controller-manager
wget https://storage.googleapis.com/kubernetes-release/release/${VERSION}/bin/linux/amd64/kube-scheduler
wget https://storage.googleapis.com/kubernetes-release/release/${VERSION}/bin/linux/amd64/kubectl
# Worker components
wget https://storage.googleapis.com/kubernetes-release/release/${VERSION}/bin/linux/amd64/kube-proxy
wget https://storage.googleapis.com/kubernetes-release/release/${VERSION}/bin/linux/amd64/kubelet
# etcd
wget https://github.com/etcd-io/etcd/releases/download/v3.4.10/etcd-v3.4.10-linux-amd64.tar.gz
tar -xvf etcd-v3.4.10-linux-amd64.tar.gz
mv etcd-v3.4.10-linux-amd64/etcd* .
rm -fr etcd-v3.4.10-linux-amd64*
# The downloaded binaries are not executable yet
chmod +x kube-apiserver kube-controller-manager kube-scheduler kubectl kube-proxy kubelet
```
- Distribute the packages:

```shell
# Copy the master components to the master nodes
MASTERS=(cluster1 cluster2)
for instance in ${MASTERS[@]}; do
  scp kube-apiserver kube-controller-manager kube-scheduler kubectl root@${instance}:/usr/local/bin/
done
# Copy the worker components to the worker nodes
WORKERS=(cluster2 cluster3)
for instance in ${WORKERS[@]}; do
  scp kubelet kube-proxy root@${instance}:/usr/local/bin/
done
# Copy the etcd components to the etcd nodes
ETCDS=(cluster1 cluster2 cluster3)
for instance in ${ETCDS[@]}; do
  scp etcd etcdctl root@${instance}:/usr/local/bin/
done
```
Generating certificates

- Preparation
  - Install `cfssl`:

```shell
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -O /usr/local/bin/cfssl
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -O /usr/local/bin/cfssljson
chmod +x /usr/local/bin/cfssl*
```
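`pkg.cfssl.org` has been unreliable in recent years; the same binaries are also published on the cfssl GitHub releases page. A sketch (the version number below is an example — check the releases page for the current one):

```shell
# Alternative download from GitHub releases (example version)
CFSSL_VERSION=1.6.4
wget https://github.com/cloudflare/cfssl/releases/download/v${CFSSL_VERSION}/cfssl_${CFSSL_VERSION}_linux_amd64 -O /usr/local/bin/cfssl
wget https://github.com/cloudflare/cfssl/releases/download/v${CFSSL_VERSION}/cfssljson_${CFSSL_VERSION}_linux_amd64 -O /usr/local/bin/cfssljson
chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson
# Sanity check
cfssl version
```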
- Kubernetes cluster components:
  - Server-side components: `kube-apiserver`, `kube-controller-manager`, `kube-scheduler`, `kubectl`, container runtime (containerd)
  - Client-side components: `kubelet`, `kube-proxy`, container runtime (containerd)
- Generating the root certificate

The root certificate is shared by all nodes in the cluster: only one `CA` certificate is created, and every certificate generated later is signed by it. On a console machine that can log in to all the nodes, create a `pki` directory to hold the certificates.

  - Create the root certificate config files:

```shell
cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "876000h"
    },
    "profiles": {
      "kubernetes": {
        "usages": ["signing", "key encipherment", "server auth", "client auth"],
        "expiry": "876000h"
      }
    }
  }
}
EOF

cat > ca-csr.json <<EOF
{
  "CN": "Kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "US",
      "L": "Portland",
      "O": "Kubernetes",
      "OU": "CA",
      "ST": "Oregon"
    }
  ]
}
EOF
```
  - Generate the certificate and private key (`ca.pem` is the certificate, `ca-key.pem` is its private key):

```shell
cfssl gencert -initca ca-csr.json | cfssljson -bare ca
```

Output files: `ca.csr`, `ca.pem`, `ca-key.pem`
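It is worth sanity-checking the new CA before signing anything with it; `openssl` can print its subject and validity window. A quick sketch:

```shell
# Print the CA certificate's subject and validity period
openssl x509 -in ca.pem -noout -subject -dates
# Confirm it is a CA certificate (look for CA:TRUE)
openssl x509 -in ca.pem -noout -text | grep -A1 "Basic Constraints"
```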
admin client certificate

- `admin` client certificate config file:

```shell
cat > admin-csr.json <<EOF
{
  "CN": "admin",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "system:masters",
      "OU": "seven"
    }
  ]
}
EOF
```
- Generate the `admin` client certificate and private key (signed with the root certificate and key, using the config above):

```shell
cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  admin-csr.json | cfssljson -bare admin
```

Output files: `admin.csr`, `admin.pem`, `admin-key.pem`
kubelet client certificates

Kubernetes uses a special-purpose authorization mode called `Node Authorizer` to authorize `API` requests made by kubelets. A `kubelet` uses credentials that identify it as a member of the `system:nodes` group, with a username of `system:node:<nodeName>`. Generate a certificate for each `Worker` node accordingly.
- Generate the `kubelet` client certificate config files (first set the `Worker` node list):

```shell
WORKERS=(cluster2 cluster3)
WORKER_IPS=(192.168.200.22 192.168.200.33)
for ((i=0;i<${#WORKERS[@]};i++)); do
cat > ${WORKERS[$i]}-csr.json <<EOF
{
  "CN": "system:node:${WORKERS[$i]}",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "O": "system:nodes",
      "OU": "seven",
      "ST": "Beijing"
    }
  ]
}
EOF
done
```
- Generate the `kubelet` client certificates and keys:

```shell
for ((i=0;i<${#WORKERS[@]};i++)); do
  cfssl gencert \
    -ca=ca.pem \
    -ca-key=ca-key.pem \
    -config=ca-config.json \
    -hostname=${WORKERS[$i]},${WORKER_IPS[$i]} \
    -profile=kubernetes \
    ${WORKERS[$i]}-csr.json | cfssljson -bare ${WORKERS[$i]}
done
```

Output files: `{worker-node-name}.csr`, `{worker-node-name}.pem`, `{worker-node-name}-key.pem`
kube-controller-manager client certificate

- `kube-controller-manager` client certificate config file:

```shell
cat > kube-controller-manager-csr.json <<EOF
{
  "CN": "system:kube-controller-manager",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "system:kube-controller-manager",
      "OU": "seven"
    }
  ]
}
EOF
```
- Generate the `kube-controller-manager` client certificate:

```shell
cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager
```

Output files: `kube-controller-manager.csr`, `kube-controller-manager.pem`, `kube-controller-manager-key.pem`
kube-proxy client certificate

- `kube-proxy` client certificate config file:

```shell
cat > kube-proxy-csr.json <<EOF
{
  "CN": "system:kube-proxy",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "seven"
    }
  ]
}
EOF
```
- Generate the `kube-proxy` client certificate:

```shell
cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  kube-proxy-csr.json | cfssljson -bare kube-proxy
```

Output files: `kube-proxy.csr`, `kube-proxy.pem`, `kube-proxy-key.pem`
kube-scheduler client certificate

- `kube-scheduler` client certificate config file:

```shell
cat > kube-scheduler-csr.json <<EOF
{
  "CN": "system:kube-scheduler",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "system:kube-scheduler",
      "OU": "seven"
    }
  ]
}
EOF
```
- Generate the `kube-scheduler` client certificate:

```shell
cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  kube-scheduler-csr.json | cfssljson -bare kube-scheduler
```

Output files: `kube-scheduler.csr`, `kube-scheduler.pem`, `kube-scheduler-key.pem`
kube-apiserver server certificate

- `kube-apiserver` server certificate config file:

```shell
cat > kubernetes-csr.json <<EOF
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "seven"
    }
  ]
}
EOF
```
- Generate the `kube-apiserver` server certificate. The server certificate differs from the client certificates: clients reach the server through a name or `IP`, so the certificate must include every name and `IP` that clients may use, so that clients can verify it. Include:
  - the service addresses of every node that may act as a `master`
  - the `apiserver` service `IP` (usually the first `IP` of the `svc` CIDR)
  - all `master` internal and public `IP`s, comma-separated (you can list every node, to guard against the `master` nodes changing later)

```shell
KUBERNETES_SVC_IP="10.233.0.1"
MASTER_IPS="192.168.200.11,192.168.200.22,192.168.200.33"
cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -hostname=${KUBERNETES_SVC_IP},${MASTER_IPS},127.0.0.1,kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster,kubernetes.svc.cluster.local \
  -profile=kubernetes \
  kubernetes-csr.json | cfssljson -bare kubernetes
```

Output files: `kubernetes.csr`, `kubernetes.pem`, `kubernetes-key.pem`
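After signing, it is worth confirming that every address actually landed in the certificate's SAN list. A sketch (`-ext` needs OpenSSL 1.1.1+; older versions can grep the full `-text` output instead):

```shell
# Print the Subject Alternative Name extension of the apiserver certificate
openssl x509 -in kubernetes.pem -noout -ext subjectAltName
```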
Service Account certificate

- Config file:

```shell
cat > service-account-csr.json <<EOF
{
  "CN": "service-accounts",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "seven"
    }
  ]
}
EOF
```

- Generate the certificate:

```shell
cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  service-account-csr.json | cfssljson -bare service-account
```

Output files: `service-account.csr`, `service-account.pem`, `service-account-key.pem`
proxy-client certificate

- Config file:

```shell
cat > proxy-client-csr.json <<EOF
{
  "CN": "aggregator",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "seven"
    }
  ]
}
EOF
```

- Generate the certificate:

```shell
cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  proxy-client-csr.json | cfssljson -bare proxy-client
```

Output files: `proxy-client.csr`, `proxy-client.pem`, `proxy-client-key.pem`
- Distribute the client and server certificates
  - Certificates and private keys needed by the `Worker` nodes (each node gets its own certificate and key):

```shell
WORKERS=("cluster2" "cluster3")
for instance in ${WORKERS[@]}; do
  scp ca.pem ${instance}-key.pem ${instance}.pem root@${instance}:~/
done
```

  - Certificates and private keys needed by the `Master` nodes:
    - root certificate and key (`ca*.pem`)
    - `kube-apiserver` certificate and key (`kubernetes*.pem`)
    - `service-account` certificate and key (`service-account*.pem`)
    - `proxy-client` certificate and key (`proxy-client*.pem`)

```shell
MASTERS=(cluster1 cluster2)
for instance in ${MASTERS[@]}; do
  scp ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem \
    service-account-key.pem service-account.pem proxy-client.pem proxy-client-key.pem root@${instance}:~/
done
```
Authentication configs for the Kubernetes components

Kubernetes authentication config files, called `kubeconfigs`, let Kubernetes clients locate the `kube-apiserver` and pass its authentication. Generate one for each of:

- `controller-manager`
- `kubelet`
- `kube-proxy`
- `scheduler`
- the `admin` user
- Generate the `kubeconfigs` for `kubelet` (one per Worker node):

```shell
WORKERS=("cluster2" "cluster3")
for instance in ${WORKERS[@]}; do
  kubectl config set-cluster kubernetes \
    --certificate-authority=ca.pem \
    --embed-certs=true \
    --server=https://127.0.0.1:6443 \
    --kubeconfig=${instance}.kubeconfig

  kubectl config set-credentials system:node:${instance} \
    --client-certificate=${instance}.pem \
    --client-key=${instance}-key.pem \
    --embed-certs=true \
    --kubeconfig=${instance}.kubeconfig

  kubectl config set-context default \
    --cluster=kubernetes \
    --user=system:node:${instance} \
    --kubeconfig=${instance}.kubeconfig

  kubectl config use-context default --kubeconfig=${instance}.kubeconfig
done
```
- Generate the `kubeconfig` for `kube-proxy`:

```shell
kubectl config set-cluster kubernetes \
  --certificate-authority=ca.pem \
  --embed-certs=true \
  --server=https://127.0.0.1:6443 \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-credentials system:kube-proxy \
  --client-certificate=kube-proxy.pem \
  --client-key=kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-context default \
  --cluster=kubernetes \
  --user=system:kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
```
- Generate the `kubeconfig` for `kube-controller-manager`:

```shell
kubectl config set-cluster kubernetes \
  --certificate-authority=ca.pem \
  --embed-certs=true \
  --server=https://127.0.0.1:6443 \
  --kubeconfig=kube-controller-manager.kubeconfig

kubectl config set-credentials system:kube-controller-manager \
  --client-certificate=kube-controller-manager.pem \
  --client-key=kube-controller-manager-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-controller-manager.kubeconfig

kubectl config set-context default \
  --cluster=kubernetes \
  --user=system:kube-controller-manager \
  --kubeconfig=kube-controller-manager.kubeconfig

kubectl config use-context default --kubeconfig=kube-controller-manager.kubeconfig
```
- Generate the `kubeconfig` for `kube-scheduler`:

```shell
kubectl config set-cluster kubernetes \
  --certificate-authority=ca.pem \
  --embed-certs=true \
  --server=https://127.0.0.1:6443 \
  --kubeconfig=kube-scheduler.kubeconfig

kubectl config set-credentials system:kube-scheduler \
  --client-certificate=kube-scheduler.pem \
  --client-key=kube-scheduler-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-scheduler.kubeconfig

kubectl config set-context default \
  --cluster=kubernetes \
  --user=system:kube-scheduler \
  --kubeconfig=kube-scheduler.kubeconfig

kubectl config use-context default --kubeconfig=kube-scheduler.kubeconfig
```
- Generate the `kubeconfig` for the `admin` user:

```shell
kubectl config set-cluster kubernetes \
  --certificate-authority=ca.pem \
  --embed-certs=true \
  --server=https://127.0.0.1:6443 \
  --kubeconfig=admin.kubeconfig

kubectl config set-credentials admin \
  --client-certificate=admin.pem \
  --client-key=admin-key.pem \
  --embed-certs=true \
  --kubeconfig=admin.kubeconfig

kubectl config set-context default \
  --cluster=kubernetes \
  --user=admin \
  --kubeconfig=admin.kubeconfig

kubectl config use-context default --kubeconfig=admin.kubeconfig
```
- Distribute the `kubeconfig` files: copy the `kubelet` and `kube-proxy` kubeconfigs to the `Worker` nodes:

```shell
WORKERS=("cluster2" "cluster3")
for instance in ${WORKERS[@]}; do
  scp ${instance}.kubeconfig kube-proxy.kubeconfig ${instance}:~/
done
```
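The `controller-manager`, `scheduler`, and `admin` kubeconfigs are consumed on the master nodes, so they need to be copied there as well; a sketch mirroring the worker loop above:

```shell
# Copy the control-plane kubeconfigs to the master nodes
MASTERS=(cluster1 cluster2)
for instance in ${MASTERS[@]}; do
  scp admin.kubeconfig kube-controller-manager.kubeconfig kube-scheduler.kubeconfig root@${instance}:~/
done
```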