Installing Kubernetes v1.14

1. Environment Preparation

ip              type                    docker        os                               k8s version
172.21.17.30    master,etcd                           CentOS Linux release 7.4.1708    v1.14.6
172.21.17.31    master,etcd                           CentOS Linux release 7.4.1708
172.21.16.110   master,etcd                           CentOS Linux release 7.4.1708
172.21.16.87    node,flanneld           18.06.2-ce    CentOS Linux release 7.4.1708
172.21.16.240   node,flanneld,ha+kee    18.06.2-ce    CentOS Linux release 7.4.1708
172.21.16.204   node,flanneld,ha+kee    18.06.2-ce    CentOS Linux release 7.4.1708
172.21.16.45    vip                                   CentOS Linux release 7.4.1708

2. System Initialization

2.1 Install Dependency Packages

Run this on every server; also turn off the firewall and disable SELinux.

# yum install -y epel-release
# yum install -y conntrack ntpdate ntp ipvsadm ipset jq iptables curl sysstat libseccomp wget

2.2 Disable the swap Partition

        If the swap partition is enabled, kubelet fails to start (this can be overridden by setting --fail-swap-on to false), so swap must be disabled on every machine. Also comment out the corresponding entries in /etc/fstab so the swap partition is not mounted automatically at boot.

# swapoff -a
# sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab

2.3 Load Kernel Modules

# modprobe ip_vs_rr
# modprobe br_netfilter
2.3.1 Load the modules and add them to boot startup
# cat > /etc/rc.local  << EOF
modprobe ip_vs_rr
modprobe br_netfilter
EOF
2.3.2 Load kernel modules with systemd-modules-load
# cat > /etc/modules-load.d/ipvs.conf << EOF
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
br_netfilter
nf_conntrack_ipv4
EOF
# systemctl enable --now systemd-modules-load.service
2.3.3 Verify the modules loaded successfully
# lsmod |egrep " ip_vs_rr|br_netfilter"
Why IPVS: starting with Kubernetes 1.8, kube-proxy gained an IPVS mode. Like the iptables mode it is based on Netfilter, but it uses hash tables for lookups, so once the number of Services grows large the speed advantage of hash lookups becomes apparent and Service performance improves.
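As a quick check later on (a sketch: it assumes ipvsadm from section 2.1 is installed and that kube-proxy is eventually started in IPVS mode), the IPVS virtual servers created for Services can be listed with:

# ipvsadm -Ln    # lists IPVS virtual servers and their real servers; empty until kube-proxy runs in IPVS mode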

2.4 Tune Kernel Parameters

# cat /etc/sysctl.d/kubernetes.conf 
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
net.ipv4.tcp_tw_recycle=0
vm.swappiness=0
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_instances=8192
fs.inotify.max_user_watches=1048576
fs.file-max=52706963
fs.nr_open=52706963
net.ipv6.conf.all.disable_ipv6=1
net.netfilter.nf_conntrack_max=2310720

# sysctl -p /etc/sysctl.d/kubernetes.conf
  • tcp_tw_recycle must be turned off, otherwise it conflicts with NAT and breaks connectivity;
  • IPv6 is disabled to avoid triggering a docker bug;

2.5 Set the System Time Zone

# timedatectl set-timezone Asia/Shanghai
# timedatectl set-local-rtc 0
# systemctl restart rsyslog
# systemctl restart crond

2.6 Stop Unneeded Services

# systemctl stop postfix && systemctl disable postfix

3. Upgrade the Kernel

        The 3.10.x kernel shipped with CentOS 7.x has bugs that make Docker and Kubernetes unstable, for example:

  • 1. Newer Docker releases (1.13 and later) enable the kernel memory accounting feature that the 3.10 kernel only supports experimentally (and it cannot be turned off); under node pressure, e.g. frequent container starts and stops, this causes cgroup memory leaks;
  • 2. Network device reference-count leaks, producing errors such as: "kernel:unregister_netdevice: waiting for eth0 to become free. Usage count = 1";
            Possible fixes:
  • 1. Upgrade the kernel to 4.4.x or later;
  • 2. Or compile the kernel manually with the CONFIG_MEMCG_KMEM feature disabled;
  • 3. Or install Docker 18.09.1 or later, which fixes the problem. Since kubelet also sets kmem (it vendors runc), kubelet would then need to be rebuilt with GOFLAGS="-tags=nokmem".
            See the CentOS 7 kernel upgrade reference; one common upgrade path is sketched below.
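A minimal sketch using the ELRepo repository (the release-rpm URL, package choice, and boot-entry index are assumptions to verify on the actual hosts):

# rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
# yum install -y https://www.elrepo.org/elrepo-release-7.el7.elrepo.noarch.rpm
# yum --enablerepo=elrepo-kernel install -y kernel-lt    # long-term 4.4.x branch; kernel-ml would install the mainline kernel instead
# grub2-set-default 0                                    # assumes the new kernel is the first GRUB menu entry
# reboot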

4. Create the CA Certificate and Key

        To keep communication secure, the Kubernetes components use x509 certificates to encrypt and authenticate their traffic. The CA (Certificate Authority) is a self-signed root certificate used to sign all certificates created later. All certificates are created with CloudFlare's PKI toolkit cfssl on one master node and then distributed to the other servers.

  • Note: every certificate that is generated must be distributed to the other master nodes.

4.1 Install the cfssl Toolkit

# curl -L https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -o cfssl
# curl -L https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -o cfssljson
# curl -L https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -o cfssl-certinfo
# chmod +x * &&mv cfssl* /usr/bin/
# scp /usr/bin/cfssl* {master-ip}:/usr/bin

4.2 Create the Root Certificate (CA)

The CA certificate is shared by all nodes in the cluster; only one needs to be created, and every certificate created afterwards is signed by it.

4.3 Create the Configuration File

        The CA configuration file defines the profiles and parameters (usages, expiry, server auth, client auth, encryption, etc.) under which the root certificate is used; a specific profile is referenced later when signing the other certificates.

# mkdir k8s && cd k8s    # all certificates needed by k8s below are generated in this directory
  • ca-config.json

    cat > ca-config.json <<EOF
    {
    "signing": {
    "default": {
    "expiry": "87600h"
    },
    "profiles": {
    "kubernetes": {
    "usages": [
    "signing",
    "key encipherment",
    "server auth",
    "client auth"
    ],
    "expiry": "87600h"
    }
    }
    }
    }
    EOF
  • signing: the certificate can be used to sign other certificates; the generated ca.pem has CA=TRUE;

  • server auth: a client can use this certificate to verify the certificate presented by a server;

  • client auth: a server can use this certificate to verify the certificate presented by a client;

4.4 Create the Certificate Signing Request File

  • ca-csr.json

    cat > ca-csr.json <<EOF
    {
    "CN": "kubernetes",
    "key": {
    "algo": "rsa",
    "size": 2048
    },
    "names": [
    {
    "C": "CN",
    "ST": "BeiJing",
    "L": "BeiJing",
    "O": "k8s",
    "OU": "4Paradigm"
    }
    ],
    "ca": {
    "expiry": "876000h"
    }
    }
    EOF
  • CN: Common Name; kube-apiserver extracts this field from the certificate as the requesting User Name; browsers use it to verify whether a site is legitimate;

  • O: Organization; kube-apiserver extracts this field as the Group the requesting user belongs to;

  • kube-apiserver uses the extracted User and Group as the identity for RBAC authorization;

  • Generate the CA certificate and private key

    # cfssl gencert -initca ca-csr.json | cfssljson -bare ca
    # ls ca*
    # mkdir -p /etc/kubernetes/ssl && cp ca*.pem ca-config.json /etc/kubernetes/ssl

5. Deploy the kubectl Command-Line Tool

        kubectl reads the kube-apiserver address and authentication information from ~/.kube/config by default; if that file is not configured, kubectl commands may fail with:

# kubectl get pods
The connection to the server localhost:8080 was refused - did you specify the right host or port?
  • Note:
  • This only needs to be done once. The generated kubeconfig file is generic; it can be copied to any machine that needs to run kubectl and renamed to ~/.kube/config;

5.1 Download and Distribute the kubectl Binary

Here, all binaries needed by both the node and master roles are distributed in one pass.

# wget https://dl.k8s.io/v1.14.6/kubernetes-server-linux-amd64.tar.gz
# tar -xzvf kubernetes-server-linux-amd64.tar.gz

# master nodes
# scp kubernetes/server/bin/{apiextensions-apiserver,cloud-controller-manager,kube-apiserver,kube-controller-manager,kube-proxy,kube-scheduler,kubeadm,kubectl,kubelet,mounter} {master-ip}:/usr/bin/

# node machines
# scp kubernetes/server/bin/{kube-proxy,kubelet} {node-ip}:/usr/bin/

5.2 Create the admin Certificate and Private Key

kubectl talks to the apiserver over its secure https port, and the apiserver authenticates and authorizes the certificate it presents.
As the cluster management tool, kubectl needs the highest privileges, so an admin certificate with full privileges is created here.

  • Create the certificate signing request:

    # cat > admin-csr.json <<EOF
    {
    "CN": "admin",
    "hosts": [],
    "key": {
    "algo": "rsa",
    "size": 2048
    },
    "names": [
    {
    "C": "CN",
    "ST": "BeiJing",
    "L": "BeiJing",
    "O": "system:masters",
    "OU": "4Paradigm"
    }
    ]
    }
    EOF
  • O is system:masters; when kube-apiserver receives this certificate it sets the request's Group to system:masters;

  • The predefined ClusterRoleBinding cluster-admin binds the Group system:masters to the Role cluster-admin, which grants access to all APIs;

  • This certificate is only used by kubectl as a client certificate, so the hosts field is empty;

  • Generate the certificate and private key

    # cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json  -profile=kubernetes admin-csr.json | cfssljson -bare admin
    # ls admin*
    # cp admin*.pem /etc/kubernetes/ssl/

5.3 Create the kubeconfig File

        kubeconfig is kubectl's configuration file and contains everything needed to access the apiserver: the apiserver address, the CA certificate, and kubectl's own certificate;

# set the cluster API address
# KUBE_APISERVER="https://172.21.16.45:8443"

# set cluster parameters
kubectl config set-cluster kubernetes \
--certificate-authority=ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=kubectl.kubeconfig

# set client authentication parameters
kubectl config set-credentials admin \
--client-certificate=admin.pem \
--client-key=admin-key.pem \
--embed-certs=true \
--kubeconfig=kubectl.kubeconfig

# set context parameters
kubectl config set-context kubernetes \
--cluster=kubernetes \
--user=admin \
--kubeconfig=kubectl.kubeconfig

# set the default context
kubectl config use-context kubernetes --kubeconfig=kubectl.kubeconfig
  • Tip: distribute the kubectl.kubeconfig file and rename it to ~/.kube/config, for example as shown below;
  • --certificate-authority: the root certificate used to verify the kube-apiserver certificate;
  • --client-certificate, --client-key: the admin certificate and private key just generated, used when connecting to kube-apiserver;
  • --embed-certs=true: embeds the contents of ca.pem and admin.pem into the generated kubectl.kubeconfig file (without it, only the certificate file paths are written, and the certificate files would have to be copied separately whenever the kubeconfig is copied to another machine, which is inconvenient);
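A possible way to distribute it (a sketch; it assumes root ssh access to the other machines):

# mkdir -p ~/.kube && cp kubectl.kubeconfig ~/.kube/config
# ssh {master-ip} "mkdir -p ~/.kube" && scp kubectl.kubeconfig {master-ip}:~/.kube/config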

6. Deploy the etcd Cluster

        etcd is a Raft-based distributed key-value store developed by CoreOS, commonly used for service discovery, shared configuration, and concurrency control (leader election, distributed locks, and so on). Kubernetes uses etcd to store all of its runtime data.

Steps for a three-node highly available etcd cluster:

  • download and distribute the etcd binaries;
  • create x509 certificates for the etcd cluster nodes, used to encrypt traffic between clients (such as etcdctl) and the etcd cluster, and between etcd members;
  • create the etcd systemd unit file and configure the service parameters;
  • check that the cluster is working;
  • Note: everything is done on one master[etcd] node; the other master[etcd] nodes receive the files via distribution.

6.1 Download and Distribute the etcd Binaries

# mkdir etcd && cd etcd
# wget https://github.com/coreos/etcd/releases/download/v3.3.13/etcd-v3.3.13-linux-amd64.tar.gz
# tar -xvf etcd-v3.3.13-linux-amd64.tar.gz
# scp etcd-v3.3.13-linux-amd64/etcd* {master-ip}:/usr/bin/

6.2 Create the etcd Certificate and Private Key

# cat > etcd-csr.json <<EOF
{
"CN": "etcd",
"hosts": [
"127.0.0.1",
"172.21.17.30",
"172.21.17.31",
"172.21.16.110"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "BeiJing",
"L": "BeiJing",
"O": "k8s",
"OU": "4Paradigm"
}
]
}
EOF
  • The hosts field lists the IPs or domain names of the etcd nodes authorized to use this certificate; the IPs of all three etcd cluster nodes must be included;
  • Generate the certificate and private key
    # cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes etcd-csr.json | cfssljson -bare etcd
    # ls etcd*pem
    # mkdir -p /etc/etcd/ssl && cp etcd*pem /etc/etcd/ssl/

6.3 Create the etcd systemd Unit Template File

# cat /etc/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/data
ExecStart=/usr/bin/etcd \
--data-dir=/var/lib/etcd/data \
--wal-dir=/var/lib/etcd/wal \
--name=etcd1 \    # change according to the node name
--cert-file=/etc/etcd/ssl/etcd.pem \
--key-file=/etc/etcd/ssl/etcd-key.pem \
--trusted-ca-file=/etc/kubernetes/ssl/ca.pem \
--peer-cert-file=/etc/etcd/ssl/etcd.pem \
--peer-key-file=/etc/etcd/ssl/etcd-key.pem \
--peer-trusted-ca-file=/etc/kubernetes/ssl/ca.pem \
--peer-client-cert-auth \
--client-cert-auth \
--listen-peer-urls=https://172.21.17.30:2380 \
--initial-advertise-peer-urls=https://172.21.17.30:2380 \
--listen-client-urls=https://172.21.17.30:2379,http://127.0.0.1:2379 \
--advertise-client-urls=https://172.21.17.30:2379 \
--initial-cluster-token=etcd-cluster-0 \
--initial-cluster=etcd1=https://172.21.17.30:2380,etcd2=https://172.21.17.31:2380,etcd3=https://172.21.16.110:2380 \
--initial-cluster-state=new \
--auto-compaction-mode=periodic \
--auto-compaction-retention=1 \
--max-request-bytes=33554432 \
--quota-backend-bytes=6442450944 \
--heartbeat-interval=250 \
--election-timeout=2000
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

# mkdir -p /var/lib/etcd/{data,wal}
  • WorkingDirectory, --data-dir: the working directory and data directory, ${ETCD_DATA_DIR}; this directory must be created before starting the service;
  • --wal-dir: the WAL directory; for better performance it usually sits on an SSD or on a different disk than --data-dir;
  • --name: the node name; when --initial-cluster-state is new, the value of --name must appear in the --initial-cluster list;
  • --cert-file, --key-file: the certificate and private key the etcd server uses when talking to clients;
  • --trusted-ca-file: the CA certificate that signed the client certificates, used to verify client certificates;
  • --peer-cert-file, --peer-key-file: the certificate and private key etcd uses for peer communication;
  • --peer-trusted-ca-file: the CA certificate that signed the peer certificates, used to verify peer certificates;

6.4 Start the etcd Service

# systemctl daemon-reload && systemctl enable etcd && systemctl restart etcd && systemctl status etcd

6.5 Check the Startup Result

  • Make sure the status is active (running); otherwise inspect the logs to find the cause:
    # journalctl -u etcd

6.6 Verify the Service Status

After the etcd cluster is deployed, run the following on any etcd node:

# ETCDCTL_API=3 etcdctl \
--endpoints=https://172.21.17.31:2379 \
--cacert=/etc/kubernetes/ssl/ca.pem \
--cert=/etc/etcd/ssl/etcd.pem \
--key=/etc/etcd/ssl/etcd-key.pem endpoint health

When every endpoint in the output reports healthy, the cluster is working properly.

6.7 Check the Current Leader

# ETCD_ENDPOINTS="https://172.21.17.30:2379,https://172.21.17.31:2379,https://172.21.16.110:2379"
# ETCDCTL_API=3 etcdctl \
-w table --cacert=/etc/kubernetes/ssl/ca.pem \
--cert=/etc/etcd/ssl/etcd.pem \
--key=/etc/etcd/ssl/etcd-key.pem \
--endpoints=${ETCD_ENDPOINTS} endpoint status
# output
+----------------------------+------------------+---------+---------+-----------+-----------+------------+
| ENDPOINT | ID | VERSION | DB SIZE | IS LEADER | RAFT TERM | RAFT INDEX |
+----------------------------+------------------+---------+---------+-----------+-----------+------------+
| https://172.21.17.30:2379 | 5d23ebc4382fa16f | 3.3.13 | 1.2 MB | false | 83 | 58127 |
| https://172.21.17.31:2379 | ceaae5134701946 | 3.3.13 | 1.2 MB | false | 83 | 58127 |
| https://172.21.16.110:2379 | 575020c8e15d3a06 | 3.3.13 | 1.2 MB | true | 83 | 58128 |
+----------------------------+------------------+---------+---------+-----------+-----------+------------+
  • The current leader is 172.21.16.110

7. Deploy the flannel Network

The flannel network is deployed on the node machines; the certificates are generated on a master node and distributed from there.

        Kubernetes requires that all nodes in the cluster (including the master nodes) can reach one another over the Pod network. flannel uses VXLAN to build an interconnected Pod network across the nodes, using UDP port 8472 (which must be opened).
        When flanneld starts for the first time, it reads the configured Pod network from etcd, allocates an unused subnet for the local node, and creates the flannel.1 network interface (the name may differ, e.g. flannel1).
        flannel writes the Pod subnet allocated to the node into /run/flannel/docker; docker later uses the environment variables in this file to configure the docker0 bridge, so that all Pod containers on the node get their IPs from this subnet.
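The exact values depend on the subnet flanneld allocates to the node; the following is only an assumed example of what /run/flannel/docker typically contains:

# cat /run/flannel/docker
DOCKER_OPT_BIP="--bip=172.30.232.1/21"
DOCKER_OPT_IPMASQ="--ip-masq=false"
DOCKER_OPT_MTU="--mtu=1450"
DOCKER_NETWORK_OPTIONS=" --bip=172.30.232.1/21 --ip-masq=false --mtu=1450"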

7.1 Download and Distribute the flanneld Binary

# mkdir flannel && cd flannel
# wget https://github.com/coreos/flannel/releases/download/v0.11.0/flannel-v0.11.0-linux-amd64.tar.gz
# tar -xzvf flannel-v0.11.0-linux-amd64.tar.gz
  • Distribute the flanneld executable to the node machines, for example:
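A sketch (the tarball also ships the mk-docker-opts.sh helper that the unit file below references):

# scp flanneld mk-docker-opts.sh {node-ip}:/usr/bin/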

7.2 Create the flannel Certificate and Private Key

  • flanneld-csr.json

    cat > flanneld-csr.json <<EOF
    {
    "CN": "flanneld",
    "hosts": [],
    "key": {
    "algo": "rsa",
    "size": 2048
    },
    "names": [
    {
    "C": "CN",
    "ST": "BeiJing",
    "L": "BeiJing",
    "O": "k8s",
    "OU": "4Paradigm"
    }
    ]
    }
    EOF
  • This certificate is only used by flanneld as a client certificate, so the hosts field is empty;

  • Generate the certificate and private key:

    # cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes flanneld-csr.json | cfssljson -bare flanneld 
    # ls flanneld*pem
    # scp flanneld*pem {node-ip}:/etc/flanneld/ssl/

7.3 Write the Cluster Pod Network Configuration into etcd

Note: this step only needs to be run once, against the etcd cluster.

etcdctl \
--endpoints=${ETCD_ENDPOINTS} \
--ca-file=/etc/kubernetes/ssl/ca.pem \
--cert-file=flanneld.pem \
--key-file=flanneld-key.pem \
mk /kubernetes/network/config '{"Network":"172.30.0.0/16", "SubnetLen": 21, "Backend": {"Type": "vxlan"}}'
  • The current flanneld release (v0.11.0) does not support etcd v3, so the configuration key and subnet data are written with the etcd v2 API;
  • The Pod network ${CLUSTER_CIDR} written here (e.g. a /16) must have a prefix shorter than SubnetLen and must match kube-controller-manager's --cluster-cidr parameter;

7.4 Create the flanneld systemd Unit File

# cat /usr/lib/systemd/system/flanneld.service 
[Unit]
Description=Flanneld overlay address etcd agent
After=network.target
After=network-online.target
Wants=network-online.target
After=etcd.service
Before=docker.service

[Service]
Type=notify
ExecStart=/usr/bin/flanneld \
-etcd-cafile=/etc/kubernetes/ssl/ca.pem \
-etcd-certfile=/etc/flanneld/ssl/flanneld.pem \
-etcd-keyfile=/etc/flanneld/ssl/flanneld-key.pem \
-etcd-endpoints=https://172.21.17.30:2379,https://172.21.17.31:2379,https://172.21.16.110:2379 \
-etcd-prefix=/kubernetes/network \
-iface=eth0 \
-ip-masq
ExecStartPost=/usr/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/docker
Restart=always
RestartSec=5
StartLimitInterval=0

[Install]
WantedBy=multi-user.target
RequiredBy=docker.service
  • The mk-docker-opts.sh script writes the Pod subnet allocated to flanneld into /run/flannel/docker; when docker starts later it uses the environment variables in this file to configure the docker0 bridge (one way to wire this up is sketched after this list);
  • flanneld communicates with other nodes over the interface of the system default route; on nodes with multiple interfaces (e.g. internal and public), the -iface parameter selects the interface to use;
  • flanneld needs to run as root;
  • -ip-masq: flanneld sets up SNAT rules for traffic leaving the Pod network and sets the --ip-masq variable passed to Docker (in /run/flannel/docker) to false, so Docker no longer creates SNAT rules. Docker's own SNAT rule (when its --ip-masq is true) is rather blunt: it SNATs every request from local Pods to any non-docker0 interface, so requests to Pods on other nodes appear to come from the flannel.1 interface IP and the destination Pod cannot see the real source Pod IP. The SNAT rules flanneld creates are gentler: only traffic leaving the Pod network is SNATed.
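A sketch of how docker can consume these variables via a systemd drop-in (the drop-in path is an assumption, and the ExecStart line must match the local docker.service):

# cat /etc/systemd/system/docker.service.d/flannel.conf
[Service]
EnvironmentFile=-/run/flannel/docker

# docker.service's ExecStart then has to reference the variable, e.g.
# ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS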

7.5 Start the flanneld Service

# systemctl daemon-reload && systemctl enable flanneld && systemctl restart flanneld && systemctl status flanneld

7.6 Check the Pod Subnets Allocated to Each flanneld

  • View the cluster Pod network (/16)

    # etcdctl \
    --endpoints=${ETCD_ENDPOINTS} \
    --ca-file=/etc/kubernetes/ssl/ca.pem \
    --cert-file=/etc/flanneld/ssl/flanneld.pem \
    --key-file=/etc/flanneld/ssl/flanneld-key.pem \
    get /kubernetes/network/config

    # output
    {"Network":"172.30.0.0/16", "SubnetLen": 21, "Backend": {"Type": "vxlan"}}
  • View the list of allocated Pod subnets (/21):

    # etcdctl \
    --endpoints=${ETCD_ENDPOINTS} \
    --ca-file=/etc/kubernetes/ssl/ca.pem \
    --cert-file=/etc/flanneld/ssl/flanneld.pem \
    --key-file=/etc/flanneld/ssl/flanneld-key.pem \
    ls /kubernetes/network/subnets

    # output
    /kubernetes/network/subnets/172.30.232.0-21
    /kubernetes/network/subnets/172.30.128.0-21
    /kubernetes/network/subnets/172.30.176.0-21
  • View the node IP and flannel interface address corresponding to one Pod subnet

    # etcdctl \
    --endpoints=${ETCD_ENDPOINTS} \
    --ca-file=/etc/kubernetes/ssl/ca.pem \
    --cert-file=/etc/flanneld/ssl/flanneld.pem \
    --key-file=/etc/flanneld/ssl/flanneld-key.pem \
    get /kubernetes/network/subnets/172.30.232.0-21

    # output
    {"PublicIP":"172.21.16.204","BackendType":"vxlan","BackendData":{"VtepMAC":"f6:50:05:5c:9a:20"}}
  • 172.30.232.0/21 is allocated to node 172.21.16.204;

  • VtepMAC is the MAC address of the flannel.1 interface on node 172.21.16.204;

7.7 Check the Node flannel Network Information

# ip addr show
  • The flannel.1 interface address is the first IP (.0) of the allocated Pod subnet, and it is a /32 address;

    # ip route show |grep flannel.1
    172.30.128.0/21 via 172.30.128.0 dev flannel.1 onlink
    172.30.176.0/21 via 172.30.176.0 dev flannel.1 onlink
  • Requests to the Pod subnets of other nodes are all routed through the flannel.1 interface;

  • flanneld uses the subnet information in etcd, e.g. /kubernetes/network/subnets/172.30.232.0-21, to decide which node's interconnect IP a request is forwarded to;

  • Verify that the nodes can reach one another over the Pod network, for example as shown below
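A simple check (a sketch using the subnets from the output above; substitute the flannel.1 addresses actually allocated in the cluster):

# ping -c 2 172.30.128.0    # flannel.1 address of another node; replies mean the VXLAN overlay works
# ping -c 2 172.30.176.0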

8. Deploy the Master Nodes

The Kubernetes master nodes run the following components:

  • kube-apiserver
  • kube-scheduler
  • kube-controller-manager
    kube-apiserver, kube-scheduler and kube-controller-manager all run as multiple instances:
    1. kube-scheduler and kube-controller-manager automatically elect one leader instance while the other instances block; when the leader dies a new leader is elected, which keeps the service available;
    2. kube-apiserver is stateless and is accessed through an haproxy + keepalived proxy (sketched below), which keeps the service available;
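The haproxy + keepalived deployment itself is not covered in this section; as a rough sketch (backend names, balancing algorithm, and config path are assumptions), the VIP 172.21.16.45:8443 forwards to the three apiservers on port 6443 like this:

# cat /etc/haproxy/haproxy.cfg    # relevant excerpt
frontend kube-apiserver
    bind *:8443
    mode tcp
    default_backend kube-masters

backend kube-masters
    mode tcp
    balance roundrobin
    server master-1 172.21.17.30:6443 check
    server master-2 172.21.17.31:6443 check
    server master-3 172.21.16.110:6443 check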

8.1 Create the kubernetes Certificate and Private Key

# cat > kubernetes-csr.json <<EOF
{
"CN": "kubernetes",
"hosts": [
"127.0.0.1",
"172.21.17.30",
"172.21.17.31",
"172.21.16.110",
"172.21.16.45",
"10.254.0.1",
"kubernetes",
"kubernetes.default",
"kubernetes.default.svc",
"kubernetes.default.svc.cluster",
"kubernetes.default.svc.cluster.local."
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "BeiJing",
"L": "BeiJing",
"O": "k8s",
"OU": "4Paradigm"
}
]
}
EOF
  • The hosts field lists the IPs and domain names authorized to use this certificate; here it contains the master node IPs, the kubernetes service IP and domain names, and the VIP address;
  • The kubernetes service IP is created automatically by the apiserver, usually the first IP of the network given by --service-cluster-ip-range; it can be retrieved later with:
# kubectl get svc kubernetes
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.254.0.1 <none> 443/TCP 4h13m
  • Generate the certificate and private key
    # cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kubernetes-csr.json | cfssljson -bare kubernetes
    # ls kubernetes*pem
    kubernetes-key.pem kubernetes.pem
    # cp kubernetes*pem /etc/kubernetes/ssl/

8.2 Create the Encryption Configuration File

# ENCRYPTION_KEY=$(head -c 32 /dev/urandom | base64)
# cat > encryption-config.yaml <<EOF
kind: EncryptionConfig
apiVersion: v1
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: ${ENCRYPTION_KEY}
      - identity: {}
EOF
# cp encryption-config.yaml /etc/kubernetes/

        This is a feature introduced in v1.7+: kube-apiserver encrypts Secret data before storing it in etcd. kube-apiserver must be started with the encryption provider configuration (--encryption-provider-config, formerly --experimental-encryption-provider-config); the configuration format is defined above, and the file must be distributed to all master servers.
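Once kube-apiserver is running (section 8.6), the encryption can be verified roughly as follows (a sketch; the secret name is arbitrary and the etcdctl flags are the same ones used in section 6.6):

# kubectl create secret generic test-enc --from-literal=foo=bar -n default
# ETCDCTL_API=3 etcdctl \
--endpoints=${ETCD_ENDPOINTS} \
--cacert=/etc/kubernetes/ssl/ca.pem \
--cert=/etc/etcd/ssl/etcd.pem \
--key=/etc/etcd/ssl/etcd-key.pem \
get /registry/secrets/default/test-enc | hexdump -C | head
# the stored value should start with k8s:enc:aescbc:v1:key1 rather than plaintext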

8.3 Create the Audit Policy File

        Kubernetes audit logging is part of kube-apiserver. It provides security-related logging that records the full processing of every request that individual users, administrators, or other system components make to kube-apiserver.

# cat > audit-policy.yaml <<EOF
apiVersion: audit.k8s.io/v1beta1
kind: Policy
rules:
  # The following requests were manually identified as high-volume and low-risk, so drop them.
  - level: None
    resources:
      - group: ""
        resources:
          - endpoints
          - services
          - services/status
    users:
      - 'system:kube-proxy'
    verbs:
      - watch

  - level: None
    resources:
      - group: ""
        resources:
          - nodes
          - nodes/status
    userGroups:
      - 'system:nodes'
    verbs:
      - get

  - level: None
    namespaces:
      - kube-system
    resources:
      - group: ""
        resources:
          - endpoints
    users:
      - 'system:kube-controller-manager'
      - 'system:kube-scheduler'
      - 'system:serviceaccount:kube-system:endpoint-controller'
    verbs:
      - get
      - update

  - level: None
    resources:
      - group: ""
        resources:
          - namespaces
          - namespaces/status
          - namespaces/finalize
    users:
      - 'system:apiserver'
    verbs:
      - get

  # Don't log HPA fetching metrics.
  - level: None
    resources:
      - group: metrics.k8s.io
    users:
      - 'system:kube-controller-manager'
    verbs:
      - get
      - list

  # Don't log these read-only URLs.
  - level: None
    nonResourceURLs:
      - '/healthz*'
      - /version
      - '/swagger*'

  # Don't log events requests.
  - level: None
    resources:
      - group: ""
        resources:
          - events

  # node and pod status calls from nodes are high-volume and can be large, don't log responses for expected updates from nodes
  - level: Request
    omitStages:
      - RequestReceived
    resources:
      - group: ""
        resources:
          - nodes/status
          - pods/status
    users:
      - kubelet
      - 'system:node-problem-detector'
      - 'system:serviceaccount:kube-system:node-problem-detector'
    verbs:
      - update
      - patch

  - level: Request
    omitStages:
      - RequestReceived
    resources:
      - group: ""
        resources:
          - nodes/status
          - pods/status
    userGroups:
      - 'system:nodes'
    verbs:
      - update
      - patch

  # deletecollection calls can be large, don't log responses for expected namespace deletions
  - level: Request
    omitStages:
      - RequestReceived
    users:
      - 'system:serviceaccount:kube-system:namespace-controller'
    verbs:
      - deletecollection

  # Secrets, ConfigMaps, and TokenReviews can contain sensitive & binary data,
  # so only log at the Metadata level.
  - level: Metadata
    omitStages:
      - RequestReceived
    resources:
      - group: ""
        resources:
          - secrets
          - configmaps
      - group: authentication.k8s.io
        resources:
          - tokenreviews

  # Get responses can be large; skip them.
  - level: Request
    omitStages:
      - RequestReceived
    resources:
      - group: ""
      - group: admissionregistration.k8s.io
      - group: apiextensions.k8s.io
      - group: apiregistration.k8s.io
      - group: apps
      - group: authentication.k8s.io
      - group: authorization.k8s.io
      - group: autoscaling
      - group: batch
      - group: certificates.k8s.io
      - group: extensions
      - group: metrics.k8s.io
      - group: networking.k8s.io
      - group: policy
      - group: rbac.authorization.k8s.io
      - group: scheduling.k8s.io
      - group: settings.k8s.io
      - group: storage.k8s.io
    verbs:
      - get
      - list
      - watch

  # Default level for known APIs
  - level: RequestResponse
    omitStages:
      - RequestReceived
    resources:
      - group: ""
      - group: admissionregistration.k8s.io
      - group: apiextensions.k8s.io
      - group: apiregistration.k8s.io
      - group: apps
      - group: authentication.k8s.io
      - group: authorization.k8s.io
      - group: autoscaling
      - group: batch
      - group: certificates.k8s.io
      - group: extensions
      - group: metrics.k8s.io
      - group: networking.k8s.io
      - group: policy
      - group: rbac.authorization.k8s.io
      - group: scheduling.k8s.io
      - group: settings.k8s.io
      - group: storage.k8s.io

  # Default level for all other requests.
  - level: Metadata
    omitStages:
      - RequestReceived
EOF

# cp audit-policy.yaml /etc/kubernetes/

8.4 Create the Certificate Used Later to Access metrics-server

  • Create the certificate signing request:

    # cat > proxy-client-csr.json <<EOF
    {
    "CN": "aggregator",
    "hosts": [],
    "key": {
    "algo": "rsa",
    "size": 2048
    },
    "names": [
    {
    "C": "CN",
    "ST": "BeiJing",
    "L": "BeiJing",
    "O": "k8s",
    "OU": "4Paradigm"
    }
    ]
    }
    EOF
  • The CN name must be included in kube-apiserver's --requestheader-allowed-names parameter, otherwise later access to the metrics APIs is denied as unauthorized.

  • Generate the certificate and private key

    # cfssl gencert -ca=ca.pem  -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes proxy-client-csr.json | cfssljson -bare proxy-client
    # ls proxy-client*.pem
    proxy-client-key.pem proxy-client.pem
    # cp proxy-client*.pem /etc/kubernetes/ssl/

8.5 Create the kube-apiserver systemd Unit Template File

# cat /usr/lib/systemd/system/kube-apiserver.service 
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
WorkingDirectory=/var/log/k8s/kube-apiserver
ExecStart=/usr/bin/kube-apiserver \
--advertise-address=172.21.17.30 \    # the IP of this master node
--default-not-ready-toleration-seconds=360 \
--default-unreachable-toleration-seconds=360 \
--feature-gates=DynamicAuditing=true \
--max-mutating-requests-inflight=2000 \
--max-requests-inflight=4000 \
--default-watch-cache-size=200 \
--delete-collection-workers=2 \
--encryption-provider-config=/etc/kubernetes/encryption-config.yaml \
--etcd-cafile=/etc/kubernetes/ssl/ca.pem \
--etcd-certfile=/etc/kubernetes/ssl/kubernetes.pem \
--etcd-keyfile=/etc/kubernetes/ssl/kubernetes-key.pem \
--etcd-servers=https://172.21.17.30:2379,https://172.21.17.31:2379,https://172.21.16.110:2379 \
--bind-address=0.0.0.0 \
--secure-port=6443 \
--tls-cert-file=/etc/kubernetes/ssl/kubernetes.pem \
--tls-private-key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
--insecure-port=0 \
--audit-dynamic-configuration \
--audit-log-maxage=15 \
--audit-log-maxbackup=3 \
--audit-log-maxsize=100 \
--audit-log-truncate-enabled \
--audit-log-path=/var/log/k8s/kube-apiserver/audit.log \
--audit-policy-file=/etc/kubernetes/audit-policy.yaml \
--profiling \
--anonymous-auth=false \
--client-ca-file=/etc/kubernetes/ssl/ca.pem \
--enable-bootstrap-token-auth \
--requestheader-allowed-names=aggregator \
--requestheader-client-ca-file=/etc/kubernetes/ssl/ca.pem \
--requestheader-extra-headers-prefix=X-Remote-Extra- \
--requestheader-group-headers=X-Remote-Group \
--requestheader-username-headers=X-Remote-User \
--service-account-key-file=/etc/kubernetes/ssl/ca.pem \
--authorization-mode=Node,RBAC \
--runtime-config=api/all=true \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeClaimResize,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,Priority,PodPreset \
--allow-privileged=true \
--apiserver-count=3 \
--cors-allowed-origins=.* \
--enable-swagger-ui \
--event-ttl=168h \
--kubelet-certificate-authority=/etc/kubernetes/ssl/ca.pem \
--kubelet-client-certificate=/etc/kubernetes/ssl/kubernetes.pem \
--kubelet-client-key=/etc/kubernetes/ssl/kubernetes-key.pem \
--kubelet-https=true \
--kubelet-timeout=10s \
--proxy-client-cert-file=/etc/kubernetes/ssl/proxy-client.pem \
--proxy-client-key-file=/etc/kubernetes/ssl/proxy-client-key.pem \
--service-cluster-ip-range=10.254.0.0/16 \
--service-node-port-range=30000-32767 \
--logtostderr=true \
--enable-aggregator-routing=true \
--v=2
Restart=on-failure
RestartSec=10
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

# mkdir -p /var/log/k8s/kube-apiserver
  • --advertise-address: the IP the apiserver advertises (the backend node IP of the kubernetes service);
  • --default-*-toleration-seconds: thresholds related to node abnormality;
  • --max-*-requests-inflight: maximum in-flight request thresholds;
  • --etcd-*: the certificates for accessing etcd and the etcd server addresses;
  • --encryption-provider-config: the configuration used to encrypt secrets stored in etcd;
  • --bind-address: the IP the https endpoint listens on; it must not be 127.0.0.1, otherwise the secure port 6443 cannot be reached from outside;
  • --secure-port: the https listening port;
  • --insecure-port=0: disables the insecure http port (8080);
  • --tls-*-file: the certificate, private key, and CA file the apiserver uses;
  • --audit-*: parameters for the audit policy and audit log files;
  • --client-ca-file: verifies the certificates presented by clients (kube-controller-manager, kube-scheduler, kubelet, kube-proxy, etc.);
  • --enable-bootstrap-token-auth: enables token authentication for kubelet bootstrap;
  • --requestheader-*: parameters for kube-apiserver's aggregation layer, needed by proxy-client and HPA;
  • --requestheader-client-ca-file: the CA used to sign the certificates given by --proxy-client-cert-file and --proxy-client-key-file; used when the metrics aggregator is enabled;
  • --requestheader-allowed-names: must not be empty; a comma-separated list of the CN names of the --proxy-client-cert-file certificate, set here to "aggregator";
  • --service-account-key-file: the public key file used to verify ServiceAccount tokens; kube-controller-manager's --service-account-private-key-file specifies the matching private key; the two are used as a pair;
  • --runtime-config=api/all=true: enables all API versions, e.g. autoscaling/v2alpha1;
  • --authorization-mode=Node,RBAC and --anonymous-auth=false: enable Node and RBAC authorization and reject unauthorized requests;
  • --enable-admission-plugins: enables some plugins that are off by default;
  • --allow-privileged: allows containers to run with privileged permissions;
  • --apiserver-count=3: the number of apiserver instances;
  • --event-ttl: how long events are retained;
  • --kubelet-*: if set, the kubelet APIs are accessed over https; RBAC rules must be defined for the user of the corresponding certificate (the kubernetes.pem certificate above has user kubernetes), otherwise access to the kubelet API is denied as unauthorized;
  • --proxy-client-*: the certificate the apiserver uses to access metrics-server;
  • --service-cluster-ip-range: the Service Cluster IP range;
  • --service-node-port-range: the NodePort port range;

Note:
1. The CA certificate given by requestheader-client-ca-file must be usable for both client auth and server auth;
2. If --requestheader-allowed-names is empty, or the CN name of the --proxy-client-cert-file certificate is not in allowed-names, later queries of node or pod metrics fail with:

# kubectl top nodes
Error from server (Forbidden): nodes.metrics.k8s.io is forbidden: User "aggregator" cannot list resource "nodes" in API group "metrics.k8s.io" at the cluster scope

8.6 Start the kube-apiserver Service

# systemctl daemon-reload && systemctl enable kube-apiserver && systemctl restart kube-apiserver &&systemctl status kube-apiserver
# systemctl status kube-apiserver |grep 'Active:'
Active: active (running) since Mon 2019-09-16 14:38:31 CST; 1min 41s ago

8.7 Print the Data kube-apiserver Writes to etcd

# ETCDCTL_API=3 etcdctl \
--endpoints=${ETCD_ENDPOINTS} \
--cacert=/etc/kubernetes/ssl/ca.pem \
--cert=/etc/etcd/ssl/etcd.pem \
--key=/etc/etcd/ssl/etcd-key.pem \
get /registry/ --prefix --keys-only

8.8 Check Cluster Information

# kubectl cluster-info
Kubernetes master is running at https://172.21.16.45:8443

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

# kubectl get all --all-namespaces
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default service/kubernetes ClusterIP 10.254.0.1 <none> 443/TCP 12m

# kubectl get componentstatuses
NAME STATUS MESSAGE ERROR
controller-manager Unhealthy Get http://127.0.0.1:10252/healthz: dial tcp 127.0.0.1:10252: connect: connection refused
scheduler Unhealthy Get http://127.0.0.1:10251/healthz: dial tcp 127.0.0.1:10251: connect: connection refused
etcd-0 Healthy {"health":"true"}
etcd-2 Healthy {"health":"true"}
etcd-1 Healthy {"health":"true"}
  • When kubectl get componentstatuses runs, the apiserver sends the health checks to 127.0.0.1 by default. When controller-manager and scheduler run in cluster mode they may not be on the same machine as kube-apiserver, in which case their status shows Unhealthy even though they are actually working fine.

8.9 Check the Ports kube-apiserver Listens On

# netstat -lnpt|grep kube
tcp6 0 0 :::6443 :::* LISTEN 10845/kube-apiserve
  • 6443: the secure port that receives https requests; every request is authenticated and authorized;
  • since the insecure port is disabled, nothing listens on 8080;

8.10 Grant kube-apiserver Access to the kubelet API

        When running kubectl exec, run, logs and similar commands, the apiserver forwards the request to the kubelet's https port. The RBAC rules defined here authorize the user of the apiserver certificate (kubernetes.pem, CN: kubernetes) to access the kubelet API:

kubectl create clusterrolebinding kube-apiserver:kubelet-apis --clusterrole=system:kubelet-api-admin --user kubernetes
kubectl create clusterrolebinding kubelet-node-clusterbinding --clusterrole=system:node --group=system:nodes
kubectl create clusterrolebinding --user system:serviceaccount:kube-system:default kube-system-cluster-admin --clusterrole cluster-admin

9. Deploy a Highly Available kube-controller-manager

        The cluster contains 3 nodes; after startup, a leader is produced through competitive election while the other nodes block. When the leader becomes unavailable, the blocked nodes elect a new leader, which keeps the service available. Kubernetes implements leader election with a lease lock, so --leader-elect=true must be added to the startup parameters.
To secure communication, this document first generates an x509 certificate and private key; kube-controller-manager uses the certificate in two cases:
1. talking to kube-apiserver's secure port;
2. serving prometheus-format metrics on its secure port (https, 10257);

9.1 Create the kube-controller-manager Certificate and Private Key

  • Create the certificate signing request

    # cat > kube-controller-manager-csr.json <<EOF
    {
    "CN": "system:kube-controller-manager",
    "key": {
    "algo": "rsa",
    "size": 2048
    },
    "hosts": [
    "127.0.0.1",
    "172.21.17.30",
    "172.21.17.31",
    "172.21.16.110"
    ],
    "names": [
    {
    "C": "CN",
    "ST": "BeiJing",
    "L": "BeiJing",
    "O": "system:kube-controller-manager",
    "OU": "4Paradigm"
    }
    ]
    }
    EOF
  • The hosts list contains all kube-controller-manager node IPs;

  • CN and O are both system:kube-controller-manager; the ClusterRoleBinding system:kube-controller-manager built into Kubernetes grants kube-controller-manager the permissions it needs to work.

  • Generate the certificate and private key

    # cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json   -profile=kubernetes kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager
    # ls kube-controller-manager*pem
    kube-controller-manager-key.pem kube-controller-manager.pem

    # cp kube-controller-manager*pem /etc/kubernetes/ssl/

9.2 Create and Distribute the kubeconfig File

        kube-controller-manager uses a kubeconfig file to access the apiserver; it provides the apiserver address, the embedded CA certificate, and the kube-controller-manager certificate:

# kubectl config set-cluster kubernetes \
--certificate-authority=/etc/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=kube-controller-manager.kubeconfig

# kubectl config set-credentials system:kube-controller-manager \
--client-certificate=kube-controller-manager.pem \
--client-key=kube-controller-manager-key.pem \
--embed-certs=true \
--kubeconfig=kube-controller-manager.kubeconfig

# kubectl config set-context system:kube-controller-manager \
--cluster=kubernetes \
--user=system:kube-controller-manager \
--kubeconfig=kube-controller-manager.kubeconfig

# kubectl config use-context system:kube-controller-manager --kubeconfig=kube-controller-manager.kubeconfig

# cp kube-controller-manager.kubeconfig /etc/kubernetes/

9.3 Create the kube-controller-manager systemd Unit File

# cat /usr/lib/systemd/system/kube-controller-manager.service 
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
WorkingDirectory=/var/log/k8s/kube-controller-manager
ExecStart=/usr/bin/kube-controller-manager \
--profiling \
--cluster-name=kubernetes \
--allocate-node-cidrs=true \
--cluster-cidr=10.244.0.0/16 \
--controllers=*,bootstrapsigner,tokencleaner \
--kube-api-qps=1000 \
--kube-api-burst=2000 \
--leader-elect \
--use-service-account-credentials \
--concurrent-service-syncs=2 \
--bind-address=0.0.0.0 \
--secure-port=10257 \
--tls-cert-file=/etc/kubernetes/ssl/kube-controller-manager.pem \
--tls-private-key-file=/etc/kubernetes/ssl/kube-controller-manager-key.pem \
--port=10252 \
--authentication-kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig \
--client-ca-file=/etc/kubernetes/ssl/ca.pem \
--requestheader-allowed-names="" \
--requestheader-client-ca-file=/etc/kubernetes/ssl/ca.pem \
--requestheader-extra-headers-prefix=X-Remote-Extra- \
--requestheader-group-headers=X-Remote-Group \
--requestheader-username-headers=X-Remote-User \
--authorization-kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig \
--cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \
--experimental-cluster-signing-duration=876000h \
--horizontal-pod-autoscaler-sync-period=10s \
--horizontal-pod-autoscaler-use-rest-clients=true \
--concurrent-deployment-syncs=10 \
--concurrent-gc-syncs=30 \
--node-cidr-mask-size=24 \
--service-cluster-ip-range=10.254.0.0/16 \
--pod-eviction-timeout=6m \
--terminated-pod-gc-threshold=10000 \
--root-ca-file=/etc/kubernetes/ssl/ca.pem \
--service-account-private-key-file=/etc/kubernetes/ssl/ca-key.pem \
--kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig \
--logtostderr=true \
--v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
  • --port=10252: serves HTTP without authentication; if set to 0, no HTTP service is provided; the default is 10252;
  • --secure-port=10257: serves HTTPS, default port 10257; if 0, no https service is provided;
  • --kubeconfig: the kubeconfig file path; kube-controller-manager uses it to connect to and authenticate with kube-apiserver;
  • --authentication-kubeconfig and --authorization-kubeconfig: kube-controller-manager uses them to connect to the apiserver and to authenticate and authorize client requests. kube-controller-manager no longer uses --tls-ca-file to verify the client certificates of requests to its https metrics endpoint. If these two kubeconfig parameters are not configured, client connections to the kube-controller-manager https port are rejected (with an authorization error).
  • --cluster-signing-*-file: signs the certificates created by TLS Bootstrap;
  • --experimental-cluster-signing-duration: validity period of the TLS Bootstrap certificates;
  • --root-ca-file: the CA certificate placed into containers' ServiceAccounts, used to verify the kube-apiserver certificate;
  • --service-account-private-key-file: the private key used to sign ServiceAccount tokens; it must pair with the public key given by kube-apiserver's --service-account-key-file;
  • --service-cluster-ip-range: the Service Cluster IP range; must match the same parameter on kube-apiserver;
  • --leader-elect=true: cluster mode; enables leader election; the elected leader does the work while the other nodes block;
  • --controllers=*,bootstrapsigner,tokencleaner: the list of enabled controllers; tokencleaner automatically cleans up expired Bootstrap tokens;
  • --horizontal-pod-autoscaler-*: custom-metrics parameters; supports autoscaling/v2alpha1;
  • --tls-cert-file, --tls-private-key-file: the server certificate and key used when serving metrics over https;
  • --use-service-account-credentials=true: the controllers inside kube-controller-manager use ServiceAccounts to access kube-apiserver;

9.4 Start the kube-controller-manager Service

# mkdir -p /var/log/k8s/kube-controller-manager
# systemctl daemon-reload && systemctl enable kube-controller-manager && systemctl restart kube-controller-manager && systemctl status kube-controller-manager

# netstat -lnpt|grep kube-con
tcp6 0 0 :::10252 :::* LISTEN 8480/kube-controlle
tcp6 0 0 :::10257 :::* LISTEN 8480/kube-controlle
  • Grant permissions on the kubernetes API
    kubectl create clusterrolebinding controller-node-clusterrolebing --clusterrole=system:kube-controller-manager  --user=system:kube-controller-manager
    kubectl create clusterrolebinding controller-manager:system:auth-delegator --user system:kube-controller-manager --clusterrole system:auth-delegator

9.5 View the Exposed Metrics

Note: run the following commands on a kube-controller-manager node.

# curl -s --cacert /etc/kubernetes/ssl/ca.pem --cert /etc/kubernetes/ssl/admin.pem --key /etc/kubernetes/ssl/admin-key.pem https://172.21.17.30:10257/metrics |head
# HELP apiserver_audit_event_total Counter of audit events generated and sent to the audit backend.
# TYPE apiserver_audit_event_total counter
apiserver_audit_event_total 0
# HELP apiserver_audit_requests_rejected_total Counter of apiserver requests rejected due to an error in audit logging backend.
# TYPE apiserver_audit_requests_rejected_total counter
apiserver_audit_requests_rejected_total 0
# HELP apiserver_client_certificate_expiration_seconds Distribution of the remaining lifetime on the certificate used to authenticate a request.
# TYPE apiserver_client_certificate_expiration_seconds histogram
apiserver_client_certificate_expiration_seconds_bucket{le="0"} 0
apiserver_client_certificate_expiration_seconds_bucket{le="1800"} 0

# curl -s --cacert /etc/kubernetes/ssl/ca.pem --cert /etc/kubernetes/ssl/admin.pem --key /etc/kubernetes/ssl/admin-key.pem http://172.21.17.30:10252/metrics |head
# HELP apiserver_audit_event_total Counter of audit events generated and sent to the audit backend.
# TYPE apiserver_audit_event_total counter
apiserver_audit_event_total 0
# HELP apiserver_audit_requests_rejected_total Counter of apiserver requests rejected due to an error in audit logging backend.
# TYPE apiserver_audit_requests_rejected_total counter
apiserver_audit_requests_rejected_total 0
# HELP apiserver_client_certificate_expiration_seconds Distribution of the remaining lifetime on the certificate used to authenticate a request.
# TYPE apiserver_client_certificate_expiration_seconds histogram
apiserver_client_certificate_expiration_seconds_bucket{le="0"} 0
apiserver_client_certificate_expiration_seconds_bucket{le="1800"} 0

9.6 kube-controller-manager Permissions

        The ClusterRole system:kube-controller-manager carries very few permissions; it can only create secrets, serviceaccounts and a handful of other resource objects. The permissions of the individual controllers are spread across the ClusterRoles system:controller:XXX:

# kubectl describe clusterrole system:kube-controller-manager
Name: system:kube-controller-manager
Labels: kubernetes.io/bootstrapping=rbac-defaults
Annotations: rbac.authorization.kubernetes.io/autoupdate: true
PolicyRule:
Resources Non-Resource URLs Resource Names Verbs
--------- ----------------- -------------- -----
secrets [] [] [create delete get update]
endpoints [] [] [create get update]
serviceaccounts [] [] [create get update]
events [] [] [create patch update]
tokenreviews.authentication.k8s.io [] [] [create]
subjectaccessreviews.authorization.k8s.io [] [] [create]
configmaps [] [] [get]
namespaces [] [] [get]
*.* [] [] [list watch]

        The --use-service-account-credentials=true parameter must be added to kube-controller-manager's startup parameters; the main controller then creates a ServiceAccount XXX-controller for each controller, and the built-in ClusterRoleBinding system:controller:XXX grants each XXX-controller ServiceAccount the corresponding ClusterRole system:controller:XXX permissions.

# kubectl get clusterrole|grep controller
system:controller:attachdetach-controller 4h52m
system:controller:certificate-controller 4h52m
system:controller:clusterrole-aggregation-controller 4h52m
system:controller:cronjob-controller 4h52m
system:controller:daemon-set-controller 4h52m
system:controller:deployment-controller 4h52m
system:controller:disruption-controller 4h52m
system:controller:endpoint-controller 4h52m
system:controller:expand-controller 4h52m
system:controller:generic-garbage-collector 4h52m
system:controller:horizontal-pod-autoscaler 4h52m
system:controller:job-controller 4h52m
system:controller:namespace-controller 4h52m
system:controller:node-controller 4h52m
system:controller:persistent-volume-binder 4h52m
system:controller:pod-garbage-collector 4h52m
system:controller:pv-protection-controller 4h52m
system:controller:pvc-protection-controller 4h52m
system:controller:replicaset-controller 4h52m
system:controller:replication-controller 4h52m
system:controller:resourcequota-controller 4h52m
system:controller:route-controller 4h52m
system:controller:service-account-controller 4h52m
system:controller:service-controller 4h52m
system:controller:statefulset-controller 4h52m
system:controller:ttl-controller 4h52m
system:kube-controller-manager 4h52m
  • Take the deployment controller as an example:
    # kubectl describe clusterrole system:controller:deployment-controller
    Name: system:controller:deployment-controller
    Labels: kubernetes.io/bootstrapping=rbac-defaults
    Annotations: rbac.authorization.kubernetes.io/autoupdate: true
    PolicyRule:
    Resources Non-Resource URLs Resource Names Verbs
    --------- ----------------- -------------- -----
    replicasets.apps [] [] [create delete get list patch update watch]
    replicasets.extensions [] [] [create delete get list patch update watch]
    events [] [] [create patch update]
    pods [] [] [get list update watch]
    deployments.apps [] [] [get list update watch]
    deployments.extensions [] [] [get list update watch]
    deployments.apps/finalizers [] [] [update]
    deployments.apps/status [] [] [update]
    deployments.extensions/finalizers [] [] [update]
    deployments.extensions/status [] [] [update]

9.7 Check the Current Leader

# kubectl get endpoints kube-controller-manager --namespace=kube-system  -o yaml
apiVersion: v1
kind: Endpoints
metadata:
  annotations:
    control-plane.alpha.kubernetes.io/leader: '{"holderIdentity":"k8s-master-01-3.kxl_9055b4e1-53a7-11ea-8502-fa163e53d4c8","leaseDurationSeconds":15,"acquireTime":"2020-02-20T06:09:51Z","renewTime":"2020-02-20T06:13:09Z","leaderTransitions":72}'
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Endpoints","metadata":{"annotations":{},"labels":{"k8s-app":"kube-controller-manager"},"name":"kube-controller-manager","namespace":"kube-system"},"subsets":[{"addresses":[{"ip":"172.21.17.30"},{"ip":"172.21.17.31"},{"ip":"172.21.16.110"}],"ports":[{"name":"https-metrics","port":10252,"protocol":"TCP"}]}]}
  creationTimestamp: "2019-12-10T02:24:39Z"
  labels:
    k8s-app: kube-controller-manager
  name: kube-controller-manager
  namespace: kube-system
  resourceVersion: "21473672"
  selfLink: /api/v1/namespaces/kube-system/endpoints/kube-controller-manager
  uid: 37c7aa58-1af4-11ea-b1ca-fa163effd55b
subsets:
- addresses:
  - ip: 172.21.16.110
  - ip: 172.21.17.30
  - ip: 172.21.17.31
  ports:
  - name: https-metrics
    port: 10252
    protocol: TCP

The current leader is the k8s-master-01-3 node.

To test the high availability of the kube-controller-manager cluster, stop the kube-controller-manager service on one or two nodes and watch the logs on the other nodes to see whether one of them acquires leadership, for example:
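A quick way to watch the leadership move (a sketch):

# systemctl stop kube-controller-manager     # on the current leader node
# kubectl get endpoints kube-controller-manager -n kube-system -o yaml | grep holderIdentity
# the holderIdentity in the lease annotation should switch to one of the remaining masters within leaseDurationSeconds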

10. The kube-scheduler Cluster

        3 nodes; after startup a leader is produced through competitive election while the other nodes block. When the leader becomes unavailable, the remaining nodes elect a new leader, which keeps the service available.

To secure communication, this document first generates an x509 certificate and private key; kube-scheduler uses the certificate in two cases:
1. talking to kube-apiserver's secure port;
2. serving prometheus-format metrics on its secure port (https, 10259);

10.1 Create the kube-scheduler Certificate and Private Key

  • Create the certificate signing request

    # cat > kube-scheduler-csr.json <<EOF
    {
    "CN": "system:kube-scheduler",
    "hosts": [
    "127.0.0.1",
    "172.21.17.30",
    "172.21.17.31",
    "172.21.16.110"
    ],
    "key": {
    "algo": "rsa",
    "size": 2048
    },
    "names": [
    {
    "C": "CN",
    "ST": "BeiJing",
    "L": "BeiJing",
    "O": "system:kube-scheduler",
    "OU": "4Paradigm"
    }
    ]
    }
    EOF
  • The hosts list contains all kube-scheduler node IPs;

  • CN and O are both system:kube-scheduler; the ClusterRoleBinding system:kube-scheduler built into Kubernetes grants kube-scheduler the permissions it needs;

  • Generate the certificate and private key:

    # cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler

    # ls kube-scheduler*pem
    kube-scheduler-key.pem kube-scheduler.pem

    # cp kube-scheduler*pem /etc/kubernetes/ssl/

10.2 Create and Distribute the kubeconfig File

        kube-scheduler uses a kubeconfig file to access the apiserver; it provides the apiserver address, the embedded CA certificate, and the kube-scheduler certificate:

# kubectl config set-cluster kubernetes \
--certificate-authority=/etc/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=kube-scheduler.kubeconfig

# kubectl config set-credentials system:kube-scheduler \
--client-certificate=kube-scheduler.pem \
--client-key=kube-scheduler-key.pem \
--embed-certs=true \
--kubeconfig=kube-scheduler.kubeconfig

# kubectl config set-context system:kube-scheduler \
--cluster=kubernetes \
--user=system:kube-scheduler \
--kubeconfig=kube-scheduler.kubeconfig

# kubectl config use-context system:kube-scheduler --kubeconfig=kube-scheduler.kubeconfig

# cp kube-scheduler.kubeconfig /etc/kubernetes/

10.3 Create the kube-scheduler Configuration File

# cat >kube-scheduler.yaml <<EOF
apiVersion: kubescheduler.config.k8s.io/v1alpha1
kind: KubeSchedulerConfiguration
bindTimeoutSeconds: 600
clientConnection:
  burst: 200
  kubeconfig: "/etc/kubernetes/kube-scheduler.kubeconfig"
  qps: 100
enableContentionProfiling: false
enableProfiling: true
hardPodAffinitySymmetricWeight: 1
healthzBindAddress: 127.0.0.1:10251
leaderElection:
  leaderElect: true
metricsBindAddress: ##NODE_IP##:10251
EOF

# cp kube-scheduler.yaml /etc/kubernetes/
  • kubeconfig: the kubeconfig file path; kube-scheduler uses it to connect to and authenticate with kube-apiserver; ##NODE_IP## in metricsBindAddress must be replaced with each node's own IP, as shown below;
  • leaderElect=true: cluster mode; enables leader election; the elected leader does the work while the other nodes block;
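Since the file is a per-node template, ##NODE_IP## has to be substituted before the file is copied to /etc/kubernetes/, for example on the first master (a sketch):

# sed -i "s/##NODE_IP##/172.21.17.30/" kube-scheduler.yaml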

10.4 Create the kube-scheduler systemd Unit Template File

# cat /usr/lib/systemd/system/kube-scheduler.service 
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
WorkingDirectory=/var/log/k8s/kube-scheduler
ExecStart=/usr/bin/kube-scheduler \
--config=/etc/kubernetes/kube-scheduler.yaml \
--bind-address=0.0.0.0 \
--secure-port=10259 \
--port=0 \
--tls-cert-file=/etc/kubernetes/ssl/kube-scheduler.pem \
--tls-private-key-file=/etc/kubernetes/ssl/kube-scheduler-key.pem \
--authentication-kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig \
--client-ca-file=/etc/kubernetes/ssl/ca.pem \
--requestheader-allowed-names="" \
--requestheader-client-ca-file=/etc/kubernetes/ssl/ca.pem \
--requestheader-extra-headers-prefix="X-Remote-Extra-" \
--requestheader-group-headers=X-Remote-Group \
--requestheader-username-headers=X-Remote-User \
--authorization-kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig \
--logtostderr=true \
--v=2
Restart=always
RestartSec=5
StartLimitInterval=0

[Install]
WantedBy=multi-user.target
  • --secure-port=10259: the secure listening port; if set to 0, no secure port is served;
  • --port=0: the insecure listening port; set to 0 so no insecure port is served; the default is 10251;

10.5 Start the kube-scheduler Service

# mkdir -p /var/log/k8s/kube-scheduler
# systemctl daemon-reload && systemctl enable kube-scheduler && systemctl restart kube-scheduler && systemctl status kube-scheduler
  • Grant permissions on the kubernetes API
    kubectl create clusterrolebinding scheduler-node-clusterrolebing  --clusterrole=system:kube-scheduler --user=system:kube-scheduler

10.6 View the Exposed Metrics

kube-scheduler listens on ports 10251 and 10259:

  • 10251: receives http requests; the insecure port, no authentication or authorization required
  • 10259: receives https requests; the secure port, authentication and authorization required

Both endpoints expose /metrics and /healthz.

# netstat -lnpt |grep kube-sch
tcp 0 0 172.21.17.30:10251 0.0.0.0:* LISTEN 1344/kube-scheduler
tcp 0 0 127.0.0.1:10251 0.0.0.0:* LISTEN 1344/kube-scheduler
tcp6 0 0 :::10259 :::* LISTEN 1344/kube-scheduler
# curl -s http://172.21.17.30:10251/metrics |head
# HELP apiserver_audit_event_total Counter of audit events generated and sent to the audit backend.
# TYPE apiserver_audit_event_total counter
apiserver_audit_event_total 0
# HELP apiserver_audit_requests_rejected_total Counter of apiserver requests rejected due to an error in audit logging backend.
# TYPE apiserver_audit_requests_rejected_total counter
apiserver_audit_requests_rejected_total 0
# HELP apiserver_client_certificate_expiration_seconds Distribution of the remaining lifetime on the certificate used to authenticate a request.
# TYPE apiserver_client_certificate_expiration_seconds histogram
apiserver_client_certificate_expiration_seconds_bucket{le="0"} 0
apiserver_client_certificate_expiration_seconds_bucket{le="1800"} 0
# curl -s --cacert /etc/kubernetes/ssl/ca.pem --cert /etc/kubernetes/ssl/admin.pem --key /etc/kubernetes/ssl/admin-key.pem https://172.21.17.30:10259/metrics |head
# HELP apiserver_audit_event_total Counter of audit events generated and sent to the audit backend.
# TYPE apiserver_audit_event_total counter
apiserver_audit_event_total 0
# HELP apiserver_audit_requests_rejected_total Counter of apiserver requests rejected due to an error in audit logging backend.
# TYPE apiserver_audit_requests_rejected_total counter
apiserver_audit_requests_rejected_total 0
# HELP apiserver_client_certificate_expiration_seconds Distribution of the remaining lifetime on the certificate used to authenticate a request.
# TYPE apiserver_client_certificate_expiration_seconds histogram
apiserver_client_certificate_expiration_seconds_bucket{le="0"} 0
apiserver_client_certificate_expiration_seconds_bucket{le="1800"} 0

10.7 Check the Current Leader

#  kubectl get endpoints kube-scheduler --namespace=kube-system  -o yaml
apiVersion: v1
kind: Endpoints
metadata:
  annotations:
    control-plane.alpha.kubernetes.io/leader: '{"holderIdentity":"k8s-master-01.kxl_5f352489-52f6-11ea-895e-fa163effd55b","leaseDurationSeconds":15,"acquireTime":"2020-02-19T21:00:18Z","renewTime":"2020-02-20T06:17:25Z","leaderTransitions":69}'
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Endpoints","metadata":{"annotations":{},"labels":{"k8s-app":"kube-scheduler"},"name":"kube-scheduler","namespace":"kube-system"},"subsets":[{"addresses":[{"ip":"172.21.17.30"},{"ip":"172.21.17.31"},{"ip":"172.21.16.110"}],"ports":[{"name":"http-metrics","port":10251,"protocol":"TCP"}]}]}
  creationTimestamp: "2019-11-27T09:36:05Z"
  labels:
    k8s-app: kube-scheduler
  name: kube-scheduler
  namespace: kube-system
  resourceVersion: "21474441"
  selfLink: /api/v1/namespaces/kube-system/endpoints/kube-scheduler
  uid: 55570acc-10f9-11ea-bee0-fa163effd55b
subsets:
- addresses:
  - ip: 172.21.16.110
  - ip: 172.21.17.30
  - ip: 172.21.17.31
  ports:
  - name: http-metrics
    port: 10251
    protocol: TCP

10.8 Check the Cluster Endpoints Status

# kubectl get endpoints
NAME ENDPOINTS AGE
kubernetes 172.21.16.110:6443,172.21.17.30:6443,172.21.17.31:6443 18h

For a detailed explanation of the Kubernetes configuration parameters, see the official reference documentation.
