Installing Kubernetes v1.13.3

1. Environment Preparation

ip             type                  docker      os                             k8s version
172.21.17.4    master,etcd                       CentOS Linux release 7.4.1708  v1.13.3
172.21.16.230  master,etcd                       CentOS Linux release 7.4.1708
172.21.16.240  master,etcd                       CentOS Linux release 7.4.1708
172.21.16.244  node,flanneld,ha+kee  18.06.2-ce  CentOS Linux release 7.4.1708
172.21.16.248  node,flanneld,ha+kee  18.06.2-ce  CentOS Linux release 7.4.1708
172.21.16.45   vip                               CentOS Linux release 7.4.1708

2. Deploying the etcd Cluster

    A running etcd cluster is a prerequisite for the Kubernetes cluster, so deploy etcd first. Create the CA certificates and install the CFSSL certificate management tools; we download the binary packages directly.

2.1. Download cfssl

# curl -o cfssl https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
# curl -o cfssljson https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
# curl -o cfssl-certinfo https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
# chmod +x cfssl* && mv cfssl* /usr/bin/

2.2. Create the etcd certificates

  • etcd-ca-csr.json

    # mkdir etcd_ssl && cd etcd_ssl
    # cat etcd-ca-csr.json
    {
      "CN": "etcd-ca",
      "key": {
        "algo": "rsa",
        "size": 4096
      },
      "names": [
        {
          "O": "etcd",
          "OU": "etcd Security",
          "L": "Beijing",
          "ST": "Beijing",
          "C": "CN"
        }
      ],
      "ca": {
        "expiry": "87600h"
      }
    }
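cfssl's error messages for malformed CSR files are terse, so it can save time to syntax-check the JSON first. A minimal sketch, assuming python3 is on the PATH (the /tmp path and trimmed contents are stand-ins for the real CSR file):

```shell
# Write a trimmed-down CSR (stand-in for etcd-ca-csr.json) and syntax-check it.
cat > /tmp/demo-csr.json <<'EOF'
{
  "CN": "etcd-ca",
  "key": { "algo": "rsa", "size": 4096 }
}
EOF
# python3 -m json.tool exits non-zero with a parse error on bad JSON
python3 -m json.tool /tmp/demo-csr.json >/dev/null && echo "valid JSON"
```

The same check applies to every CSR and gencert file below.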
  • etcd-gencert.json

    # cat etcd-gencert.json
    {
      "signing": {
        "default": {
          "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
          ],
          "expiry": "87600h"
        }
      }
    }
  • etcd-csr.json

    # cat etcd-csr.json
    {
      "key": {
        "algo": "rsa",
        "size": 2048
      },
      "names": [
        {
          "O": "etcd",
          "OU": "etcd Security",
          "L": "Beijing",
          "ST": "Beijing",
          "C": "CN"
        }
      ],
      "CN": "etcd",
      "hosts": [
        "127.0.0.1",
        "localhost",
        "172.21.17.4",
        "172.21.16.231",
        "172.21.16.240"
      ]
    }
  • Now run cfssl to generate the certificates:

    # cfssl gencert --initca=true etcd-ca-csr.json | cfssljson --bare etcd-ca
    # cfssl gencert --ca etcd-ca.pem --ca-key etcd-ca-key.pem --config etcd-gencert.json etcd-csr.json | cfssljson --bare etcd
    # mkdir -p /etc/etcd/ssl && mkdir -p /var/lib/etcd
    # cp *.pem /etc/etcd/ssl
    # ls /etc/etcd/ssl/
    etcd-ca-key.pem etcd-ca.pem etcd-key.pem etcd.pem
    # scp -r /etc/etcd k8s-master-02:/etc
    # scp -r /etc/etcd k8s-master-03:/etc

2.3. Configure etcd

2.3.1. Download etcd

# wget https://github.com/etcd-io/etcd/releases/download/v3.3.15/etcd-v3.3.15-linux-amd64.tar.gz
# tar zxf etcd-v3.3.15-linux-amd64.tar.gz && cd etcd-v3.3.15-linux-amd64 && cp -arp etcd* /usr/bin/

2.3.2. Create the etcd systemd unit file

    We use etcd 3.3.15 here. Installation is simply a matter of copying the binary and the systemd service configuration, but take care with user permissions: the script and configuration below are modeled on the etcd RPM package.

2.3.3. Configure etcd.conf

  • k8s-master-01
    # cat /etc/etcd/etcd.conf
    # [member]
    ETCD_NAME=etcd1
    ETCD_DATA_DIR="/var/lib/etcd"
    ETCD_SNAPSHOT_COUNT="100"
    ETCD_HEARTBEAT_INTERVAL="100"
    ETCD_ELECTION_TIMEOUT="1000"
    ETCD_LISTEN_PEER_URLS="https://172.21.17.4:2380"
    ETCD_LISTEN_CLIENT_URLS="https://172.21.17.4:2379,http://127.0.0.1:2379"
    ETCD_MAX_SNAPSHOTS="5"
    ETCD_MAX_WALS="5"

    # [cluster]
    ETCD_INITIAL_ADVERTISE_PEER_URLS="https://172.21.17.4:2380"
    ETCD_INITIAL_CLUSTER="etcd1=https://172.21.17.4:2380,etcd2=https://172.21.16.231:2380,etcd3=https://172.21.16.240:2380"
    ETCD_INITIAL_CLUSTER_STATE="new"
    ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
    ETCD_ADVERTISE_CLIENT_URLS="https://172.21.17.4:2379"

    # [security]
    ETCD_CERT_FILE="/etc/etcd/ssl/etcd.pem"
    ETCD_KEY_FILE="/etc/etcd/ssl/etcd-key.pem"
    ETCD_CLIENT_CERT_AUTH="true"
    ETCD_TRUSTED_CA_FILE="/etc/etcd/ssl/etcd-ca.pem"
    ETCD_AUTO_TLS="true"
    ETCD_PEER_CERT_FILE="/etc/etcd/ssl/etcd.pem"
    ETCD_PEER_KEY_FILE="/etc/etcd/ssl/etcd-key.pem"
    ETCD_PEER_CLIENT_CERT_AUTH="true"
    ETCD_PEER_TRUSTED_CA_FILE="/etc/etcd/ssl/etcd-ca.pem"
    ETCD_PEER_AUTO_TLS="true"
  • k8s-master-02

    # cat /etc/etcd/etcd.conf
    # [member]
    ETCD_NAME=etcd2
    ETCD_DATA_DIR="/var/lib/etcd"
    ETCD_SNAPSHOT_COUNT="100"
    ETCD_HEARTBEAT_INTERVAL="100"
    ETCD_ELECTION_TIMEOUT="1000"
    ETCD_LISTEN_PEER_URLS="https://172.21.16.231:2380"
    ETCD_LISTEN_CLIENT_URLS="https://172.21.16.231:2379,http://127.0.0.1:2379"
    ETCD_MAX_SNAPSHOTS="5"
    ETCD_MAX_WALS="5"

    # [cluster]
    ETCD_INITIAL_ADVERTISE_PEER_URLS="https://172.21.16.231:2380"
    ETCD_INITIAL_CLUSTER="etcd1=https://172.21.17.4:2380,etcd2=https://172.21.16.231:2380,etcd3=https://172.21.16.240:2380"
    ETCD_INITIAL_CLUSTER_STATE="new"
    ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
    ETCD_ADVERTISE_CLIENT_URLS="https://172.21.16.231:2379"

    # [security]
    ETCD_CERT_FILE="/etc/etcd/ssl/etcd.pem"
    ETCD_KEY_FILE="/etc/etcd/ssl/etcd-key.pem"
    ETCD_CLIENT_CERT_AUTH="true"
    ETCD_TRUSTED_CA_FILE="/etc/etcd/ssl/etcd-ca.pem"
    ETCD_AUTO_TLS="true"
    ETCD_PEER_CERT_FILE="/etc/etcd/ssl/etcd.pem"
    ETCD_PEER_KEY_FILE="/etc/etcd/ssl/etcd-key.pem"
    ETCD_PEER_CLIENT_CERT_AUTH="true"
    ETCD_PEER_TRUSTED_CA_FILE="/etc/etcd/ssl/etcd-ca.pem"
    ETCD_PEER_AUTO_TLS="true"
  • k8s-master-03

    # cat /etc/etcd/etcd.conf
    # [member]
    ETCD_NAME=etcd3
    ETCD_DATA_DIR="/var/lib/etcd"
    ETCD_SNAPSHOT_COUNT="100"
    ETCD_HEARTBEAT_INTERVAL="100"
    ETCD_ELECTION_TIMEOUT="1000"
    ETCD_LISTEN_PEER_URLS="https://172.21.16.240:2380"
    ETCD_LISTEN_CLIENT_URLS="https://172.21.16.240:2379,http://127.0.0.1:2379"
    ETCD_MAX_SNAPSHOTS="5"
    ETCD_MAX_WALS="5"

    # [cluster]
    ETCD_INITIAL_ADVERTISE_PEER_URLS="https://172.21.16.240:2380"
    ETCD_INITIAL_CLUSTER="etcd1=https://172.21.17.4:2380,etcd2=https://172.21.16.231:2380,etcd3=https://172.21.16.240:2380"
    ETCD_INITIAL_CLUSTER_STATE="new"
    ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
    ETCD_ADVERTISE_CLIENT_URLS="https://172.21.16.240:2379"

    # [security]
    ETCD_CERT_FILE="/etc/etcd/ssl/etcd.pem"
    ETCD_KEY_FILE="/etc/etcd/ssl/etcd-key.pem"
    ETCD_CLIENT_CERT_AUTH="true"
    ETCD_TRUSTED_CA_FILE="/etc/etcd/ssl/etcd-ca.pem"
    ETCD_AUTO_TLS="true"
    ETCD_PEER_CERT_FILE="/etc/etcd/ssl/etcd.pem"
    ETCD_PEER_KEY_FILE="/etc/etcd/ssl/etcd-key.pem"
    ETCD_PEER_CLIENT_CERT_AUTH="true"
    ETCD_PEER_TRUSTED_CA_FILE="/etc/etcd/ssl/etcd-ca.pem"
    ETCD_PEER_AUTO_TLS="true"
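The three files differ only in ETCD_NAME and the member IPs, so a quick sanity check before starting etcd can catch a missed substitution. A minimal sketch, relying on the fact that this simple KEY=VALUE EnvironmentFile format is also valid shell (the /tmp path and trimmed contents are stand-ins for the real /etc/etcd/etcd.conf):

```shell
# Stand-in for /etc/etcd/etcd.conf with just the keys we want to verify.
cat > /tmp/etcd.conf <<'EOF'
ETCD_NAME=etcd1
ETCD_DATA_DIR="/var/lib/etcd"
ETCD_LISTEN_PEER_URLS="https://172.21.17.4:2380"
EOF

# Source the file, then confirm each required key is non-empty.
. /tmp/etcd.conf
for key in ETCD_NAME ETCD_DATA_DIR ETCD_LISTEN_PEER_URLS; do
  eval "val=\${$key}"
  [ -n "$val" ] || { echo "missing: $key"; exit 1; }
done
echo "etcd.conf ok: name=${ETCD_NAME}"
```

On each master, run the same loop against the real file and confirm the printed name matches the host.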

2.3.4. Create the etcd service file

# cat /lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
EnvironmentFile=-/etc/etcd/etcd.conf
User=etcd
# set GOMAXPROCS to number of processors
ExecStart=/bin/bash -c "GOMAXPROCS=$(nproc) /usr/bin/etcd --name=\"${ETCD_NAME}\" --data-dir=\"${ETCD_DATA_DIR}\" --listen-client-urls=\"${ETCD_LISTEN_CLIENT_URLS}\""
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

2.3.5. Create the etcd user and set permissions

# groupadd -r etcd
# useradd -r -g etcd -d /var/lib/etcd -s /sbin/nologin -c "etcd user" etcd
# chown -R etcd:etcd /etc/etcd && chmod -R 755 /etc/etcd/ssl && chown -R etcd:etcd /var/lib/etcd
# systemctl daemon-reload && systemctl enable etcd && systemctl start etcd && systemctl status etcd

2.3.6. Verify etcd

Because etcd is secured with TLS, etcdctl commands must be given the certificates:

  • List the members

    # etcdctl --key-file /etc/etcd/ssl/etcd-key.pem --cert-file /etc/etcd/ssl/etcd.pem --ca-file /etc/etcd/ssl/etcd-ca.pem member list
    93c04a995ff8aa8: name=etcd3 peerURLs=https://172.21.16.240:2380 clientURLs=https://172.21.16.240:2379 isLeader=false
    7cc4daf6e4db3a8a: name=etcd2 peerURLs=https://172.21.16.231:2380 clientURLs=https://172.21.16.231:2379 isLeader=false
    ec7ea930930d012e: name=etcd1 peerURLs=https://172.21.17.4:2380 clientURLs=https://172.21.17.4:2379 isLeader=true
  • Check cluster health

    # etcdctl --key-file /etc/etcd/ssl/etcd-key.pem --cert-file /etc/etcd/ssl/etcd.pem --ca-file /etc/etcd/ssl/etcd-ca.pem cluster-health
    member 93c04a995ff8aa8 is healthy: got healthy result from https://172.21.16.240:2379
    member 7cc4daf6e4db3a8a is healthy: got healthy result from https://172.21.16.231:2379
    member ec7ea930930d012e is healthy: got healthy result from https://172.21.17.4:2379
    cluster is healthy

3. Deploying Kubernetes

3.1. Overview

    Recent releases move ever closer to full TLS + RBAC, so this installation enables most of that configuration: kube-controller-manager and kube-scheduler no longer connect to kube-apiserver's unauthenticated port 8080, anonymous access to the kubelet API endpoints is disabled, RBAC is turned on, and so forth. To support this authentication, the following certificates must be signed.

3.2. Create the CA

3.2.1. Create the CA configuration files

  • kubernetes-ca-csr.json — the cluster's root CA certificate

    # mkdir ssl && cd ssl/
    # cat kubernetes-ca-csr.json
    {
      "CN": "kubernetes",
      "key": {
        "algo": "rsa",
        "size": 4096
      },
      "names": [
        {
          "C": "CN",
          "ST": "BeiJing",
          "L": "BeiJing",
          "O": "kubernetes",
          "OU": "System"
        }
      ],
      "ca": {
        "expiry": "87600h"
      }
    }
    • "CN" (Common Name): kube-apiserver extracts this field from the certificate as the requesting user name; browsers use it to validate a site's legitimacy.
    • "O" (Organization): kube-apiserver extracts this field as the group the requesting user belongs to.
  • kubernetes-gencert.json — the signing profile used to generate the other certificates

    # cat kubernetes-gencert.json
    {
      "signing": {
        "default": {
          "expiry": "87600h"
        },
        "profiles": {
          "kubernetes": {
            "usages": [
              "signing",
              "key encipherment",
              "server auth",
              "client auth"
            ],
            "expiry": "87600h"
          }
        }
      }
    }
  • kube-apiserver-csr.json — the certificate for the apiserver's TLS port

    # cat kube-apiserver-csr.json
    {
      "CN": "kubernetes",
      "hosts": [
        "127.0.0.1",
        "10.254.0.1",
        "localhost",
        "172.21.16.45",
        "*.master.kubernetes.node",
        "kubernetes",
        "kubernetes.default",
        "kubernetes.default.svc",
        "kubernetes.default.svc.cluster",
        "kubernetes.default.svc.cluster.local"
      ],
      "key": {
        "algo": "rsa",
        "size": 2048
      },
      "names": [
        {
          "C": "CN",
          "ST": "BeiJing",
          "L": "BeiJing",
          "O": "kubernetes",
          "OU": "System"
        }
      ]
    }
  • 172.21.16.45 is the VIP address.
  • If the hosts field is non-empty, it must list every IP or domain name authorized to use the certificate, including the first IP of the service-cluster-ip-range passed to kube-apiserver (here 10.254.0.1).
  • kube-controller-manager-csr.json — the certificate the controller manager uses to connect to the apiserver; its own port 10257 serves this certificate as well

    # cat kube-controller-manager-csr.json
    {
      "CN": "system:kube-controller-manager",
      "hosts": [
        "127.0.0.1",
        "localhost",
        "*.master.kubernetes.node"
      ],
      "key": {
        "algo": "rsa",
        "size": 2048
      },
      "names": [
        {
          "C": "CN",
          "ST": "BeiJing",
          "L": "BeiJing",
          "O": "system:kube-controller-manager",
          "OU": "System"
        }
      ]
    }
  • kube-scheduler-csr.json — the certificate the scheduler uses to connect to the apiserver; its own port 10259 serves this certificate as well

    # cat kube-scheduler-csr.json
    {
      "CN": "system:kube-scheduler",
      "hosts": [
        "127.0.0.1",
        "localhost",
        "*.master.kubernetes.node"
      ],
      "key": {
        "algo": "rsa",
        "size": 2048
      },
      "names": [
        {
          "C": "CN",
          "ST": "BeiJing",
          "L": "BeiJing",
          "O": "system:kube-scheduler",
          "OU": "System"
        }
      ]
    }
  • kube-proxy-csr.json — the certificate kube-proxy uses to connect to the apiserver

    # cat kube-proxy-csr.json
    {
      "CN": "system:kube-proxy",
      "hosts": [],
      "key": {
        "algo": "rsa",
        "size": 2048
      },
      "names": [
        {
          "C": "CN",
          "ST": "BeiJing",
          "L": "BeiJing",
          "O": "system:kube-proxy",
          "OU": "System"
        }
      ]
    }
  • kubelet-api-admin-csr.json — the certificate the apiserver uses for reverse connections to kubelet on port 10250 (for example when running kubectl logs)

    # cat kubelet-api-admin-csr.json
    {
      "CN": "system:kubelet-api-admin",
      "hosts": [],
      "key": {
        "algo": "rsa",
        "size": 2048
      },
      "names": [
        {
          "C": "CN",
          "ST": "BeiJing",
          "L": "BeiJing",
          "O": "system:kubelet-api-admin",
          "OU": "System"
        }
      ]
    }
  • admin-csr.json — the certificate cluster administrators (kubectl) use to connect to the apiserver

    # cat admin-csr.json
    {
      "CN": "system:masters",
      "hosts": [],
      "key": {
        "algo": "rsa",
        "size": 2048
      },
      "names": [
        {
          "C": "CN",
          "ST": "BeiJing",
          "L": "BeiJing",
          "O": "system:masters",
          "OU": "System"
        }
      ]
    }

Note: the CN and O fields in these certificates are special; most begin with system: so that they match Kubernetes' built-in RBAC rules.

3.3. Generate the certificates

# cfssl gencert --initca=true kubernetes-ca-csr.json | cfssljson --bare kubernetes-ca

# for targetName in kube-apiserver kube-controller-manager kube-scheduler kube-proxy kubelet-api-admin admin; do
cfssl gencert --ca kubernetes-ca.pem --ca-key kubernetes-ca-key.pem --config kubernetes-gencert.json --profile kubernetes $targetName-csr.json | cfssljson --bare $targetName;
done
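When the loop finishes, each component should have a certificate/key pair alongside the CA files. A minimal sketch of that check, run here against a scratch directory with placeholder files (on the master you would run the loop inside the ssl/ directory with the full component list instead):

```shell
# Scratch directory with placeholder .pem files standing in for real certs.
dir=$(mktemp -d)
cd "$dir"
touch kube-apiserver.pem kube-apiserver-key.pem admin.pem admin-key.pem

# Every component needs both <name>.pem and <name>-key.pem.
missing=0
for name in kube-apiserver admin; do
  for f in "$name.pem" "$name-key.pem"; do
    [ -f "$f" ] || { echo "missing: $f"; missing=1; }
  done
done
[ "$missing" -eq 0 ] && echo "all certificate pairs present"
```

A missing key usually means cfssljson failed silently for that component; re-run its gencert step and check for errors.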

3.4. Distribute the certificates

Copy the generated certificates and private keys (the .pem files) to every machine; the Kubernetes components use these TLS certificates to encrypt inter-component communication.

1) The generated certificate and key files:
  • admin-key.pem
  • admin.pem
  • kube-apiserver-key.pem
  • kube-apiserver.pem
  • kube-controller-manager-key.pem
  • kube-controller-manager.pem
  • kubelet-api-admin-key.pem
  • kubelet-api-admin.pem
  • kube-proxy-key.pem
  • kube-proxy.pem
  • kubernetes-ca-key.pem
  • kubernetes-ca.pem
  • kube-scheduler-key.pem
  • kube-scheduler.pem
2) Copy the certificates
  • To the other masters and the nodes

    # mkdir -p /etc/kubernetes/ssl
    # cp *.pem /etc/kubernetes/ssl/
    # scp -r /etc/kubernetes k8s-master-02:/etc
    # scp -r /etc/kubernetes k8s-master-03:/etc
    # scp -r /etc/kubernetes node-01:/etc
    # scp -r /etc/kubernetes node-02:/etc
  • Create the working directories (on every machine)

    # mkdir -p /var/log/kube-audit && mkdir /var/lib/kubelet -p && mkdir /usr/libexec -p

4. Create the kubeconfig Files

    kubelet, kube-proxy and the other node components must authenticate and be authorized when they talk to kube-apiserver on the masters. Since Kubernetes 1.4, the TLS bootstrapping feature lets kube-apiserver issue TLS client certificates on behalf of clients, so a certificate no longer has to be pre-generated for every client; at present this feature only issues certificates for kubelet.

4.1. Files to generate

  • bootstrap.kubeconfig — used by kubelet during the TLS bootstrap phase
  • kube-controller-manager.kubeconfig — configuration for the controller manager's secure port and RBAC authentication
  • kube-scheduler.kubeconfig — configuration for the scheduler's secure port and RBAC authentication
  • kube-proxy.kubeconfig — configuration kube-proxy uses to connect to the apiserver
  • audit-policy.yaml — the apiserver audit-log policy
  • bootstrap.secret.yaml — kubelet TLS bootstrap uses a Bootstrap Token, which must be created in advance

4.2. Create the kubelet bootstrapping kubeconfig

Before this we need to download the Kubernetes binary packages and copy the tools and commands into /usr/bin. Download the server tarball:

# wget https://dl.k8s.io/v1.13.3/kubernetes-server-linux-amd64.tar.gz
# tar zxf kubernetes-server-linux-amd64.tar.gz && cd kubernetes/server/bin
  • Copy on the master nodes
# mv apiextensions-apiserver cloud-controller-manager hyperkube kube-apiserver kube-controller-manager kube-proxy kube-scheduler kubectl kubelet mounter kubeadm /usr/bin/ && cd && rm -rf kubernetes kubernetes-server-linux-amd64.tar.gz

4.2.1. Set the bootstrap variables

  • master-01
        Every kubeconfig generated below connects to the secure port 6443; this variable points them all at the master VIP address, 172.21.16.45:6443.
# export KUBE_APISERVER="https://172.21.16.45:6443"
  • Generate the Bootstrap Token
    # BOOTSTRAP_TOKEN_ID=$(head -c 6 /dev/urandom | md5sum | head -c 6)
    # BOOTSTRAP_TOKEN_SECRET=$(head -c 16 /dev/urandom | md5sum | head -c 16)
    # BOOTSTRAP_TOKEN="${BOOTSTRAP_TOKEN_ID}.${BOOTSTRAP_TOKEN_SECRET}"
    # echo "Bootstrap Token: ${BOOTSTRAP_TOKEN}"
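kube-apiserver only accepts bootstrap tokens of the form [a-z0-9]{6}.[a-z0-9]{16}; since md5sum prints lowercase hex, the commands above always satisfy that. A minimal sketch that regenerates a token the same way and verifies its shape:

```shell
# Generate a token id/secret exactly as above.
BOOTSTRAP_TOKEN_ID=$(head -c 6 /dev/urandom | md5sum | head -c 6)
BOOTSTRAP_TOKEN_SECRET=$(head -c 16 /dev/urandom | md5sum | head -c 16)
BOOTSTRAP_TOKEN="${BOOTSTRAP_TOKEN_ID}.${BOOTSTRAP_TOKEN_SECRET}"

# Hex output is a subset of the required [a-z0-9] alphabet.
echo "${BOOTSTRAP_TOKEN}" | grep -Eq '^[a-f0-9]{6}\.[a-f0-9]{16}$' && echo "token format ok"
```

If the format check fails, the apiserver will silently reject the token during bootstrap, so it is worth verifying here.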

4.2.2. Generate the kubelet TLS bootstrap kubeconfig

# kubectl config set-cluster kubernetes \
--certificate-authority=kubernetes-ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=bootstrap.kubeconfig

# kubectl config set-credentials "system:bootstrap:${BOOTSTRAP_TOKEN_ID}" \
--token=${BOOTSTRAP_TOKEN} \
--kubeconfig=bootstrap.kubeconfig

# kubectl config set-context default \
--cluster=kubernetes \
--user="system:bootstrap:${BOOTSTRAP_TOKEN_ID}" \
--kubeconfig=bootstrap.kubeconfig

# kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
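A kubeconfig is plain YAML, so even without kubectl you can confirm that use-context took effect: current-context should now be default. A minimal sketch against a stand-in file (on the master you would grep bootstrap.kubeconfig itself):

```shell
# Stand-in for the kubeconfig produced by the commands above.
cat > /tmp/demo.kubeconfig <<'EOF'
apiVersion: v1
kind: Config
current-context: default
EOF

grep -q '^current-context: default$' /tmp/demo.kubeconfig && echo "context set"
```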

4.2.3. Generate the kube-controller-manager kubeconfig

# kubectl config set-cluster kubernetes \
--certificate-authority=kubernetes-ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=kube-controller-manager.kubeconfig

# kubectl config set-credentials "system:kube-controller-manager" \
--client-certificate=kube-controller-manager.pem \
--client-key=kube-controller-manager-key.pem \
--embed-certs=true \
--kubeconfig=kube-controller-manager.kubeconfig

# kubectl config set-context default \
--cluster=kubernetes \
--user=system:kube-controller-manager \
--kubeconfig=kube-controller-manager.kubeconfig

# kubectl config use-context default --kubeconfig=kube-controller-manager.kubeconfig

4.2.4. Generate the kube-scheduler kubeconfig

# kubectl config set-cluster kubernetes \
--certificate-authority=kubernetes-ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=kube-scheduler.kubeconfig

# kubectl config set-credentials "system:kube-scheduler" \
--client-certificate=kube-scheduler.pem \
--client-key=kube-scheduler-key.pem \
--embed-certs=true \
--kubeconfig=kube-scheduler.kubeconfig

# kubectl config set-context default \
--cluster=kubernetes \
--user=system:kube-scheduler \
--kubeconfig=kube-scheduler.kubeconfig

# kubectl config use-context default --kubeconfig=kube-scheduler.kubeconfig

4.2.5. Generate the kube-proxy kubeconfig

# kubectl config set-cluster kubernetes \
--certificate-authority=kubernetes-ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=kube-proxy.kubeconfig

# kubectl config set-credentials "system:kube-proxy" \
--client-certificate=kube-proxy.pem \
--client-key=kube-proxy-key.pem \
--embed-certs=true \
--kubeconfig=kube-proxy.kubeconfig

# kubectl config set-context default \
--cluster=kubernetes \
--user=system:kube-proxy \
--kubeconfig=kube-proxy.kubeconfig

# kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

4.2.6. Generate the apiserver audit policy

# cat >> audit-policy.yaml <<EOF
# Log all requests at the Metadata level.
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: Metadata
EOF

4.2.7. Generate the TLS bootstrap token Secret

# cat >> bootstrap.secret.yaml <<EOF
apiVersion: v1
kind: Secret
metadata:
  # Name MUST be of form "bootstrap-token-<token id>"
  name: bootstrap-token-${BOOTSTRAP_TOKEN_ID}
  namespace: kube-system
# Type MUST be 'bootstrap.kubernetes.io/token'
type: bootstrap.kubernetes.io/token
stringData:
  # Human readable description. Optional.
  description: "The default bootstrap token."
  # Token ID and secret. Required.
  token-id: ${BOOTSTRAP_TOKEN_ID}
  token-secret: ${BOOTSTRAP_TOKEN_SECRET}
  # Expiration. Optional.
  expiration: $(date -d'+2 day' -u +"%Y-%m-%dT%H:%M:%SZ")
  # Allowed usages.
  usage-bootstrap-authentication: "true"
  usage-bootstrap-signing: "true"
  # Extra groups to authenticate the token as. Must start with "system:bootstrappers:"
  # auth-extra-groups: system:bootstrappers:worker,system:bootstrappers:ingress
EOF
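The expiration field must be an RFC 3339 UTC timestamp, or the token controller rejects the Secret. A minimal sketch confirming the GNU date invocation used in the heredoc produces that shape:

```shell
# Same invocation as in the heredoc: two days from now, in UTC.
exp=$(date -d '+2 day' -u +"%Y-%m-%dT%H:%M:%SZ")

# RFC 3339 with a trailing Z, e.g. 2019-03-01T08:00:00Z
echo "$exp" | grep -Eq '^[0-9]{4}-[0-9]{2}-[0-9]{2}T[0-9]{2}:[0-9]{2}:[0-9]{2}Z$' \
  && echo "expiration format ok"
```

Note the two-day expiry: the node bootstrap must happen within that window, or the Secret has to be recreated.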

4.3. Copy the files

Copy the files just generated into /etc/kubernetes:

# master nodes (kube-controller-manager.kubeconfig is also needed there)
# cp audit-policy.yaml bootstrap.kubeconfig bootstrap.secret.yaml kube-controller-manager.kubeconfig kube-proxy.kubeconfig kube-scheduler.kubeconfig /etc/kubernetes
# scp -r audit-policy.yaml bootstrap.kubeconfig bootstrap.secret.yaml kube-controller-manager.kubeconfig kube-proxy.kubeconfig kube-scheduler.kubeconfig k8s-master-02:/etc/kubernetes
# scp -r audit-policy.yaml bootstrap.kubeconfig bootstrap.secret.yaml kube-controller-manager.kubeconfig kube-proxy.kubeconfig kube-scheduler.kubeconfig k8s-master-03:/etc/kubernetes
# node nodes
# scp -r bootstrap.kubeconfig kube-proxy.kubeconfig node-01:/etc/kubernetes
# scp -r bootstrap.kubeconfig kube-proxy.kubeconfig node-02:/etc/kubernetes

4.4. Configure IPVS and its dependencies

    kube-proxy now uses IPVS for load balancing, so for kube-proxy to work correctly the IPVS configuration and its dependencies must be prepared in advance on every node:

# cat >> /etc/sysctl.conf <<EOF
net.ipv4.ip_forward=1
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
EOF
# sysctl -p
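sysctl -p aborts at the first malformed line, so it is worth checking that everything appended parses as a key=value pair. A minimal sketch over a stand-in file (on a node you would point it at /etc/sysctl.conf):

```shell
# Stand-in for the lines appended to /etc/sysctl.conf above.
cat > /tmp/k8s-sysctl.conf <<'EOF'
net.ipv4.ip_forward=1
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
EOF

# Count lines that are not simple key=value pairs; 0 means the file is clean.
bad=$(grep -Evc '^[a-z0-9._-]+=[0-9]+$' /tmp/k8s-sysctl.conf || true)
[ "$bad" -eq 0 ] && echo "sysctl entries ok"
```

The bridge-nf keys also require the br_netfilter module to be loaded before sysctl -p will accept them.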

To enable IPVS in Kubernetes, install ipvsadm and load the IPVS kernel modules; see the official documentation for details:

# yum -y install ipvsadm
# cat >> /etc/modules <<EOF
ip_vs
ip_vs_lc
ip_vs_wlc
ip_vs_rr
ip_vs_wrr
ip_vs_lblc
ip_vs_lblcr
ip_vs_dh
ip_vs_sh
ip_vs_fo
ip_vs_nq
ip_vs_sed
ip_vs_ftp
EOF

5. Configure and Start kube-apiserver

5.1. Create the service file

  • kube-apiserver.service
    # cat /usr/lib/systemd/system/kube-apiserver.service
    [Unit]
    Description=Kubernetes API Service
    Documentation=https://github.com/GoogleCloudPlatform/kubernetes
    After=network.target
    After=etcd.service
    [Service]
    EnvironmentFile=-/etc/kubernetes/apiserver
    ExecStart=/usr/bin/kube-apiserver \
    $KUBE_LOGTOSTDERR \
    $KUBE_LOG_LEVEL \
    $KUBE_ETCD_SERVERS \
    $KUBE_API_ADDRESS \
    $KUBE_API_PORT \
    $KUBELET_PORT \
    $KUBE_ALLOW_PRIV \
    $KUBE_SERVICE_ADDRESSES \
    $KUBE_ADMISSION_CONTROL \
    $KUBE_API_ARGS
    Restart=on-failure
    Type=notify
    LimitNOFILE=65536
    [Install]
    WantedBy=multi-user.target

5.2. apiserver configuration file

# cat /etc/kubernetes/apiserver
###
# kubernetes system config
#
# The following values are used to configure the kube-apiserver
#

# The address on the local server to listen to.
KUBE_API_ADDRESS="--advertise-address=172.21.17.4 --bind-address=0.0.0.0"

# The port on the local server to listen on.
KUBE_API_PORT="--secure-port=6443"

# Port minions listen on
# KUBELET_PORT="--kubelet-port=10250"

# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=https://172.21.17.4:2379,https://172.21.16.230:2379,https://172.21.16.240:2379"

# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"

# default admission control policies
KUBE_ADMISSION_CONTROL="--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,Priority,ResourceQuota"

# Add your own!
KUBE_API_ARGS=" --allow-privileged=true \
--anonymous-auth=false \
--alsologtostderr \
--apiserver-count=3 \
--audit-log-maxage=30 \
--audit-log-maxbackup=3 \
--audit-log-maxsize=100 \
--audit-log-path=/var/log/kube-audit/audit.log \
--audit-policy-file=/etc/kubernetes/audit-policy.yaml \
--authorization-mode=Node,RBAC \
--client-ca-file=/etc/kubernetes/ssl/kubernetes-ca.pem \
--enable-bootstrap-token-auth \
--enable-garbage-collector \
--enable-logs-handler \
--endpoint-reconciler-type=lease \
--etcd-cafile=/etc/etcd/ssl/etcd-ca.pem \
--etcd-certfile=/etc/etcd/ssl/etcd.pem \
--etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \
--etcd-compaction-interval=0s \
--event-ttl=168h0m0s \
--kubelet-https=true \
--kubelet-certificate-authority=/etc/kubernetes/ssl/kubernetes-ca.pem \
--kubelet-client-certificate=/etc/kubernetes/ssl/kubelet-api-admin.pem \
--kubelet-client-key=/etc/kubernetes/ssl/kubelet-api-admin-key.pem \
--kubelet-timeout=3s \
--runtime-config=api/all=true \
--service-node-port-range=30000-50000 \
--service-account-key-file=/etc/kubernetes/ssl/kubernetes-ca.pem \
--tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem \
--tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem \
--v=2"
  • --client-ca-file: the client CA certificate
  • --endpoint-reconciler-type: the master endpoint reconcile strategy
  • --kubelet-client-certificate / --kubelet-client-key: the certificate the master uses for reverse connections to kubelet
  • --service-account-key-file: the key used to verify service account token signatures
  • --tls-cert-file / --tls-private-key-file: the certificate for the apiserver's port 6443
    See the kube-apiserver reference documentation for the full flag list.

5.2.1. Start kube-apiserver

# systemctl daemon-reload
# systemctl enable kube-apiserver && systemctl start kube-apiserver && systemctl status kube-apiserver

5.3. Configure kube-controller-manager

Create the kube-controller-manager service file.

5.3.1. Create the kube-controller-manager service file

# cat /usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/controller-manager
ExecStart=/usr/bin/kube-controller-manager \
$KUBE_LOGTOSTDERR \
$KUBE_LOG_LEVEL \
$KUBE_MASTER \
$KUBE_CONTROLLER_MANAGER_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

5.3.2. Configure controller-manager

# cat /etc/kubernetes/controller-manager
###
# The following values are used to configure the kubernetes controller-manager

# defaults from config and apiserver should be adequate

# Add your own!
KUBE_CONTROLLER_MANAGER_ARGS=" --address=0.0.0.0 \
--authentication-kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig \
--authorization-kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig \
--bind-address=0.0.0.0 \
--cluster-name=kubernetes \
--cluster-signing-cert-file=/etc/kubernetes/ssl/kubernetes-ca.pem \
--cluster-signing-key-file=/etc/kubernetes/ssl/kubernetes-ca-key.pem \
--client-ca-file=/etc/kubernetes/ssl/kubernetes-ca.pem \
--controllers=*,bootstrapsigner,tokencleaner \
--deployment-controller-sync-period=10s \
--experimental-cluster-signing-duration=87600h0m0s \
--enable-garbage-collector=true \
--kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig \
--leader-elect=true \
--node-monitor-grace-period=20s \
--node-monitor-period=5s \
--port=10252 \
--pod-eviction-timeout=2m0s \
--requestheader-client-ca-file=/etc/kubernetes/ssl/kubernetes-ca.pem \
--terminated-pod-gc-threshold=50 \
--tls-cert-file=/etc/kubernetes/ssl/kube-controller-manager.pem \
--tls-private-key-file=/etc/kubernetes/ssl/kube-controller-manager-key.pem \
--root-ca-file=/etc/kubernetes/ssl/kubernetes-ca.pem \
--secure-port=10257 \
--service-cluster-ip-range=10.254.0.0/16 \
--service-account-private-key-file=/etc/kubernetes/ssl/kubernetes-ca-key.pem \
--use-service-account-credentials=true \
--v=2"

    The controller manager binds its insecure port 10252 locally so that kubectl get cs returns correct results, and exposes the secure port 10257 on 0.0.0.0 for service calls. Because the controller manager now connects to the apiserver's authenticated port 6443, the --use-service-account-credentials option is needed so that it creates a separate service account per controller (by default the system:kube-controller-manager user does not have sufficient privileges).

# kubectl get componentstatuses
NAME STATUS MESSAGE ERROR
controller-manager Unhealthy Get http://127.0.0.1:10252/healthz: dial tcp 127.0.0.1:10252: connect: connection refused
scheduler Unhealthy Get http://127.0.0.1:10251/healthz: dial tcp 127.0.0.1:10251: connect: connection refused
etcd-0 Healthy {"health":"true"}
etcd-1 Healthy {"health":"true"}
etcd-2 Healthy {"health":"true"}

5.3.3. Start kube-controller-manager

# systemctl daemon-reload

# systemctl enable kube-controller-manager && systemctl start kube-controller-manager && systemctl status kube-controller-manager

5.4. Configure kube-scheduler

Create the kube-scheduler service file.

5.4.1. Create the kube-scheduler service file

# cat /lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler Plugin
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/scheduler
ExecStart=/usr/bin/kube-scheduler \
$KUBE_LOGTOSTDERR \
$KUBE_LOG_LEVEL \
$KUBE_MASTER \
$KUBE_SCHEDULER_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

5.4.2. Create the scheduler configuration file

# cat /etc/kubernetes/scheduler 
###
# kubernetes scheduler config

# default config should be adequate

# Add your own!
KUBE_SCHEDULER_ARGS=" --address=0.0.0.0 \
--authentication-kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig \
--authorization-kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig \
--bind-address=0.0.0.0 \
--client-ca-file=/etc/kubernetes/ssl/kubernetes-ca.pem \
--kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig \
--requestheader-client-ca-file=/etc/kubernetes/ssl/kubernetes-ca.pem \
--secure-port=10259 \
--leader-elect=true \
--port=10251 \
--tls-cert-file=/etc/kubernetes/ssl/kube-scheduler.pem \
--tls-private-key-file=/etc/kubernetes/ssl/kube-scheduler-key.pem \
--v=2"

Like the controller manager, the scheduler binds its insecure port locally and exposes the secure port publicly.

5.4.3. Start kube-scheduler

# systemctl daemon-reload
# systemctl enable kube-scheduler && systemctl start kube-scheduler && systemctl status kube-scheduler

5.5. Verify the master nodes

# kubectl get componentstatuses
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-2 Healthy {"health":"true"}
etcd-0 Healthy {"health":"true"}
etcd-1 Healthy {"health":"true"}

The master nodes are now fully deployed.

For high availability, the Kubernetes apiservers are fronted by haproxy; see the haproxy installation guide.

Node installation is covered in the next part.
