Traefik 2.0 Canary Releases

Canary Releases

        One of the more powerful features of Traefik 2.0 is canary releasing, sometimes called a canary deployment: a portion of live traffic is routed to a test version of a service so it can be observed under real conditions and verified before being fully rolled out.

Deployment

        Here we deploy two nginx services and use Traefik to control the traffic, sending 3/5 of requests to the v1 version and 2/5 to the v2 version. This uses the weighted round robin (WRR) feature provided in Traefik 2.0.
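To illustrate how weighted round robin produces a 3:2 split, here is a minimal Python sketch of the "smooth" WRR algorithm (the variant popularized by nginx). Traefik's internal implementation may differ in detail, but the resulting distribution for weights 3 and 2 is the same:

```python
def smooth_wrr(services, n):
    """Yield n picks from services, a dict of name -> weight (smooth WRR)."""
    current = {name: 0 for name in services}
    total = sum(services.values())
    picks = []
    for _ in range(n):
        # Each round every service gains its weight; the service with the
        # highest running score wins and "pays back" the total weight,
        # which interleaves picks instead of bunching them together.
        for name, weight in services.items():
            current[name] += weight
        best = max(current, key=current.get)
        current[best] -= total
        picks.append(best)
    return picks

# Over any 5 consecutive requests, appv1 is picked 3 times and appv2 twice.
print(smooth_wrr({"appv1": 3, "appv2": 2}, 5))
# → ['appv1', 'appv2', 'appv1', 'appv2', 'appv1']
```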

Deploying the nginx resources

  • nginx-appv1.yaml

    # cat >nginx-appv1.yaml <<EOF
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: appv1
      namespace: default
    spec:
      selector:
        matchLabels:
          app: appv1
      template:
        metadata:
          labels:
            use: test
            app: appv1
        spec:
          containers:
          - name: nginx
            image: nginx
            imagePullPolicy: IfNotPresent
            lifecycle:
              postStart:
                exec:
                  command: ["/bin/sh", "-c", "echo Hello v1 > /usr/share/nginx/html/index.html"]
            ports:
            - containerPort: 80
              name: portv1

    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: appv1
      namespace: default
    spec:
      selector:
        app: appv1
      ports:
      - name: http
        port: 80
        targetPort: portv1
    EOF
  • nginx-appv2.yaml

    # cat > nginx-appv2.yaml <<EOF
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: appv2
      namespace: default
    spec:
      selector:
        matchLabels:
          app: appv2
      template:
        metadata:
          labels:
            use: test
            app: appv2
        spec:
          containers:
          - name: nginx
            image: nginx
            imagePullPolicy: IfNotPresent
            lifecycle:
              postStart:
                exec:
                  command: ["/bin/sh", "-c", "echo Hello v2 > /usr/share/nginx/html/index.html"]
            ports:
            - containerPort: 80
              name: portv2

    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: appv2
      namespace: default
    spec:
      selector:
        app: appv2
      ports:
      - name: http
        port: 80
        targetPort: portv2
    EOF

Create the resources

# kubectl apply -f ./
# kubectl get pods -l use=test
NAME                     READY   STATUS    RESTARTS   AGE
appv1-6f88c7b898-qx2pc   2/2     Running   0          23h
appv2-558fdbbdb7-6gd8l   2/2     Running   0          23h

        The TraefikService CRD was already defined in the CRD manifests when Traefik was installed, so we can use that resource object directly to configure WRR.

Create the manifests

  • nginx-wrr.yaml

    # cat >nginx-wrr.yaml<<EOF
    ---
    apiVersion: traefik.containo.us/v1alpha1
    kind: TraefikService
    metadata:
      name: app-wrr
    spec:
      weighted:
        services:
        - name: appv1
          weight: 3
          port: 80
          kind: Service
        - name: appv2
          weight: 2
          port: 80
    EOF
  • nginx-ingressroute.yaml
            Create an IngressRoute resource object for the canary service.

    # cat >nginx-ingressroute.yaml<<EOF
    apiVersion: traefik.containo.us/v1alpha1
    kind: IngressRoute
    metadata:
      name: wrringressroute
      namespace: default
    spec:
      entryPoints:
      - web
      routes:
      - match: Host(`nginx.xxlaila.cn`)
        kind: Rule
        services:
        - name: app-wrr
          kind: TraefikService
    EOF

Notes:

  • weight: 3: sets the weight for the service
  • kind: Service: optional; Service is the default
  • Create the resources:

    # kubectl apply -f nginx-wrr.yaml
    # kubectl apply -f nginx-ingressroute.yaml

        Access nginx.xxlaila.cn in a browser. Over 5 consecutive requests, three were served by appv1 and two by appv2, matching the 3:2 weight configuration.

# kubectl logs -f appv1-6f88c7b898-qx2pc nginx

# kubectl logs -f appv2-558fdbbdb7-6gd8l nginx

Traffic Mirroring

        Traefik 2.0 also introduces mirroring services: a way to duplicate incoming traffic and send the copies to another service. The mirror receives a given percentage of the requests, and its responses are ignored. In Traefik 2.0 this could only be configured through the file provider; since version 2.1 it can be configured with a TraefikService resource object.
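The behavior described above can be illustrated with a minimal Python sketch: every request goes to the main service, and requests are additionally copied to the mirror only while the mirrored share stays below the configured percentage. The deterministic counter logic here is an assumption for illustration; Traefik's internals may differ:

```python
def mirror_route(total, percent):
    """Simulate `total` requests; return (main_hits, mirror_hits)."""
    main = mirror = 0
    for _ in range(total):
        main += 1  # the main service always receives the request
        # copy to the mirror only while the mirrored share is below `percent`
        if mirror * 100 < main * percent:
            mirror += 1
    return main, mirror

# With percent: 50, half of 6 requests are also sent to the mirror.
print(mirror_route(6, 50))
# → (6, 3)
```

Note that the main service still handles 100% of the traffic; mirroring adds copies rather than splitting requests, which is what distinguishes it from the WRR setup above.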

        Reusing the nginx services deployed earlier, create an IngressRoute object that mirrors 50% of service v1's traffic to service v2.

  • mirror-ingress-route.yaml
    # cat > mirror-ingress-route.yaml<<EOF
    apiVersion: traefik.containo.us/v1alpha1
    kind: TraefikService
    metadata:
      name: app-mirror
    spec:
      mirroring:
        name: appv1
        port: 80
        mirrors:
        - name: appv2
          percent: 50
          port: 80
    ---
    apiVersion: traefik.containo.us/v1alpha1
    kind: IngressRoute
    metadata:
      name: mirror-ingress-route
      namespace: default
    spec:
      entryPoints:
      - web
      routes:
      - match: Host(`mirror.xxlaila.cn`)
        kind: Rule
        services:
        - name: app-mirror
          kind: TraefikService
    EOF

    # kubectl apply -f mirror-ingress-route.yaml
    ingressroute.traefik.containo.us/mirror-ingress-route created
    traefikservice.traefik.containo.us/app-mirror created

Notes:

  • mirroring.name: appv1: sends 100% of the requests to the K8s Service appv1
  • mirrors.name: appv2: additionally copies 50% of those requests to appv2
  • kind: TraefikService: references the declared TraefikService instead of a K8s Service

        Test by accessing mirror.xxlaila.cn in a browser. Over 6 requests, half of them are also routed to appv2.

# kubectl logs -f appv1-6f88c7b898-qx2pc nginx

# kubectl logs -f appv2-558fdbbdb7-6gd8l nginx

        Since tracing.zipkin was enabled when Traefik was installed, the path of each request can be observed in the tracing UI while accessing the services in a browser.

TCP

        Traefik 2.0 also supports TCP services. Here we use redis and mongodb as examples to test how Traefik handles TCP services.

Redis service

  • redis.yaml
    # cat > redis.yaml<<EOF
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: redis
    spec:
      selector:
        matchLabels:
          app: redis
      template:
        metadata:
          labels:
            app: redis
        spec:
          containers:
          - name: redis
            image: redis:4
            ports:
            - containerPort: 6379
              protocol: TCP

    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: redis
    spec:
      ports:
      - port: 6379
        targetPort: 6379
      selector:
        app: redis
    EOF

    # kubectl apply -f redis.yaml
Deploying the mongo service
# cat >mongo.yaml <<EOF
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongo
  labels:
    app: mongo
spec:
  selector:
    matchLabels:
      app: mongo
  template:
    metadata:
      labels:
        app: mongo
    spec:
      containers:
      - name: mongo
        image: mongo:4.0
        ports:
        - containerPort: 27017
---
apiVersion: v1
kind: Service
metadata:
  name: mongo
spec:
  selector:
    app: mongo
  ports:
  - port: 27017
EOF

# kubectl apply -f mongo.yaml

Exposing the TCP services

        TCP routing in Traefik matches on SNI, and SNI depends on TLS, so a certificate is normally required; without one, the wildcard rule HostSNI(`*`) can be used instead. Here we create IngressRouteTCP objects; the corresponding CRD was already added to the CRD manifests during installation. With Traefik 2.1 I was able to match on a real domain name successfully.

# cat >redis-ingressroute-tcp.yaml <<EOF
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRouteTCP
metadata:
  name: redis-traefik-tcp
  namespace: kube-ops
spec:
  entryPoints:
  - redis
  routes:
  - match: HostSNI(`redis.ops.xxlaila.cn`)
    services:
    - name: redis
      port: 6379
EOF

# kubectl apply -f redis-ingressroute-tcp.yaml
# cat >mongo-ingressroute-tcp.yaml <<EOF
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRouteTCP
metadata:
  name: mongo-traefik-tcp
  namespace: default
spec:
  entryPoints:
  - mongo
  routes:
  - match: HostSNI(`mongo.ops.xxlaila.cn`)
    services:
    - name: mongo
      port: 27017
EOF

# kubectl apply -f mongo-ingressroute-tcp.yaml

        Note that the entryPoints sections here must match the entry points defined in Traefik's static configuration at startup; for example, a dedicated entry point for Redis can be declared there, which was already done during installation.
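For reference, the static-configuration side of those entry points might look roughly like the following sketch; the port numbers here are assumptions and must match whatever was chosen when Traefik was installed:

```yaml
# Traefik static configuration (sketch): one entry point per TCP service.
# Port numbers are assumptions; use the ports chosen at install time.
entryPoints:
  web:
    address: ":80"
  redis:
    address: ":6379"
  mongo:
    address: ":27017"
```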

