Resolving Istio Deployment Errors

Preface

        In a previous article I walked through a basic Istio deployment, but an error came up while deploying bookinfo. Until that error is resolved there is no moving forward: the routing rules that come later cannot be studied or tested at all.

Resolving the Istio Errors

         The error itself is recorded in the Istio deployment article; this post works through fixing it.

Checking the Logs

apiserver logs

         This error is a problem with reaching the k8s apiserver, most likely a timeout. We can check the apiserver logs, filtering for them with the journalctl command:

# journalctl  -u  kube-apiserver  -f

Nov 08 09:59:33 k8s-master-01-3.kxl kube-apiserver[31393]: I1108 09:59:33.659161 31393 trace.go:81] Trace[40457478]: "Create /apis/networking.istio.io/v1alpha3/namespaces/istio-system/gateways" (started: 2019-11-08 09:59:03.657132211 +0800 CST m=+328870.679516549) (total time: 30.001964129s):
Nov 08 09:59:33 k8s-master-01-3.kxl kube-apiserver[31393]: Trace[40457478]: [30.001964129s] [30.001043358s] END
Nov 08 09:59:33 k8s-master-01-3.kxl kube-apiserver[31393]: W1108 09:59:33.659790 31393 dispatcher.go:73] Failed calling webhook, failing closed pilot.validation.istio.io: failed calling webhook "pilot.validation.istio.io": Post https://istio-galley.istio-system.svc:443/admitpilot?timeout=30s: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Nov 08 09:59:39 k8s-master-01-3.kxl kube-apiserver[31393]: I1108 09:59:39.979543 31393 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
Nov 08 10:00:03 k8s-master-01-3.kxl kube-apiserver[31393]: W1108 10:00:03.764977 31393 dispatcher.go:73] Failed calling webhook, failing closed pilot.validation.istio.io: failed calling webhook "pilot.validation.istio.io": Post https://istio-galley.istio-system.svc:443/admitpilot?timeout=30s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Nov 08 10:00:03 k8s-master-01-3.kxl kube-apiserver[31393]: I1108 10:00:03.765401 31393 trace.go:81] Trace[1649710078]: "Create /apis/networking.istio.io/v1alpha3/namespaces/default/destinationrules" (started: 2019-11-08 09:59:33.763211641 +0800 CST m=+328900.785596022) (total time: 30.00209862s):
Nov 08 10:00:03 k8s-master-01-3.kxl kube-apiserver[31393]: Trace[1649710078]: [30.00209862s] [30.001534667s] END
Nov 08 10:00:33 k8s-master-01-3.kxl kube-apiserver[31393]: I1108 10:00:33.840606 31393 trace.go:81] Trace[970347589]: "Create /apis/networking.istio.io/v1alpha3/namespaces/weather/virtualservices" (started: 2019-11-08 10:00:03.83792882 +0800 CST m=+328930.860313362) (total time: 30.002612137s):
Nov 08 10:00:33 k8s-master-01-3.kxl kube-apiserver[31393]: Trace[970347589]: [30.002612137s] [30.001075132s] END
Nov 08 10:00:33 k8s-master-01-3.kxl kube-apiserver[31393]: W1108 10:00:33.841663 31393 dispatcher.go:73] Failed calling webhook, failing closed pilot.validation.istio.io: failed calling webhook "pilot.validation.istio.io": Post https://istio-galley.istio-system.svc:443/admitpilot?timeout=30s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Nov 08 10:00:38 k8s-master-01-3.kxl kube-apiserver[31393]: I1108 10:00:38.260710 31393 trace.go:81] Trace[460935607]: "GuaranteedUpdate etcd3: *v1.Endpoints" (started: 2019-11-08 10:00:37.644096515 +0800 CST m=+328964.666480867) (total time: 616.515599ms):
Nov 08 10:00:38 k8s-master-01-3.kxl kube-apiserver[31393]: Trace[460935607]: [533.664848ms] [449.34458ms] Transaction prepared
Nov 08 10:00:39 k8s-master-01-3.kxl kube-apiserver[31393]: I1108 10:00:39.986622 31393 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
Nov 08 10:01:38 k8s-master-01-3.kxl kube-apiserver[31393]: I1108 10:01:38.780611 31393 trace.go:81] Trace[269873276]: "Get /api/v1/namespaces/default" (started: 2019-11-08 10:01:37.631910347 +0800 CST m=+329024.654294682) (total time: 1.148554735s):
Nov 08 10:01:38 k8s-master-01-3.kxl kube-apiserver[31393]: Trace[269873276]: [1.148211464s] [1.148180236s] About to write a response

istio-pilot logs

# kubectl logs istio-pilot-569499d666-rfjsh  -n istio-system discovery
2019-11-08T07:26:14.097765Z info Handling event update for pod istio-security-post-install-1.2.8-c52np in namespace istio-system -> 172.30.112.9
2019-11-08T07:26:27.395268Z info Handling event update for pod istio-security-post-install-1.2.8-c52np in namespace istio-system -> 172.30.112.9
2019-11-08T07:26:38.227484Z info Client received GoAway with http2.ErrCodeEnhanceYourCalm.
2019-11-08T07:26:38.227760Z info pickfirstBalancer: HandleSubConnStateChange: 0xc0001fbaa0, CONNECTING
2019-11-08T07:26:38.228913Z info transport: loopyWriter.run returning. connection error: desc = "transport is closing"
2019-11-08T07:26:38.230352Z error mcp Error receiving MCP resource: rpc error: code = Unavailable desc = transport is closing
2019-11-08T07:26:38.230387Z error mcp Error receiving MCP response: rpc error: code = Unavailable desc = transport is closing
2019-11-08T07:26:38.235755Z info pickfirstBalancer: HandleSubConnStateChange: 0xc0001fbaa0, READY
2019-11-08T07:26:39.230701Z info mcp (re)trying to establish new MCP sink stream

istio-galley logs

# kubectl logs istio-galley-64f7d8cc97-8nbpc  -n istio-system
2019-11-08T07:23:38.860184Z info mcp MCP: connection {addr=172.30.104.7:57190 id=3} ACK collection=istio/rbac/v1alpha1/serviceroles with version="0" nonce="16" inc=false
2019-11-08T07:23:38.860197Z info mcp Watch(): created watch 28 for istio/rbac/v1alpha1/serviceroles from group "default", version "0"
2019-11-08T07:23:38.860217Z info mcp MCP: connection {addr=172.30.104.7:57190 id=3} ACK collection=istio/networking/v1alpha3/gateways with version="0" nonce="17" inc=false
2019-11-08T07:23:38.860268Z info mcp Watch(): created watch 29 for istio/networking/v1alpha3/gateways from group "default", version "0"
2019-11-08T07:26:38.227268Z info transport: Got too many pings from the client, closing the connection.
2019-11-08T07:26:38.227414Z info transport: loopyWriter.run returning. Err: transport: Connection closing
2019-11-08T07:26:38.228857Z info transport: http2Server.HandleStreams failed to read frame: read tcp 172.30.104.4:9901->172.30.104.7:57190: use of closed network connection
2019-11-08T07:26:38.229130Z error mcp MCP: connection {addr=172.30.104.7:57190 id=3}: TERMINATED with errors: rpc error: code = Canceled desc = context canceled
2019-11-08T07:26:38.229162Z info mcp MCP: connection {addr=172.30.104.7:57190 id=3}: CLOSED

        There was actually an earlier error, a TLS certificate problem, that I forgot to record. Going by the errors above I searched Google for a long time and went through all kinds of documents: during installation the apiserver validates the TLS certificate when calling the admission webhook, and that webhook call is what fails.
        Solution: the apiserver needs ValidatingAdmissionWebhook and MutatingAdmissionWebhook in its enable-admission-plugins configuration. Because a specific plugin list was passed at install time, these two were left out and therefore disabled; if the flag had not been set at all, they would have been enabled by default. The installation document has already been updated, refer to the configuration there. After changing the flag, restart kube-apiserver.
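        As a sketch of what that change looks like (the full plugin list depends on whatever your apiserver already enables; this list is only an example):

# in the kube-apiserver startup arguments, e.g. its systemd unit or config file
--enable-admission-plugins=NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook

# apply the change
systemctl daemon-reload
systemctl restart kube-apiserver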

Joining the master as a node

        If kubelet, flanneld, docker, and kube-proxy are not installed on the master node, the master cannot reach in-cluster services such as istio-sidecar-injector, so automatic injection fails. When injection is attempted, the deployment reports: Error creating: Internal error occurred: failed calling webhook "sidecar-injector.istio.io": Post https://istio-sidecar-injector.istio-system.svc:443/inject?timeout=30s: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Before chasing that error further, check from inside a container whether the address is reachable at all:

curl -vL  -k  https://istio-sidecar-injector.istio-system.svc:443/inject?timeout=30s

![img](https://img.xxlaila.cn/1D39819D6935D7496D07AC714D17A231.jpg)

There is a detailed write-up of these errors by others; see the reference notes on joining the master node to the cluster.

Verifying api-resources

kubectl api-resources | grep admissionregistration
mutatingwebhookconfigurations admissionregistration.k8s.io false MutatingWebhookConfiguration
validatingwebhookconfigurations admissionregistration.k8s.io false ValidatingWebhookConfiguration
  • Enable the admissionregistration.k8s.io/v1alpha1 API
    # kubectl api-versions | grep admissionregistration.k8s.io
    admissionregistration.k8s.io/v1beta1

        The command above checks whether the admissionregistration.k8s.io API is currently enabled. If admissionregistration.k8s.io/v1alpha1 does not exist, add --runtime-config=admissionregistration.k8s.io/v1alpha1 to the apiserver configuration.
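        For example, placed alongside the other kube-apiserver startup flags:

--runtime-config=admissionregistration.k8s.io/v1alpha1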

Redeploying

        Create a new Values.yaml parameter file. The one below started from someone else's reference config and was then adjusted through my own testing.

  • Values.yaml
    global:
      defaultResources:
        requests:
          cpu: 30m
          memory: 50Mi
        limits:
          cpu: 400m
          memory: 600Mi
      proxy:
        includeIPRanges: 10.244.0.0/16,10.254.0.0/16
        # Whether automatic sidecar injection is on. With enabled, every pod is injected unless it is annotated sidecar.istio.io/inject: "false"; with disabled, a pod must be annotated sidecar.istio.io/inject: "true" to get injected.
        # To test and learn with the official bookinfo, set this to enabled. If it is disabled, deploying the official bookinfo will not deploy the Sidecar (istio-proxy) and you have to wire it up by hand. The official default is enabled.
        autoInject: enabled
        resources:
          requests:
            cpu: 30m
            memory: 50Mi
          limits:
            cpu: 400m
            memory: 500Mi
      mtls:
        enabled: false

    sidecarInjectorWebhook:
      enabled: true
      # When true, automatic injection is enabled for all namespaces. When false, only namespaces labeled istio-injection get automatic injection.
      enableNamespacesByDefault: false
      rewriteAppHTTPProbe: false

    mixer:
      policy:
        enabled: true
      telemetry:
        enabled: true
        resources:
          requests:
            cpu: 100m
            memory: 300Mi
          limits:
            cpu: 1000m
            memory: 1024Mi

    pilot:
      enabled: true
      resources:
        requests:
          cpu: 100m
          memory: 300Mi
        limits:
          cpu: 1000m
          memory: 1024Mi

    gateways:
      enabled: true
      istio-ingressgateway:
        enabled: true
        type: NodePort
        resources:
          requests:
            cpu: 100m
            memory: 128Mi
          limits:
            cpu: 1000m
            memory: 1024Mi
      istio-egressgateway:
        enabled: true
        type: NodePort
        resources:
          requests:
            cpu: 100m
            memory: 128Mi
          limits:
            cpu: 1000m
            memory: 256Mi

    tracing:
      enabled: true
      jaeger:
        resources:
          limits:
            cpu: 300m
            memory: 900Mi
          requests:
            cpu: 30m
            memory: 100Mi
      zipkin:
        resources:
          limits:
            cpu: 300m
            memory: 900Mi
          requests:
            cpu: 30m
            memory: 100Mi
      contextPath: /
      ingress:
        enabled: true
        annotations:
          kubernetes.io/ingress.class: traefik
        hosts:
          - istio-tracing.xxlaila.cn

    kiali:
      enabled: true
      resources:
        limits:
          cpu: 300m
          memory: 900Mi
        requests:
          cpu: 30m
          memory: 50Mi
      hub: kiali
      contextPath: /
      ingress:
        enabled: true
        annotations:
          kubernetes.io/ingress.class: traefik
        hosts:
          - istio-kiali.xxlaila.cn
      dashboard:
        grafanaURL: http://grafana:3000
        jaegerURL: http://jaeger-query:16686

    grafana:
      enabled: true
      persist: true
      storageClassName: xxlaila-nfs-storage
      accessMode: ReadWriteMany
      resources:
        requests:
          cpu: 30m
          memory: 50Mi
        limits:
          cpu: 300m
          memory: 500Mi
      contextPath: /
      ingress:
        enabled: true
        annotations:
          kubernetes.io/ingress.class: traefik
        hosts:
          - istio-grafana.xxlaila.cn

    prometheus:
      resources:
        requests:
          cpu: 30m
          memory: 50Mi
        limits:
          cpu: 500m
          memory: 1024Mi
      retention: 3d
      contextPath: /
      ingress:
        enabled: true
        annotations:
          kubernetes.io/ingress.class: traefik
        hosts:
          - istio-prometheus.xxlaila.cn

    istio_cni:
      enabled: false


  • Accessing external services from Istio: by default the mesh cannot reach services outside it. There are three ways to allow external access (a minimal ServiceEntry sketch follows this list):
    • global.proxy.includeIPRanges: list the IP ranges whose outbound traffic is redirected through the proxy; traffic to anything outside those ranges leaves the mesh directly. The default is *, i.e. everything.
    • Set a pod annotation when creating the application: traffic.sidecar.istio.io/includeOutboundIPRanges: "127.0.0.1/24,10.244.0.1/24"
    • Create a ServiceEntry. Use this when external access has to be controlled through the egressgateway, typically because the cluster nodes themselves cannot reach the external network; if the cluster can reach outside directly, this is unnecessary.
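        As an illustration of the third option, a minimal ServiceEntry that admits HTTPS traffic to one external host (the host name here is a made-up example):

apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: external-api
spec:
  hosts:
  - api.example.com        # hypothetical external host
  location: MESH_EXTERNAL  # the service lives outside the mesh
  ports:
  - number: 443
    name: tls
    protocol: TLS
  resolution: DNS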

References
Official chart parameters

Installing Istio

  • Deploy the CRDs

    # helm install install/kubernetes/helm/istio-init --name istio-init --namespace istio-system
  • Deploy Istio

    # helm install ./install/kubernetes/helm/istio --name istio --namespace istio-system -f Values.yaml  --host=10.254.156.238:44134

        A note on the --host flag: while running helm install I hit portforward.go:178] lost connection to pod, Error: transport is closing. The IP is the ClusterIP of tiller-deploy:

# kubectl get svc -n kube-system tiller-deploy
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
tiller-deploy ClusterIP 10.254.156.238 <none> 44134/TCP 10d
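        Instead of passing --host on every helm invocation, Helm v2 can also read the Tiller address from an environment variable:

# export HELM_HOST=10.254.156.238:44134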
  • Deploy the kiali login credentials
    # cat >kiali-secret.yaml <<EOF
    apiVersion: v1
    kind: Secret
    metadata:
      name: kiali
      namespace: istio-system
      labels:
        app: kiali
    type: Opaque
    data:
      username: "YWRtaW4="
      passphrase: "YWRtaW4="
    EOF

    # kubectl apply -f kiali-secret.yaml

        The username and password are admin/admin; see the very beginning of the Istio deployment article.
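        The two data values are just base64-encoded strings, so different credentials can be generated the same way (a quick check, assuming admin/admin):

# echo -n 'admin' | base64
YWRtaW4=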

Checking the Deployment

# helm list --all
NAME REVISION UPDATED STATUS CHART APP VERSION NAMESPACE
istio 1 Fri Dec 13 09:23:59 2019 DEPLOYED istio-1.4.0 1.4.0 istio-system
istio-init 1 Fri Dec 13 09:22:56 2019 DEPLOYED istio-init-1.4.0 1.4.0 istio-system
# kubectl get pods -n istio-system
NAME READY STATUS RESTARTS AGE
grafana-6df449db94-496vn 1/1 Running 0 59m
istio-citadel-56bc45cd9-9tv99 1/1 Running 0 59m
istio-egressgateway-6646ddf7bd-vskqb 1/1 Running 0 59m
istio-galley-8466db4f-9pjfj 1/1 Running 0 59m
istio-ingressgateway-6ff999fc48-pdwj8 1/1 Running 0 59m
istio-init-crd-10-1.4.0-lzssz 0/1 Completed 0 61m
istio-init-crd-11-1.4.0-gp9cg 0/1 Completed 0 61m
istio-init-crd-14-1.4.0-2md46 0/1 Completed 0 61m
istio-pilot-7dbb475df9-fchzq 2/2 Running 3 59m
istio-policy-f8bb48d59-wsmvb 2/2 Running 3 59m
istio-sidecar-injector-9f4dbd594-r9tm6 1/1 Running 0 59m
istio-telemetry-5c57d8976c-8rmvc 2/2 Running 4 59m
istio-telemetry-5c57d8976c-gt8zt 2/2 Running 0 3m45s
istio-tracing-567bc5c88f-gtpfl 1/1 Running 0 59m
kiali-77b68664b7-pdvck 1/1 Running 0 59m
prometheus-575dbff696-s62dw 1/1 Running 0 59m

Deploying the Official bookinfo

        Automatic injection is used.
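        Injection is driven by the istio-injection namespace label, which the first command below sets on default; at any point you can check which namespaces have it:

# kubectl get namespace -L istio-injection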

Deploy the pods and services

# kubectl label namespace default istio-injection=enabled
namespace/default labeled

# kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml

# kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
details ClusterIP 10.254.148.138 <none> 9080/TCP 15s
kubernetes ClusterIP 10.254.0.1 <none> 443/TCP 15d
productpage ClusterIP 10.254.183.24 <none> 9080/TCP 11s
ratings ClusterIP 10.254.185.74 <none> 9080/TCP 15s
reviews ClusterIP 10.254.180.76 <none> 9080/TCP 13s

# kubectl get pods
NAME READY STATUS RESTARTS AGE
details-v1-c5b5f496d-ztml6 1/1 Running 0 18s
productpage-v1-c7765c886-592sd 0/1 ContainerCreating 0 13s
ratings-v1-f745cf57b-8d7h2 1/1 Running 0 18s
reviews-v1-75b979578c-nrj48 1/1 Running 0 15s
reviews-v2-597bf96c8f-tvc5v 1/1 Running 0 16s
reviews-v3-54c6c64795-75qgp 1/1 Running 0 16s

Deploy the gateway

# kubectl apply -f samples/bookinfo/networking/bookinfo-gateway.yaml
gateway.networking.istio.io/bookinfo-gateway created
virtualservice.networking.istio.io/bookinfo created

# kubectl get gateway
NAME AGE
bookinfo-gateway 4s

# kubectl apply -f samples/bookinfo/networking/destination-rule-all.yaml
destinationrule.networking.istio.io/productpage created
destinationrule.networking.istio.io/reviews created
destinationrule.networking.istio.io/ratings created
destinationrule.networking.istio.io/details created

Verify

# kubectl get pods
NAME READY STATUS RESTARTS AGE
details-v1-c5b5f496d-75vcp 2/2 Running 0 11m
productpage-v1-c7765c886-lh7hc 2/2 Running 0 11m
ratings-v1-f745cf57b-6hdd9 2/2 Running 0 11m
reviews-v1-75b979578c-n7dvn 2/2 Running 0 11m
reviews-v2-597bf96c8f-fptt2 2/2 Running 0 11m
reviews-v3-54c6c64795-fn74z 2/2 Running 0 11m

        Open http://<node-ip>:31380/productpage in a browser against any node; once the page renders, the Istio deployment errors are resolved.
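        The same check can be scripted; this mirrors the check used in the Istio docs (replace <node-ip> with a real node address):

# curl -s http://<node-ip>:31380/productpage | grep -o "<title>.*</title>"
<title>Simple Bookstore App</title>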

Verifying the Routing Rules

        Default destination rules were already created during the initial setup. Now test the default rules shipped with the official samples.
        First, shift all traffic to the v3 version by applying virtual-service-reviews-v3.yaml:

# kubectl apply -f virtual-service-reviews-v3.yaml

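        For reference, the core of that sample file routes every request for reviews to the v3 subset; it looks roughly like this (paraphrased from the stock Istio sample, so check the copy in your release):

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v3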

        Next, make one particular logged-in user see the v2 version while everyone else sees v3 by default, by applying virtual-service-reviews-jason-v2-v3.yaml:

# kubectl apply -f virtual-service-reviews-jason-v2-v3.yaml

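        For reference, the match rule in that sample sends requests carrying the end-user: jason header to v2 and everything else to v3, roughly (again paraphrased from the stock sample):

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - match:
    - headers:
        end-user:
          exact: jason
    route:
    - destination:
        host: reviews
        subset: v2
  - route:
    - destination:
        host: reviews
        subset: v3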
