The ELK logging stack should be familiar to you. Zipkin + Jaeger and Prometheus + Grafana already cover tracing and metrics collection, but actually storing and analyzing logs still calls for a dedicated stack. In Istio, the officially provided solution is EFK (Fluentd + Elasticsearch + Kibana): Fluentd is an open-source log collector with a pluggable architecture that supports many data outputs, Elasticsearch is a popular backend for storing logs, and Kibana is used for viewing them.
Links:
喵了个咪's blog: w-blog.cn
Istio official site: https://preliminary.istio.io/zh
Istio Chinese docs: https://preliminary.istio.io/zh/docs/
PS: This walkthrough builds and demonstrates everything on the current latest Istio release, version 1.0.3.
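Before starting, you can confirm which version is actually deployed in your cluster. A quick check, assuming istioctl is on your PATH and Istio was installed into the istio-system namespace:

```
istioctl version
kubectl -n istio-system get pods
```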
We'll deploy a non-production set of Services and Deployments for Fluentd, Elasticsearch, and Kibana, all in a new Namespace called logging.
> vim logging-stack.yaml

```yaml
# Logging Namespace. All below are a part of this namespace.
apiVersion: v1
kind: Namespace
metadata:
  name: logging
---
# Elasticsearch Service
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch
  namespace: logging
  labels:
    app: elasticsearch
spec:
  ports:
  - port: 9200
    protocol: TCP
    targetPort: db
  selector:
    app: elasticsearch
---
# Elasticsearch Deployment
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: elasticsearch
  namespace: logging
  labels:
    app: elasticsearch
  annotations:
    sidecar.istio.io/inject: "false"
spec:
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      containers:
      - image: docker.elastic.co/elasticsearch/elasticsearch-oss:6.1.1
        name: elasticsearch
        resources:
          # need more cpu upon initialization, therefore burstable class
          limits:
            cpu: 1000m
          requests:
            cpu: 100m
        env:
        - name: discovery.type
          value: single-node
        ports:
        - containerPort: 9200
          name: db
          protocol: TCP
        - containerPort: 9300
          name: transport
          protocol: TCP
        volumeMounts:
        - name: elasticsearch
          mountPath: /data
      volumes:
      - name: elasticsearch
        emptyDir: {}
---
# Fluentd Service
apiVersion: v1
kind: Service
metadata:
  name: fluentd-es
  namespace: logging
  labels:
    app: fluentd-es
spec:
  ports:
  - name: fluentd-tcp
    port: 24224
    protocol: TCP
    targetPort: 24224
  - name: fluentd-udp
    port: 24224
    protocol: UDP
    targetPort: 24224
  selector:
    app: fluentd-es
---
# Fluentd Deployment
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: fluentd-es
  namespace: logging
  labels:
    app: fluentd-es
  annotations:
    sidecar.istio.io/inject: "false"
spec:
  template:
    metadata:
      labels:
        app: fluentd-es
    spec:
      containers:
      - name: fluentd-es
        image: gcr.io/google-containers/fluentd-elasticsearch:v2.0.1
        env:
        - name: FLUENTD_ARGS
          value: --no-supervisor -q
        resources:
          limits:
            memory: 500Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: config-volume
          mountPath: /etc/fluent/config.d
      terminationGracePeriodSeconds: 30
      volumes:
      - name: config-volume
        configMap:
          name: fluentd-es-config
---
# Fluentd ConfigMap, contains config files.
kind: ConfigMap
apiVersion: v1
data:
  forward.input.conf: |-
    # Takes the messages sent over TCP
    <source>
      type forward
    </source>
  output.conf: |-
    <match **>
       type elasticsearch
       log_level info
       include_tag_key true
       host elasticsearch
       port 9200
       logstash_format true
       # Set the chunk limits.
       buffer_chunk_limit 2M
       buffer_queue_limit 8
       flush_interval 5s
       # Never wait longer than 5 minutes between retries.
       max_retry_wait 30
       # Disable the limit on the number of retries (retry forever).
       disable_retry_limit
       # Use multiple threads for processing.
       num_threads 2
    </match>
metadata:
  name: fluentd-es-config
  namespace: logging
---
# Kibana Service
apiVersion: v1
kind: Service
metadata:
  name: kibana
  namespace: logging
  labels:
    app: kibana
spec:
  ports:
  - port: 5601
    protocol: TCP
    targetPort: ui
  selector:
    app: kibana
---
# Kibana Deployment
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kibana
  namespace: logging
  labels:
    app: kibana
  annotations:
    sidecar.istio.io/inject: "false"
spec:
  template:
    metadata:
      labels:
        app: kibana
    spec:
      containers:
      - name: kibana
        image: docker.elastic.co/kibana/kibana-oss:6.1.1
        resources:
          # need more cpu upon initialization, therefore burstable class
          limits:
            cpu: 1000m
          requests:
            cpu: 100m
        env:
        - name: ELASTICSEARCH_URL
          value: http://elasticsearch:9200
        ports:
        - containerPort: 5601
          name: ui
          protocol: TCP
---
```
Create the resources:
kubectl apply -f logging-stack.yaml
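Before continuing, it's worth waiting until all three Deployments are up. A quick check (the pod name suffixes are illustrative):

```
kubectl -n logging get pods
# elasticsearch-*, fluentd-es-* and kibana-* should all reach Running
```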
Now that a Fluentd daemon is running, configure Istio with a new log type and send those logs to the listening daemon.
Create a new YAML file to hold the configuration for the log stream that Istio will generate and collect automatically:
> vim fluentd-istio.yaml

```yaml
# Configuration for the logentry instances
apiVersion: "config.istio.io/v1alpha2"
kind: logentry
metadata:
  name: newlog
  namespace: istio-system
spec:
  severity: '"info"'
  timestamp: request.time
  variables:
    source: source.labels["app"] | source.workload.name | "unknown"
    user: source.user | "unknown"
    destination: destination.labels["app"] | destination.workload.name | "unknown"
    responseCode: response.code | 0
    responseSize: response.size | 0
    latency: response.duration | "0ms"
  monitored_resource_type: '"UNSPECIFIED"'
---
# Configuration for the fluentd handler
apiVersion: "config.istio.io/v1alpha2"
kind: fluentd
metadata:
  name: handler
  namespace: istio-system
spec:
  address: "fluentd-es.logging:24224"
---
# Rule that sends logentry instances to the fluentd handler
apiVersion: "config.istio.io/v1alpha2"
kind: rule
metadata:
  name: newlogtofluentd
  namespace: istio-system
spec:
  match: "true" # match for all requests
  actions:
  - handler: handler.fluentd
    instances:
    - newlog.logentry
---
```
PS: The address: "fluentd-es.logging:24224" line in the handler configuration points at the Fluentd daemon in the example stack we set up above.
Make it take effect:
kubectl apply -f fluentd-istio.yaml
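To verify that Mixer accepted the new config, you can list the created resources. A quick check, assuming the config.istio.io CRDs were registered by the Istio install (the fully-qualified names avoid clashing with other resource types):

```
kubectl -n istio-system get logentries.config.istio.io,fluentds.config.istio.io,rules.config.istio.io
```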
Let's first send a few requests to our Bookinfo sample application, then access Kibana the usual way via port forwarding:
kubectl -n logging port-forward $(kubectl -n logging get pod -l app=kibana -o jsonpath='{.items[0].metadata.name}') 5601:5601
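With the port forward in place, Kibana is reachable at http://localhost:5601. To have something to look at, generate a little traffic first; a minimal loop, assuming GATEWAY_URL was exported during the Bookinfo setup:

```
# Hit the Bookinfo product page a few times so log entries get produced
for i in 1 2 3 4 5; do curl -s -o /dev/null http://$GATEWAY_URL/productpage; done
```

In the Kibana UI, create an index pattern matching logstash-* (the logstash_format option in the Fluentd output produces daily logstash-YYYY.MM.DD indices); the newlog entries should then show up in Discover.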
PS: It is recommended to deploy Elasticsearch and Kibana separately, outside the cluster, since Elasticsearch has high storage and resource requirements.
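If you do run Elasticsearch externally, the only pieces of the stack above that need to change are the Fluentd output target and Kibana's ELASTICSEARCH_URL. A minimal sketch of the relevant fluentd-es-config ConfigMap fragment, where es.example.com is a hypothetical external address:

```yaml
  # Fragment of the fluentd-es-config ConfigMap from logging-stack.yaml;
  # es.example.com is a hypothetical external Elasticsearch host.
  output.conf: |-
    <match **>
       type elasticsearch
       host es.example.com   # external host instead of the in-cluster Service
       port 9200
       logstash_format true
    </match>
```

Kibana's ELASTICSEARCH_URL environment variable would point at the same external address.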