Installing Sentry with Helm

The --kubeconfig ~/.kube/sentry flag used throughout points helm and kubectl at a specific Kubernetes cluster's config file. If you do not need to select a cluster this way, simply drop the flag.
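
If you would rather not repeat the flag on every command, the same file can be supplied through the standard KUBECONFIG environment variable (standard kubectl/helm behavior, not part of the original steps):

export KUBECONFIG=~/.kube/sentry
kubectl get nodes   # now targets the sentry cluster without --kubeconfig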

1. Install Helm
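
The original post does not spell this step out. One common approach is to download a Helm 3 release binary from the official download host (the version and platform below are only examples; pick whatever is current):

# example for Linux amd64; see https://github.com/helm/helm/releases for current versions
curl -fsSLO https://get.helm.sh/helm-v3.8.2-linux-amd64.tar.gz
tar -zxvf helm-v3.8.2-linux-amd64.tar.gz
sudo mv linux-amd64/helm /usr/local/bin/helm
helm version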

2. Add the chart repository mirrors

helm repo add stable http://mirror.azure.cn/kubernetes/charts
helm repo add incubator http://mirror.azure.cn/kubernetes/charts-incubator
helm repo update

3. Verify the repository

helm search repo sentry
#NAME                                 CHART VERSION    APP VERSION    DESCRIPTION
#stable/sentry                        4.2.0            9.1.2          Sentry is a cross-platform crash reporting and ...
# If stable/sentry shows up, the repository is working

4. Create the Kubernetes namespace

kubectl create namespace sentry

5. Install Sentry

helm --kubeconfig ~/.kube/sentry install sentry stable/sentry \
-n sentry \
--set persistence.enabled=true,user.email=ltz@qq.com,user.password=ltz \
--set ingress.enabled=true,ingress.hostname=sentry.ltz.com,service.type=ClusterIP \
--set email.host=smtp.exmail.qq.com,email.port=465 \
--set email.user=ltz@ltz.com,email.password=ltz,email.use_tls=false \
--wait

Parameter reference

Parameter                                  Description                                                                  Required
--kubeconfig ~/.kube/sentry                kubeconfig file; selects which Kubernetes cluster to operate on             yes
user.email                                 administrator email                                                          yes
user.password                              administrator password                                                       yes
ingress.hostname                           the domain for Sentry (events must be reported through this domain)         yes
email.host, email.port                     SMTP server address and port                                                 yes
email.user, email.password                 the mailbox Sentry sends mail from                                           yes
email.use_tls                              whether TLS is required; check your mail provider's settings                yes
redis.primary.persistence.storageClass     StorageClass for Redis (optional; I set it because I had no PV/PVC)         no
postgresql.persistence.storageClass        StorageClass for PostgreSQL (optional; I set it because I had no PV/PVC)    no
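
The same settings can also be kept in a values file instead of a long chain of --set flags (a sketch; the keys mirror the flags above, adjust the values to your environment):

# values.yaml
persistence:
  enabled: true
user:
  email: ltz@qq.com
  password: ltz
ingress:
  enabled: true
  hostname: sentry.ltz.com
service:
  type: ClusterIP
email:
  host: smtp.exmail.qq.com
  port: 465
  user: ltz@ltz.com
  password: ltz
  use_tls: false

helm --kubeconfig ~/.kube/sentry install sentry stable/sentry -n sentry -f values.yaml --wait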

If the installation succeeds, you should see three Deployments and three StatefulSets come up. After a short wait, the domain name will serve the Sentry UI.
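
A quick way to check the workloads (not in the original post):

kubectl --kubeconfig ~/.kube/sentry get deploy,statefulset,pods -n sentry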

6. Uninstall Sentry

helm --kubeconfig ~/.kube/sentry uninstall sentry -n sentry
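
Note that uninstalling the release does not necessarily remove PVCs created by the Redis and PostgreSQL StatefulSets. If you want a clean reinstall, check for and delete them (a suggestion, not part of the original steps):

kubectl --kubeconfig ~/.kube/sentry get pvc -n sentry
kubectl --kubeconfig ~/.kube/sentry delete pvc --all -n sentry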

7. Installation pitfall: unbound PVCs

After installing, my Redis and PostgreSQL pods would not start and kept reporting:

Pending: pod has unbound immediate PersistentVolumeClaims

In other words, the PVCs could not be bound, so the pods could not start.
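
To see which claims are stuck (a diagnostic step, not in the original post):

kubectl --kubeconfig ~/.kube/sentry get pvc -n sentry
kubectl --kubeconfig ~/.kube/sentry describe pvc -n sentry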

Fix

1. Uninstall Sentry first

2. Install a StorageClass

The YAML is long, so it is pasted at the end of this post.

Run the following from the directory containing the YAML:

kubectl --kubeconfig ~/.kube/sentry apply -f local-path-storage.yaml 

# set local-path as the default StorageClass
kubectl --kubeconfig ~/.kube/sentry patch storageclass local-path -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
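
To confirm the default marker took effect (a quick check, not part of the original steps):

kubectl --kubeconfig ~/.kube/sentry get storageclass
# NAME                   PROVISIONER             ...
# local-path (default)   rancher.io/local-path   ...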

3. Reinstall Sentry

With the storageClass parameters added:
helm --kubeconfig ~/.kube/sentry install sentry stable/sentry \
-n sentry \
--set persistence.enabled=true,user.email=ltz@qq.com,user.password=ltz \
--set ingress.enabled=true,ingress.hostname=sentry.ltz.com,service.type=ClusterIP \
--set email.host=smtp.exmail.qq.com,email.port=465 \
--set email.user=ltz@ltz.com,email.password=ltz,email.use_tls=false \
--set redis.primary.persistence.storageClass=local-path \
--set postgresql.persistence.storageClass=local-path \
--wait

4. Visit the domain; the page now loads normally.

8. Database pitfall: no automatic initialization

Normally the database is initialized automatically at startup. Mine was not, so I had to log in to the sentry-web pod and run the initialization command manually.

kubectl --kubeconfig ~/.kube/sentry exec -it -n sentry $(kubectl --kubeconfig ~/.kube/sentry get pods -n sentry |grep sentry-web |awk '{print $1}') bash
sentry upgrade
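
sentry upgrade runs the schema migrations and, when run interactively, may also offer to create the initial user. If you want it fully non-interactive, the --noinput flag should work on this Sentry version (verify with sentry upgrade --help in your pod):

sentry upgrade --noinput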

9. Admin-user pitfall

As above, if the administrator account was not created automatically, you can create it manually on sentry-web.

kubectl exec -it -n sentry $(kubectl get pods  -n sentry  |grep sentry-web |awk '{print $1}') bash
sentry createuser
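
sentry createuser prompts for the details; it can also be scripted with flags (flag names as I recall them from the Sentry 9.x CLI, and the credentials below are placeholders; verify with sentry createuser --help):

# placeholder credentials; replace with your own
sentry createuser --email admin@example.com --password changeme --superuser --no-input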

10. Email pitfall

The email parameters in the install command must be correct, and the corresponding environment variables in the pods must also be configured correctly.

Environment variables on sentry-web:

- name: SENTRY_EMAIL_HOST
  value: smtp.exmail.qq.com
- name: SENTRY_EMAIL_PORT
  value: "465"
- name: SENTRY_EMAIL_USER
  value: ltz@ltz.com
- name: SENTRY_EMAIL_PASSWORD
  valueFrom:
    secretKeyRef:
      key: smtp-password
      name: sentry
      optional: false
- name: SENTRY_EMAIL_USE_TLS
  value: "false"
- name: SENTRY_SERVER_EMAIL
  value: ltz@ltz.com

Environment variables on sentry-worker:

- name: SENTRY_EMAIL_HOST
  value: smtp.exmail.qq.com
- name: SENTRY_EMAIL_PORT
  value: "587"
- name: SENTRY_EMAIL_USER
  value: ltz@ltz.com
- name: SENTRY_EMAIL_PASSWORD
  valueFrom:
    secretKeyRef:
      key: smtp-password
      name: sentry
      optional: false
- name: SENTRY_EMAIL_USE_TLS
  value: "true"
- name: SENTRY_SERVER_EMAIL
  value: ltz@ltz.com
- name: SENTRY_EMAIL_USE_SSL
  value: "false"

After configuring, send a test email; if it does not arrive, check the sentry-worker logs.
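
The logs can be tailed with something like this (not in the original post; adjust the deployment name to match your cluster):

kubectl --kubeconfig ~/.kube/sentry logs -n sentry deploy/sentry-worker --tail=100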

In my testing, the SENTRY_SERVER_EMAIL value that actually takes effect is the one set in sentry-web's environment variables. After changing the variables, restart both deployments.
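
One way to restart both (requires kubectl 1.15 or newer; otherwise just delete the pods and let the Deployments recreate them):

kubectl --kubeconfig ~/.kube/sentry rollout restart -n sentry deploy/sentry-web deploy/sentry-worker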

11. local-path-storage.yaml (replace the names and namespace as needed)

apiVersion: v1
kind: Namespace
metadata:
  name: local-path-storage

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: local-path-provisioner-service-account
  namespace: local-path-storage

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: local-path-provisioner-role
rules:
  - apiGroups: [ "" ]
    resources: [ "nodes", "persistentvolumeclaims", "configmaps" ]
    verbs: [ "get", "list", "watch" ]
  - apiGroups: [ "" ]
    resources: [ "endpoints", "persistentvolumes", "pods" ]
    verbs: [ "*" ]
  - apiGroups: [ "" ]
    resources: [ "events" ]
    verbs: [ "create", "patch" ]
  - apiGroups: [ "storage.k8s.io" ]
    resources: [ "storageclasses" ]
    verbs: [ "get", "list", "watch" ]

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: local-path-provisioner-bind
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: local-path-provisioner-role
subjects:
  - kind: ServiceAccount
    name: local-path-provisioner-service-account
    namespace: local-path-storage

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: local-path-provisioner
  namespace: local-path-storage
spec:
  replicas: 1
  selector:
    matchLabels:
      app: local-path-provisioner
  template:
    metadata:
      labels:
        app: local-path-provisioner
    spec:
      serviceAccountName: local-path-provisioner-service-account
      containers:
        - name: local-path-provisioner
          image: rancher/local-path-provisioner:v0.0.19
          imagePullPolicy: IfNotPresent
          command:
            - local-path-provisioner
            - --debug
            - start
            - --config
            - /etc/config/config.json
          volumeMounts:
            - name: config-volume
              mountPath: /etc/config/
          env:
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
      volumes:
        - name: config-volume
          configMap:
            name: local-path-config

---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-path
provisioner: rancher.io/local-path
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Delete

---
kind: ConfigMap
apiVersion: v1
metadata:
  name: local-path-config
  namespace: local-path-storage
data:
  config.json: |-
    {
      "nodePathMap":[
        {
          "node":"DEFAULT_PATH_FOR_NON_LISTED_NODES",
          "paths":["/opt/local-path-provisioner"]
        }
      ]
    }
  setup: |-
    #!/bin/sh
    while getopts "m:s:p:" opt
    do
      case $opt in
        p) absolutePath=$OPTARG ;;
        s) sizeInBytes=$OPTARG ;;
        m) volMode=$OPTARG ;;
      esac
    done
    mkdir -m 0777 -p ${absolutePath}
  teardown: |-
    #!/bin/sh
    while getopts "m:s:p:" opt
    do
      case $opt in
        p) absolutePath=$OPTARG ;;
        s) sizeInBytes=$OPTARG ;;
        m) volMode=$OPTARG ;;
      esac
    done
    rm -rf ${absolutePath}
  helperPod.yaml: |-
    apiVersion: v1
    kind: Pod
    metadata:
      name: helper-pod
    spec:
      containers:
        - name: helper-pod
          image: busybox
          imagePullPolicy: IfNotPresent
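
Once the manifest is applied, the provisioner pod and the StorageClass can be verified with (a quick check, not in the original post):

kubectl --kubeconfig ~/.kube/sentry get pods -n local-path-storage
kubectl --kubeconfig ~/.kube/sentry get storageclass local-path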