Kubernetes in Practice: Creating a User with Read-Only Permissions

Series table of contents

In the previous section we looked at how to restrict a user's access to the dashboard. In this section we walk through a concrete case: how to create a user with read-only permissions.

Although you can flexibly create users with all kinds of permissions depending on your situation, in a real production environment you usually only need two: the user with full cluster permissions created earlier, and an ordinary user with read-only permissions. Granting the read-only permissions to developers lets them see clearly how their own projects are running.

Before reading on, you may want to think about how to achieve this with what we have covered so far. You probably have some ideas, but actually implementing them is not that easy and may take several rounds of changes and testing. In fact, Kubernetes ships with a default ClusterRole called view, which is exactly a role with read-only permissions. Let's take a look at it:

[centos@k8s-master ~]$ kubectl describe clusterrole view
Name:         view
Labels:       kubernetes.io/bootstrapping=rbac-defaults
              rbac.authorization.k8s.io/aggregate-to-edit=true
Annotations:  rbac.authorization.kubernetes.io/autoupdate: true
PolicyRule:
  Resources                                Non-Resource URLs  Resource Names  Verbs
  ---------                                -----------------  --------------  -----
  bindings                                 []                 []              [get list watch]
  configmaps                               []                 []              [get list watch]
  endpoints                                []                 []              [get list watch]
  events                                   []                 []              [get list watch]
  limitranges                              []                 []              [get list watch]
  namespaces/status                        []                 []              [get list watch]
  namespaces                               []                 []              [get list watch]
  persistentvolumeclaims                   []                 []              [get list watch]
  pods/log                                 []                 []              [get list watch]
  pods/status                              []                 []              [get list watch]
  pods                                     []                 []              [get list watch]
  replicationcontrollers/scale             []                 []              [get list watch]
  replicationcontrollers/status            []                 []              [get list watch]
  replicationcontrollers                   []                 []              [get list watch]
  resourcequotas/status                    []                 []              [get list watch]
  resourcequotas                           []                 []              [get list watch]
  serviceaccounts                          []                 []              [get list watch]
  services                                 []                 []              [get list watch]
  controllerrevisions.apps                 []                 []              [get list watch]
  daemonsets.apps                          []                 []              [get list watch]
  deployments.apps/scale                   []                 []              [get list watch]
  deployments.apps                         []                 []              [get list watch]
  replicasets.apps/scale                   []                 []              [get list watch]
  replicasets.apps                         []                 []              [get list watch]
  statefulsets.apps/scale                  []                 []              [get list watch]
  statefulsets.apps                        []                 []              [get list watch]
  horizontalpodautoscalers.autoscaling     []                 []              [get list watch]
  cronjobs.batch                           []                 []              [get list watch]
  jobs.batch                               []                 []              [get list watch]
  daemonsets.extensions                    []                 []              [get list watch]
  deployments.extensions/scale             []                 []              [get list watch]
  deployments.extensions                   []                 []              [get list watch]
  ingresses.extensions                     []                 []              [get list watch]
  networkpolicies.extensions               []                 []              [get list watch]
  replicasets.extensions/scale             []                 []              [get list watch]
  replicasets.extensions                   []                 []              [get list watch]
  replicationcontrollers.extensions/scale  []                 []              [get list watch]
  networkpolicies.networking.k8s.io        []                 []              [get list watch]
  poddisruptionbudgets.policy              []                 []              [get list watch]
[centos@k8s-master ~]$

As you can see, the verbs it has on every resource it covers are get, list and watch, i.e. no write operations at all. So, just as we initially bound a user to cluster-admin, we can create a new user and bind it to the default view role:

kubectl create  sa dashboard-readonly   -n  kube-system
kubectl create  clusterrolebinding dashboard-readonly --clusterrole=view --serviceaccount=kube-system:dashboard-readonly

The commands above create a user (ServiceAccount) named dashboard-readonly and bind it to the view role. We can then inspect the user's secret with kubectl describe secret -n kube-system dashboard-readonly-token-<random suffix> (list all the secrets with kubectl get secret -n kube-system and pick the right one). The secret contains a token; copy that token into the dashboard login page to sign in.
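For example (the suffix of the secret name is generated randomly, so look it up first; the commands below are a sketch assuming the default token-secret naming scheme):

# find the exact secret name that belongs to the dashboard-readonly ServiceAccount
kubectl get secret -n kube-system | grep dashboard-readonly-token
# print the secret, including the token, replacing <random-suffix> with the real value
kubectl describe secret -n kube-system dashboard-readonly-token-<random-suffix>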

[image: logging in to the dashboard with the dashboard-readonly token]

Open any deployment and you will see that the scale, edit and delete actions are still shown in the top-left corner. Don't worry: if you try edit or scale, the operation silently fails even though there is no prompt, and if you click delete, an error appears, as shown below, saying that the dashboard-readonly user has no permission to delete.

[image: error prompt showing that dashboard-readonly is not allowed to delete]
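You can also verify this from the command line without touching the dashboard. A quick sketch using kubectl auth can-i with impersonation (run it with your cluster-admin kubeconfig; the default namespace here is just an example):

kubectl auth can-i delete deployments -n default \
  --as=system:serviceaccount:kube-system:dashboard-readonly
# expected answer: no
kubectl auth can-i list pods -n default \
  --as=system:serviceaccount:kube-system:dashboard-readonly
# expected answer: yes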

Creating a truly read-only user by hand

Above we created a read-only user by binding it to the view role, but in practice you will find that it is not a fully read-only user: it lacks permissions on some cluster-level resources such as Nodes and Persistent Volumes. For example, if you click the Nodes tab on the left, the following message appears:

[image: forbidden message when opening the Nodes page]
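The same check from the command line confirms it (again a sketch using impersonation):

kubectl auth can-i list nodes \
  --as=system:serviceaccount:kube-system:dashboard-readonly
# expected answer: no, because the view role does not cover cluster-scoped resources such as nodes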

Below we create a user that has read-only permissions on cluster-level resources as well.

First, create a ServiceAccount named dashboard-real-readonly:

kubectl create  sa dashboard-real-readonly  -n  kube-system

Next, create a ClusterRole called dashboard-viewonly:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: dashboard-viewonly
rules:
- apiGroups:
  - ""
  resources:
  - configmaps
  - endpoints
  - persistentvolumeclaims
  - pods
  - replicationcontrollers
  - replicationcontrollers/scale
  - serviceaccounts
  - services
  - nodes
  - persistentvolumes
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - bindings
  - events
  - limitranges
  - namespaces/status
  - pods/log
  - pods/status
  - replicationcontrollers/status
  - resourcequotas
  - resourcequotas/status
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - namespaces
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - apps
  resources:
  - daemonsets
  - deployments
  - deployments/scale
  - replicasets
  - replicasets/scale
  - statefulsets
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - autoscaling
  resources:
  - horizontalpodautoscalers
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - batch
  resources:
  - cronjobs
  - jobs
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - extensions
  resources:
  - daemonsets
  - deployments
  - deployments/scale
  - ingresses
  - networkpolicies
  - replicasets
  - replicasets/scale
  - replicationcontrollers/scale
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - policy
  resources:
  - poddisruptionbudgets
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - networking.k8s.io
  resources:
  - networkpolicies
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - storage.k8s.io
  resources:
  - storageclasses
  - volumeattachments
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - rbac.authorization.k8s.io
  resources:
  - clusterrolebindings
  - clusterroles
  - roles
  - rolebindings
  verbs:
  - get
  - list
  - watch
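Save the manifest above to a file (for example dashboard-viewonly-clusterrole.yaml; the file name is arbitrary) and apply it:

kubectl apply -f dashboard-viewonly-clusterrole.yaml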

Then bind it to the dashboard-real-readonly ServiceAccount:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: dashboard-viewonly
subjects:
- kind: ServiceAccount
  name: dashboard-real-readonly
  namespace: kube-system
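Likewise, save the binding to a file (for example dashboard-real-readonly-binding.yaml) and apply it:

kubectl apply -f dashboard-real-readonly-binding.yaml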

All that is left is to fetch this user's token and log in with it. We have gone through this several times already, including earlier in this section, so it is not repeated here.
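For reference, a one-liner along the lines of the following (assuming the token secret's name contains dashboard-real-readonly-token) prints the secret together with its token:

kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep dashboard-real-readonly-token | awk '{print $1}')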
