Winse Blog

Walking and stopping, amid the hustle and bustle, busy all along, not knowing what to fear.

Try K8s

1. Log in and configure host information:

$ hostnamectl --static set-hostname master-1

$ cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.251.51 master-1
192.168.251.50 node-1
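
As a quick sanity check, the hosts entries above can be verified with a short loop. This is a sketch: it reads a throwaway copy of the entries so it can run anywhere; on the real machines, point `HOSTS` at /etc/hosts instead.

```shell
# Confirm each cluster hostname maps to the intended IP.
# HOSTS is a temp copy of the two entries; use /etc/hosts on a real node.
HOSTS=$(mktemp)
cat > "$HOSTS" <<'EOF'
192.168.251.51 master-1
192.168.251.50 node-1
EOF
for h in master-1 node-1; do
  awk -v h="$h" '$2 == h {print h, "->", $1}' "$HOSTS"
done
```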

2. Install Docker

bash <<EOF
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
yum clean all
yum makecache

## docker version:(Version:           18.09.3)
# https://kubernetes.io/docs/setup/release/notes/#external-dependencies
# https://docs.docker.com/install/linux/docker-ce/centos/

yum remove docker \
  docker-client \
  docker-client-latest \
  docker-common \
  docker-latest \
  docker-latest-logrotate \
  docker-logrotate \
  docker-engine

yum install -y yum-utils \
  device-mapper-persistent-data \
  lvm2

yum-config-manager \
    --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo
yum install -y docker-ce docker-ce-cli containerd.io

yum list docker-ce --showduplicates | sort -r

systemctl enable docker
systemctl start docker

systemctl disable firewalld
service firewalld stop

sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config 
setenforce 0
EOF
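
One thing worth doing right after installing Docker: kubeadm's preflight check prefers the systemd cgroup driver over Docker's default cgroupfs (it warns about this during init). A hedged sketch, writing to a temp directory here; on a real host the file is /etc/docker/daemon.json, followed by `systemctl restart docker`:

```shell
# Sketch: configure Docker to use the systemd cgroup driver.
# DOCKER_ETC stands in for /etc/docker so the snippet is safe to run anywhere.
DOCKER_ETC=$(mktemp -d)
cat > "$DOCKER_ETC/daemon.json" <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
grep -q 'native.cgroupdriver=systemd' "$DOCKER_ETC/daemon.json" && echo configured
```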

3. Set up a proxy (to get past the firewall)

You need a host located outside China!

ssh -NC -D 1080 9.9.9.9 -p 88888

curl --socks5-hostname 127.0.0.1:1080 www.google.com

mkdir /etc/systemd/system/docker.service.d
cat > /etc/systemd/system/docker.service.d/socks5-proxy.conf <<EOF
[Service]
Environment="ALL_PROXY=socks5://127.0.0.1:1080" "NO_PROXY=localhost,127.0.0.1,10.0.0.0/8,192.168.0.0/16"
EOF

systemctl daemon-reload
systemctl restart docker

# cache rpm
sed -i 's/keepcache=0/keepcache=1/' /etc/yum.conf 

4. Install K8S

https://kubernetes.io/docs/setup/independent/install-kubeadm/

Add the repo, with proxy settings

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kube*
proxy=socks5://127.0.0.1:1080
EOF


## yum.conf allows you to have per-repository settings as well as global ([main]) settings;
## the proxy can also be defined in an individual repo's configuration!
##sed '$a\\nproxy=socks5://127.0.0.1:1080' /etc/yum.conf
## proxy=_none_

  
# Set SELinux in permissive mode (effectively disabling it)
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes

systemctl enable --now kubelet

5. Configure K8S

5.1 Pull the images first

$ kubeadm config images pull
I0409 00:04:13.693615   18479 version.go:96] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get https://dl.k8s.io/release/stable-1.txt: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
I0409 00:04:13.694196   18479 version.go:97] falling back to the local client version: v1.14.0
[config/images] Pulled k8s.gcr.io/kube-apiserver:v1.14.0
[config/images] Pulled k8s.gcr.io/kube-controller-manager:v1.14.0
[config/images] Pulled k8s.gcr.io/kube-scheduler:v1.14.0
[config/images] Pulled k8s.gcr.io/kube-proxy:v1.14.0
[config/images] Pulled k8s.gcr.io/pause:3.1
[config/images] Pulled k8s.gcr.io/etcd:3.3.10
[config/images] Pulled k8s.gcr.io/coredns:1.3.1

5.2 Initialize

$ kubeadm init --pod-network-cidr=10.244.0.0/16
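
The --pod-network-cidr value is not arbitrary: it matches the default Network in the net-conf.json embedded in flannel's kube-flannel.yml (the manifest applied later in section 5.5). For reference, the relevant fragment looks like this:

```json
{
  "Network": "10.244.0.0/16",
  "Backend": {
    "Type": "vxlan"
  }
}
```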

Problem 1 you may run into: https://github.com/kubernetes/kubeadm/issues/610

$ journalctl -xeu kubelet
....
Apr 09 00:35:33 docker81 kubelet[24062]: I0409 00:35:33.996517   24062 server.go:625] --cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /
Apr 09 00:35:33 docker81 kubelet[24062]: F0409 00:35:33.996923   24062 server.go:265] failed to run Kubelet: Running with swap on is not supported, please disable swap! or set --fail-swap
Apr 09 00:35:33 docker81 systemd[1]: kubelet.service: main process exited, code=exited, status=255/n/a
Apr 09 00:35:34 docker81 systemd[1]: Unit kubelet.service entered failed state.
Apr 09 00:35:34 docker81 systemd[1]: kubelet.service failed.

Fix:

$ swapoff -a
$ sed -i '/swap/s/^/#/' /etc/fstab


  # disable swap
  sudo swapoff -a
  # re-enable swap
  sudo swapon -a
  # remount the root filesystem read-write
  sudo mount -n -o remount,rw /
  
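The sed expression used in the fix can be exercised safely on a throwaway copy of fstab before touching the real file. A sketch (the sample lines are illustrative):

```shell
# Demo: comment out every line mentioning swap, as the fix above does,
# but against a temporary file instead of the real /etc/fstab.
FSTAB=$(mktemp)
cat > "$FSTAB" <<'EOF'
/dev/mapper/centos-root /    xfs  defaults 0 0
/dev/mapper/centos-swap swap swap defaults 0 0
EOF
sed -i '/swap/s/^/#/' "$FSTAB"
grep '^#' "$FSTAB"   # only the swap line is now commented out
```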

5.3 Initialize again

Clean up first

$ kubeadm reset
$ iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X

$ kubeadm init --pod-network-cidr=10.244.0.0/16

I0409 05:19:35.856967    3656 version.go:96] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get https://dl.k8s.io/release/stable-1.txt: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
I0409 05:19:35.857127    3656 version.go:97] falling back to the local client version: v1.14.1
[init] Using Kubernetes version: v1.14.1
[preflight] Running pre-flight checks
        [WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
        [WARNING Hostname]: hostname "master-1" could not be reached
        [WARNING Hostname]: hostname "master-1": lookup master-1 on 192.168.253.254:53: no such host
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [master-1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.251.51]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [master-1 localhost] and IPs [192.168.251.51 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [master-1 localhost] and IPs [192.168.251.51 127.0.0.1 ::1]
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 13.506192 seconds
[upload-config] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.14" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --experimental-upload-certs
[mark-control-plane] Marking the node master-1 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node master-1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: zpf7je.xarawormfaeapib3
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.251.51:6443 --token zpf7je.xarawormfaeapib3 \
    --discovery-token-ca-cert-hash sha256:d7ff941542a03645209ad4149e1baa1c40ddad7e9c8296f82fe3bd2a91191f66 
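
If the --discovery-token-ca-cert-hash value from the join command is ever lost, it can be recomputed from the cluster CA certificate. A sketch of the recipe, generating a throwaway CA here so it can run anywhere; on the master, use /etc/kubernetes/pki/ca.crt instead:

```shell
# Recompute the sha256 discovery hash from a CA certificate's public key.
CA_DIR=$(mktemp -d)
# Throwaway self-signed CA for demonstration only.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=kubernetes" \
  -keyout "$CA_DIR/ca.key" -out "$CA_DIR/ca.crt" 2>/dev/null
HASH=$(openssl x509 -pubkey -in "$CA_DIR/ca.crt" \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //')
echo "sha256:$HASH"
```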

Set up the kubeconfig:

$ 
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

5.4 Check the status

$ kubectl cluster-info 

Kubernetes master is running at https://192.168.251.51:6443
KubeDNS is running at https://192.168.251.51:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.


$ kubectl get pods -n kube-system 
$ kubectl get pods --all-namespaces

NAMESPACE     NAME                               READY   STATUS    RESTARTS   AGE
kube-system   coredns-fb8b8dccf-hcrgw            0/1     Pending   0          100s
kube-system   coredns-fb8b8dccf-zct25            0/1     Pending   0          100s
kube-system   etcd-master-1                      1/1     Running   0          57s
kube-system   kube-apiserver-master-1            1/1     Running   0          47s
kube-system   kube-controller-manager-master-1   1/1     Running   0          62s
kube-system   kube-proxy-p962p                   1/1     Running   3          100s
kube-system   kube-scheduler-master-1            1/1     Running   0          45s

5.5 Add the network add-on: the DNS pods need a network component before they can start

$ cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
$ sysctl --system


$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml

clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created

Check the status again; coredns is now running too

$ kubectl get pods --all-namespaces

NAMESPACE     NAME                               READY   STATUS    RESTARTS   AGE
kube-system   coredns-fb8b8dccf-hcrgw            1/1     Running   0          8m7s
kube-system   coredns-fb8b8dccf-zct25            1/1     Running   0          8m7s
kube-system   etcd-master-1                      1/1     Running   0          7m24s
kube-system   kube-apiserver-master-1            1/1     Running   0          7m14s
kube-system   kube-controller-manager-master-1   1/1     Running   0          7m29s
kube-system   kube-flannel-ds-amd64-947zx        1/1     Running   0          2m32s
kube-system   kube-proxy-p962p                   1/1     Running   3          8m7s
kube-system   kube-scheduler-master-1            1/1     Running   0          7m12s

6. Install the Dashboard

First remove the restriction that keeps pods off the master, then deploy the dashboard:

$ kubectl taint nodes --all node-role.kubernetes.io/master-

node/master-1 untainted

$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml

secret/kubernetes-dashboard-certs created
serviceaccount/kubernetes-dashboard created
role.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
deployment.apps/kubernetes-dashboard created
service/kubernetes-dashboard created

View the logs to troubleshoot:

kubectl describe pod kubernetes-dashboard-5f7b999d65-lt2df -n kube-system

Check the status

$ kubectl get pods --all-namespaces

NAMESPACE     NAME                                    READY   STATUS    RESTARTS   AGE
kube-system   coredns-fb8b8dccf-hcrgw                 1/1     Running   0          15m
kube-system   coredns-fb8b8dccf-zct25                 1/1     Running   0          15m
kube-system   etcd-master-1                           1/1     Running   0          14m
kube-system   kube-apiserver-master-1                 1/1     Running   0          14m
kube-system   kube-controller-manager-master-1        1/1     Running   0          15m
kube-system   kube-flannel-ds-amd64-947zx             1/1     Running   0          10m
kube-system   kube-proxy-p962p                        1/1     Running   3          15m
kube-system   kube-scheduler-master-1                 1/1     Running   0          14m
kube-system   kubernetes-dashboard-5f7b999d65-lt2df   1/1     Running   0          6m6s

7. Access the Dashboard

7.1 View locally

$ kubectl proxy
Starting to serve on 127.0.0.1:8001

http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy

7.2 View from a user's browser

1* The approach that fails:

disable-filter=true disables request filtering; without it our requests are rejected with Forbidden (403) Unauthorized.

$ kubectl proxy --address=0.0.0.0 --disable-filter=true

You can reach the login page, but you cannot log in. This is because over plain HTTP the Dashboard only allows access from localhost and 127.0.0.1 (i.e. you must be on the machine where kubectl runs); all other addresses must use HTTPS.

2* An approach that should work (not tried):

Set the Kubernetes API Server's --anonymous-auth option, which governs whether anonymous requests may reach the secure port, then use --basic-auth-file to configure username/password login.

https://www.okay686.cn/984.html
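
For reference, --basic-auth-file points at a CSV whose rows are password,user,uid with an optional quoted group list. The file name and values below are hypothetical:

```
# hypothetical /etc/kubernetes/basic-auth.csv  (format: password,user,uid[,"group1,group2"])
admin-pass,admin,1,"system:masters"
```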

3* Certificate + Token approach:

3-1 Certificate

From the official documentation:

Method 0:

Request a certificate

Method 1:

The API Server authenticates clients using certificates, so we first need to create one. Start from kubectl's config file, which by default is /etc/kubernetes/admin.conf and has already been copied to ~/.kube/config. Then use its client-certificate-data and client-key-data fields to generate a p12 file with the following commands:

grep 'client-certificate-data' ~/.kube/config | head -n 1 | awk '{print $2}' | base64 -d >> kubecfg.crt
grep 'client-key-data' ~/.kube/config | head -n 1 | awk '{print $2}' | base64 -d >> kubecfg.key
openssl pkcs12 -export -clcerts -inkey kubecfg.key -in kubecfg.crt -out kubecfg.p12 -name "kubernetes-client"

Finally, import the generated p12 file into the browser and reopen it.
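
To see what the grep/awk/base64 pipeline is doing, here is the same extraction run against a minimal fake kubeconfig (a sketch; the certificate content is a placeholder):

```shell
# Build a tiny fake kubeconfig and recover the embedded PEM text,
# mirroring the client-certificate-data pipeline above.
CFG=$(mktemp)
printf 'users:\n- user:\n    client-certificate-data: %s\n' \
  "$(printf '%s' '-----BEGIN CERTIFICATE-----' | base64 -w0)" > "$CFG"
grep 'client-certificate-data' "$CFG" | head -n 1 | awk '{print $2}' | base64 -d
# prints: -----BEGIN CERTIFICATE-----
```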

Lazy method 2:

What’s causing: forbidden: User “system:anonymous” in some Cloud Providers https://github.com/kubernetes-incubator/apiserver-builder-alpha/issues/225

After reading this: https://kubernetes.io/docs/admin/authentication/#anonymous-requests then I tried this:

kubectl create clusterrolebinding cluster-system-anonymous --clusterrole=cluster-admin --user=system:anonymous

and it solved the problem.

3-2 Permissions

Method 1: create a new user

[root@docker81 ~]# vi dashboard-admin-user.yml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system

---
# ------------ role binding ---------------- #
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system

[root@docker81 ~]# kubectl create -f dashboard-admin-user.yml
serviceaccount/admin-user created
clusterrolebinding.rbac.authorization.k8s.io/admin-user created

[root@docker81 ~]# kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
Name:         admin-user-token-28dwk
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: admin-user
              kubernetes.io/service-account.uid: c23340a7-5a70-11e9-b2ca-005056887940

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1025 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLTI4ZHdrIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJjMjMzNDBhNy01YTcwLTExZTktYjJjYS0wMDUwNTY4ODc5NDAiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWRtaW4tdXNlciJ9.uaG_faYzLhiadXfz4XuQ_-X9tdl5exKQjbCK7OJqBFMCYve532O-8jH_zg5E2rgFUQycQUhH_siS_GCi0MoE8mqc-WJwIfaGB6QnLYOFRjvWWNhO_16FH56YaEZxGY2p62OPt4d1O9NK4KZLEcoZNbYYuol_9kBfAj9Imf3ii58TNGZ0WiRigXjLOsJK5P2IPyE4c_rqunsrb_sO1z56jgRTL9qnu2zsby8obJxNZefBnsTgakXnu-P8PwXg0PekLBWQNNr-G7TeiKCpfCGCjHM6gmEKdTjiernFbD1GxOG588pmZfWsFtjNNWuNAlfMe1bXpy2m981taQUTQa3kWQ
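
The token above is a JWT: three base64url segments separated by dots, the middle one being a JSON payload. A sketch that decodes a hand-made payload the same way (the claim value is illustrative, not a real token):

```shell
# Decode the payload segment of a JWT-style token.
payload='{"sub":"system:serviceaccount:kube-system:admin-user"}'
b64=$(printf '%s' "$payload" | base64 -w0 | tr '+/' '-_' | tr -d '=')
jwt="header.$b64.signature"
seg=$(printf '%s' "$jwt" | cut -d. -f2 | tr '_-' '/+')
while [ $(( ${#seg} % 4 )) -ne 0 ]; do seg="$seg="; done   # restore padding
printf '%s' "$seg" | base64 -d
```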

Visit the HTTPS address:

https://192.168.251.51:6443/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/login

Method 2: fix it at the source

Looking back at kubernetes-dashboard.yaml, it is now clear why its role is named kubernetes-dashboard-minimal. In short, that Role's permissions are not enough! So we can change the RoleBinding to a ClusterRoleBinding and update the kind and name in roleRef to use the mighty cluster-admin ClusterRole (superuser permissions, with full access to the kube-apiserver). Like this:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system

After the change, re-create kubernetes-dashboard.yaml and the Dashboard will have permission to access the entire K8S cluster API.

3-3 Skip login

kubectl edit deployment/kubernetes-dashboard --namespace=kube-system

      - args:
        - --auto-generate-certificates
        - --enable-skip-login

8. Deploy an application

[root@s1 ~]# kubectl create -f https://k8s.io/docs/tasks/run-application/deployment.yaml
deployment.apps/nginx-deployment created

kubectl describe deployment nginx-deployment
kubectl get pods -l app=nginx

[root@s1 ~]# kubectl describe pod nginx-deployment-76bf4969df-bmslp 

kubectl apply -f https://k8s.io/examples/application/deployment-update.yaml
kubectl apply -f https://k8s.io/docs/tutorials/stateless-application/deployment-update.yaml
kubectl apply -f https://k8s.io/examples/application/deployment-scale.yaml

kubectl describe deployment nginx-deployment
kubectl get pods -l app=nginx
kubectl describe pod <pod-name>

[root@s1 ~]# curl 172.17.0.4

kubectl delete deployment nginx-deployment

https://kubernetes.io/docs/tasks/access-kubernetes-api/http-proxy-access-api/

[root@docker81 ~]# curl localhost:8001/api
{
  "kind": "APIVersions",
  "versions": [
    "v1"
  ],
  "serverAddressByClientCIDRs": [
    {
      "clientCIDR": "0.0.0.0/0",
      "serverAddress": "192.168.193.81:6443"
    }
  ]
}

[root@docker81 ~]# curl localhost:8001/api/v1/namespaces/default/pods
{
  "kind": "PodList",
  "apiVersion": "v1",
  "metadata": {
    "selfLink": "/api/v1/namespaces/default/pods",
    "resourceVersion": "25607"
  },
  "items": []
}

9. Assorted commands:

kubectl cluster-info

kubectl get nodes --all-namespaces -o wide

kubectl get pods --namespace=kube-system
kubectl get pod --all-namespaces=true

kubectl describe pods
kubectl describe pod coredns-7748f7f6df-7p58x --namespace=kube-system

kubectl get services kube-dns --namespace=kube-system

kubectl logs -n cattle-system cattle-node-agent-w5rj4

kubectl -n kube-system get secret
kubectl -n kube-system describe secret kubernetes-dashboard-token-zlfj7
kubectl -n kube-system get secret kubernetes-dashboard-token-zlfj7 -o yaml

kubectl -n kube-system describe $(kubectl -n kube-system get secret -n kube-system -o name | grep namespace) | grep token

kubectl -n kube-system get service kubernetes-dashboard
kubectl -n kube-system get svc kubernetes-dashboard
kubectl -n kube-system get secret admin-token-nwphb -o jsonpath={.data.token}|base64 -d
kubectl get secret $(kubectl get serviceaccount my-admin-user -n kube-system -o jsonpath="{.secrets[0].name}") -o jsonpath="{.data.token}" -n kube-system | base64 --decode

kubectl delete -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/aio/deploy/alternative/kubernetes-dashboard.yaml 

kubectl -n kube-system edit service kubernetes-dashboard

kubectl -n kube-system delete $(kubectl -n kube-system get pod -o name | grep dashboard)

kubectl delete pod NAME --grace-period=0 --force
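
Several of the commands above end in `base64 -d` (or `--decode`): Secret .data values are stored base64-encoded, so the jsonpath output must be decoded. A minimal sketch of that last step:

```shell
# What kubectl's "-o jsonpath={.data.token} | base64 -d" pipeline boils down to:
stored=$(printf 'my-secret-token' | base64)   # how the API server stores it
printf '%s' "$stored" | base64 -d             # prints: my-secret-token
```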

  • DNS resolution: exec into the container and run:
[root@k8s-master app]# kubectl exec -it coredns-78fcdf6894-244mp /bin/sh  -n kube-system                         
/ # nslookup kubernetes.default 127.0.0.1

–END
