Winse Blog

Walking and pausing, bustle and noise, busy and hurried, knowing not what to fear.

Deploying k8s on CentOS 6 in Practice

2017-3-17 08:33:56 After fiddling with this for the better part of a month, a short write-up. On CentOS 6, docker-1.7 + k8s-1.2 can be made to work: I got the dashboard, nexus2, and harbor installed. But newer features simply don't run, and the k8s docs are not split by version and never clearly state which Docker versions are compatible (at least not in the official docs). Things that supposedly work everywhere kept failing here, and after endless fiddling the cause usually turned out to be a version mismatch. With containers as hot as they are, Docker and k8s release far too fast; Docker is already on its 17.x releases. In short: CentOS 6 is fine for playing around and getting a feel for k8s, but for real use, upgrade to CentOS 7.

configmap volumes are a godsend; they have to carry the load because docker-1.7 has no support for shared volume mounts.

CentOS 6 is a rather "old" system: no systemd, no docker-engine. The material online either does a native install (no bootstrap docker) or targets CentOS 7. Since I didn't want to install directly onto the system, I adapted the docker-multinode scripts from kube-deploy. The version incompatibilities left holes that needed a bulldozer to fill: CentOS 6 only has docker 1.7, which can't run kubernetes-1.3, and the dashboard also has to be installed by hand.

Environment:

  • cu2: bootstrap(etcd, flannel), main(hyperkube, pause, kubernetes-dashboard)
  • cu4, cu5: bootstrap(flannel), main(hyperkube, pause)
[root@cu2 ~]# docker -H unix:///var/run/docker-bootstrap.sock ps | grep -v IMAGE | awk '{print $2}' | sort -u
gcr.io/google_containers/etcd-amd64:3.0.4
quay.io/coreos/flannel:v0.6.1-amd64
[root@cu4 ~]# docker -H unix:///var/run/docker-bootstrap.sock ps | grep -v IMAGE | awk '{print $2}' | sort -u
quay.io/coreos/flannel:v0.6.1-amd64

[root@cu2 kubernetes]# docker images
REPOSITORY                                            TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
bigdata                                               v1                  9e30d146824b        38 hours ago        457.2 MB
gcr.io/google_containers/heapster-grafana-amd64       v4.0.2              74d2c72849cc        6 weeks ago         131.5 MB
gcr.io/google_containers/heapster-influxdb-amd64      v1.1.1              55d63942e2eb        6 weeks ago         11.59 MB
gcr.io/google_containers/heapster-amd64               v1.3.0-beta.1       026fb02eca65        6 weeks ago         101.3 MB
gcr.io/google_containers/kubernetes-dashboard-amd64   v1.5.1              9af7d5c61ccf        7 weeks ago         103.6 MB
gcr.io/google_containers/hyperkube-amd64              v1.2.7              1dd7250ed1b3        4 months ago        231.4 MB
quay.io/coreos/flannel                                v0.6.1-amd64        ef86f3a53de0        6 months ago        27.89 MB
gcr.io/google_containers/etcd-amd64                   3.0.4               ef5e89d609f1        6 months ago        39.62 MB
gcr.io/google_containers/kube2sky-amd64               1.15                f93305484d65        10 months ago       29.16 MB
gcr.io/google_containers/etcd-amd64                   2.2.5               a6752fb962b5        10 months ago       30.45 MB
gcr.io/google_containers/skydns-amd64                 1.0                 a925f95d080a        11 months ago       15.57 MB
gcr.io/google_containers/exechealthz-amd64            1.0                 5b9ac190b20c        11 months ago       7.116 MB
gcr.io/google_containers/pause                        2.0                 9981ca1bbdb5        17 months ago       350.2 kB
  • etcd, flannel, and kubernetes-dashboard use the versions from docker-multinode.
  • kubelet is the latest 1.2 release, v1.2.7.
  • pause:2.0 is the version pulled automatically when the apiserver and controller containers start.
  • Added the DNS images (2017-3-6 02:07:14)
  • Added the heapster images (2017-3-6 17:00:48)

Ideally, load all of the images on every machine.
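Copying images around is easy to script with `docker save | ssh … docker load`. A minimal sketch, assuming the worker hostnames cu4/cu5 and a trimmed image list from this setup; the loop only prints the commands so they can be reviewed before running:

```shell
# Print (not execute) the save/load commands that would copy each image to
# each worker node; drop the leading "echo" inside the loop to run them.
IMAGES="gcr.io/google_containers/hyperkube-amd64:v1.2.7 quay.io/coreos/flannel:v0.6.1-amd64"
WORKERS="cu4 cu5"
emit_load_cmds() {
  for host in $WORKERS; do
    for img in $IMAGES; do
      echo "docker save $img | ssh $host docker load"
    done
  done
}
emit_load_cmds
```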

Preparation

export NO_PROXY="localhost,127.0.0.1,10.0.0.0/8"
export https_proxy=http://localhost:8118/
export http_proxy=http://localhost:8118/

First, the operations and what they produce (read the menu before deciding whether to eat):

## Download the deployment scripts
# https://github.com/winse/docker-hadoop/tree/master/k8s-centos6/docker-multinode

## Firewall: allow cluster traffic, and disable selinux
# or append at the end: iptables -A INPUT -s 10.0.0.0/8 -j ACCEPT
iptables -I INPUT 1 -s 10.0.0.0/8 -j ACCEPT

## Pull all the images down first: git pull ...
* On the master node
[root@cu2 ~]# docker images
REPOSITORY                                            TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
bigdata                                               v1                  9e30d146824b        2 days ago          457.2 MB
redis                                                 3.2.8               c30a7507ec4d        6 days ago          182.9 MB
gcr.io/google_containers/heapster-grafana-amd64       v4.0.2              74d2c72849cc        6 weeks ago         131.5 MB
gcr.io/google_containers/heapster-influxdb-amd64      v1.1.1              55d63942e2eb        6 weeks ago         11.59 MB
gcr.io/google_containers/heapster-amd64               v1.3.0-beta.1       026fb02eca65        6 weeks ago         101.3 MB
gcr.io/google_containers/kubernetes-dashboard-amd64   v1.5.1              9af7d5c61ccf        7 weeks ago         103.6 MB
gcr.io/google_containers/hyperkube-amd64              v1.2.7              1dd7250ed1b3        4 months ago        231.4 MB
quay.io/coreos/flannel                                v0.6.1-amd64        ef86f3a53de0        6 months ago        27.89 MB
gcr.io/google_containers/etcd-amd64                   3.0.4               ef5e89d609f1        6 months ago        39.62 MB
gcr.io/google_containers/kube2sky-amd64               1.15                f93305484d65        10 months ago       29.16 MB
gcr.io/google_containers/etcd-amd64                   2.2.5               a6752fb962b5        10 months ago       30.45 MB
gcr.io/google_containers/skydns-amd64                 1.0                 a925f95d080a        11 months ago       15.57 MB
gcr.io/google_containers/exechealthz-amd64            1.0                 5b9ac190b20c        11 months ago       7.116 MB
gcr.io/google_containers/pause                        2.0                 9981ca1bbdb5        17 months ago       350.2 kB

## Download kubectl
https://storage.googleapis.com/kubernetes-release/release/v1.2.7/bin/linux/amd64/kubectl 
# https://kubernetes.io/docs/user-guide/prereqs/
# https://kubernetes.io/docs/user-guide/kubectl/kubectl_version/

## Environment variables
# https://kubernetes.io/docs/user-guide/kubeconfig-file/
export KUBECONFIG=/var/lib/kubelet/kubeconfig/kubeconfig.yaml
export PATH=...  # add the directory containing kubectl

## Start the MASTER
./master.sh

## Verify
curl -fsSL http://localhost:2379/health
curl -s http://localhost:8080/healthz
curl -s http://localhost:8080/api
kubectl get ns
kubectl create namespace kube-system

* On a worker node
[root@cu3 ~]# docker images
...

## Start a WORKER
MASTER_IP=cu2 ./worker.sh

A minor hiccup: the first run of the master script may misbehave. The setup-files container can fail because it needs to download easy-rsa.tar.gz from googleapis; you can download it into /root/kube by hand first and then run setup-files.sh. If you're not in a hurry, waiting a while and re-running a few times also seems to get it going eventually. (sigh)
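The manual workaround can be scripted so the download only happens when the tarball is missing. A sketch, where the target directory and URL are assumptions taken from the workaround above (verify them against your setup-files.sh), and `CURL` is overridable purely so the logic can be exercised offline:

```shell
# Fetch easy-rsa.tar.gz into the kube working dir only when it is missing,
# so setup-files.sh finds it locally instead of downloading it itself.
# KUBE_DIR and the URL are assumptions from the workaround described above.
fetch_easy_rsa() {
  local dir="${KUBE_DIR:-/root/kube}"
  local url="https://storage.googleapis.com/kubernetes-release/easy-rsa/easy-rsa.tar.gz"
  mkdir -p "$dir"
  if [ ! -f "$dir/easy-rsa.tar.gz" ]; then
    ${CURL:-curl -fsSL} -o "$dir/easy-rsa.tar.gz" "$url"
  fi
}
```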

[root@cu2 ~]# docker exec -ti kube_kubelet_624b2 bash
root@cu2:/# /setup-files.sh IP:10.0.0.1,DNS:kubernetes,DNS:kubernetes.default,DNS:kubernetes.default.svc,DNS:kubernetes.default.svc.cluster.local

Then submit the dashboard again:
[root@cu2 docker-multinode-centos6]# ./dashboard.sh 

Then start an application and test whether containers started across multiple nodes can reach each other over the network:

## Run and view the containers
[root@cu2 ~]# kubectl run redis --image=bigdata:v1 -r 5 --command -- /usr/sbin/sshd -D

[root@cu2 ~]# kubectl get pods -o wide
NAME                       READY     STATUS    RESTARTS   AGE       NODE
k8s-master-192.168.0.214   4/4       Running   22         1h        192.168.0.214
k8s-proxy-192.168.0.214    1/1       Running   0          1h        192.168.0.214
redis-2212193268-1789v     1/1       Running   0          1h        192.168.0.174
redis-2212193268-1j4ej     1/1       Running   0          1h        192.168.0.174
redis-2212193268-8dbmq     1/1       Running   0          1h        192.168.0.30
redis-2212193268-a447n     1/1       Running   0          1h        192.168.0.30
redis-2212193268-tu5fl     1/1       Running   0          1h        192.168.0.214

https://kubernetes.io/docs/user-guide/jsonpath/
[root@cu2 ~]# kubectl get pods -o wide -l run=redis -o jsonpath={..podIP}
10.1.75.2 10.1.75.3 10.1.58.3 10.1.58.2 10.1.33.3

## Log into a container
# log in via ssh
[root@cu2 ~]# kubectl describe pods redis-2212193268-tu5fl | grep IP
IP:             10.1.33.3
[root@cu2 ~]# ssh 10.1.33.3
The authenticity of host '10.1.33.3 (10.1.33.3)' can't be established.
RSA key fingerprint is e5:58:ae:3b:54:c9:bb:0d:4c:9b:bc:fd:04:fe:be:cc.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '10.1.33.3' (RSA) to the list of known hosts.
root@10.1.33.3's password: 
Last login: Sat Mar  4 18:17:51 2017 from 10.1.61.1
[root@redis-2212193268-tu5fl ~]# exit
logout
Connection to 10.1.33.3 closed.

# log in via exec
[root@cu2 ~]# kubectl exec -ti redis-2212193268-tu5fl bash
[root@redis-2212193268-tu5fl /]# 

## ping: containers across all five machines are mutually reachable
[root@redis-2212193268-tu5fl /]# ping 10.1.75.2
PING 10.1.75.2 (10.1.75.2) 56(84) bytes of data.
64 bytes from 10.1.75.2: icmp_seq=1 ttl=60 time=1.15 ms
...
[root@redis-2212193268-tu5fl /]# ping 10.1.75.3
PING 10.1.75.3 (10.1.75.3) 56(84) bytes of data.
64 bytes from 10.1.75.3: icmp_seq=1 ttl=60 time=1.23 ms
...
[root@redis-2212193268-tu5fl /]# ping 10.1.58.3
PING 10.1.58.3 (10.1.58.3) 56(84) bytes of data.
64 bytes from 10.1.58.3: icmp_seq=1 ttl=60 time=1.60 ms
...
[root@redis-2212193268-tu5fl /]# ping 10.1.58.2
PING 10.1.58.2 (10.1.58.2) 56(84) bytes of data.
64 bytes from 10.1.58.2: icmp_seq=1 ttl=60 time=1.39 ms
...
[root@redis-2212193268-tu5fl /]# ping 10.1.33.3         
PING 10.1.33.3 (10.1.33.3) 56(84) bytes of data.
64 bytes from 10.1.33.3: icmp_seq=1 ttl=64 time=0.036 ms
...
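Instead of pinging each pod IP by hand, the jsonpath output can drive a loop. A sketch, where `PROBE` is overridable only so the function can be exercised without a live cluster:

```shell
# Probe each pod IP once and summarize reachability.
# PROBE defaults to a single ping; override it (e.g. PROBE=true) for dry runs.
check_pod_ips() {
  probe="${PROBE:-ping -c 1 -W 2}"
  ok=0; fail=0
  for ip in "$@"; do
    if $probe "$ip" >/dev/null 2>&1; then ok=$((ok+1)); else fail=$((fail+1)); fi
  done
  echo "reachable=$ok unreachable=$fail"
}
# usage on a node:
#   check_pod_ips $(kubectl get pods -l run=redis -o jsonpath={..podIP})
```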

With everything up, the dashboard looks like this:

Learning from the scripts

The official Creating a Custom Cluster from Scratch guide left me thoroughly confused. It really isn't written for newcomers; you need some hands-on experience before it makes sense.

So, another route: split the docker-multinode startup scripts apart, study them, and reproduce the steps by hand. When I first worked through the Portable Multi-Node Cluster document, I didn't understand what bootstrap docker and main docker were supposed to mean.

Only after extracting each function and running it on its own did it click: it is simply two docker daemons running side by side without interfering with each other.

[root@cu2 ~]# ps aux|grep docker
root      5310  0.0  0.2 645128 19180 pts/1    Sl   13:14   0:01 docker -d -H unix:///var/run/docker-bootstrap.sock -p /var/run/docker-bootstrap.pid --iptables=false --ip-masq=false --bridge=none --graph=/var/lib/docker-bootstrap --exec-root=/var/run/docker-bootstrap
root      5782  1.1  0.5 2788284 43620 pts/1   Sl   13:14   0:23 /usr/bin/docker -d --mtu=1464 --bip=10.1.33.1/24
root     10935  0.0  0.0 103316   896 pts/1    S+   13:47   0:00 grep docker
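Since the bootstrap daemon listens on its own socket, every inspection command needs the `-H` flag. A tiny wrapper saves retyping it; the name `bdocker` is made up here, and `DOCKER_BIN` exists only so the sketch can be dry-run without a daemon:

```shell
# Run the docker client against the bootstrap daemon's socket instead of the
# default one. DOCKER_BIN is overridable purely for testing.
bdocker() {
  "${DOCKER_BIN:-docker}" -H unix:///var/run/docker-bootstrap.sock "$@"
}
# e.g. bdocker ps   # lists the etcd/flannel containers
```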

With the bootstrap docker daemon up, the etcd and flannel containers both start without trouble.

All of the problems below were worked through on my own VMs first, then applied to the test environment.

  • Problem 1: resetting the docker0 interface fails
[root@bigdata1 data]# ip link set docker0 down
[root@bigdata1 data]# ip link del docker0
RTNETLINK answers: Operation not supported

[root@bigdata1 data]# ip addr 

The interface can't be deleted, but changing its IP address achieves much the same effect:

ifconfig docker0 ${FLANNEL_SUBNET}
or
[root@bigdata1 data]# ip link set dev docker0 mtu 1460
[root@bigdata1 data]# ip addr del 172.17.42.1/16 dev docker0
[root@bigdata1 data]# ip addr add ${FLANNEL_SUBNET} dev docker0
[root@bigdata1 data]# ip link set dev docker0 up
[root@bigdata1 data]# ifconfig # check the reassigned IP

First run the daemon in the foreground with the new flags
[root@bigdata1 data]# docker -d --mtu=1472 --bip=10.1.42.1/24

Then start the service (sed needs | as its delimiter here, since the replacement contains a /)
[root@bigdata1 data]# sed -i 's|other_args=|other_args="--mtu=1472 --bip=10.1.42.1/24"|' /etc/sysconfig/docker
[root@bigdata1 data]# service docker start
Starting docker:                                           [  OK  ]
[root@bigdata1 data]# service docker status
docker (pid  4542) is running...
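The `other_args` substitution in /etc/sysconfig/docker is easy to get wrong because the replacement value contains a slash, so `/` can't serve as the sed delimiter. A dry run on a scratch copy verifies the substitution before touching the real file:

```shell
# Rehearse the /etc/sysconfig/docker edit on a scratch file first.
scratch=$(mktemp)
echo 'other_args=' > "$scratch"
# "|" as the delimiter keeps the "/24" in the value from ending the pattern.
sed -i 's|other_args=|other_args="--mtu=1472 --bip=10.1.42.1/24"|' "$scratch"
cat "$scratch"
rm -f "$scratch"
```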
  • Problem 2: volume mounts don't support shared
[root@bigdata1 data]# echo $KUBELET_MOUNTS
-v /sys:/sys:rw -v /var/run:/var/run:rw -v /run:/run:rw -v /var/lib/docker:/var/lib/docker:rw -v /var/lib/kubelet:/var/lib/kubelet:shared -v /var/log/containers:/var/log/containers:rw

[root@bigdata1 data]# mkdir -p /var/lib/kubelet
[root@bigdata1 data]# mount --bind /var/lib/kubelet /var/lib/kubelet
[root@bigdata1 data]# mount --make-shared /var/lib/kubelet

[root@bigdata1 data]# docker run -d \
>     --net=host \
>     --pid=host \
>     --privileged \
>     --name kube_kubelet_$(kube::helpers::small_sha) \
>     ${KUBELET_MOUNTS} \
>     gcr.io/google_containers/hyperkube-${ARCH}:${K8S_VERSION} \
>     /hyperkube kubelet \
>       --allow-privileged \
>       --api-servers=http://localhost:8080 \
>       --config=/etc/kubernetes/manifests-multi \
>       --cluster-dns=10.0.0.10 \
>       --cluster-domain=cluster.local \
>       ${CNI_ARGS} \
>       ${CONTAINERIZED_FLAG} \
>       --hostname-override=${IP_ADDRESS} \
>       --v=2
Error response from daemon: invalid mode for volumes-from: shared

# Changed to z -- 2017-3-16 19:15:57: shared isn't supported, and this leads to volume problems later on!
    KUBELET_MOUNT="-v /var/lib/kubelet:/var/lib/kubelet:z"
  
[root@bigdata1 ~]# echo $KUBELET_MOUNTS
-v /sys:/sys:rw -v /var/run:/var/run:rw -v /run:/run:rw -v /var/lib/docker:/var/lib/docker:rw -v /var/lib/kubelet:/var/lib/kubelet:z -v /var/log/containers:/var/log/containers:rw
  • Problem 3: cgroup problems
Error: failed to run Kubelet: failed to get mounted cgroup subsystems: failed to find cgroup mounts
failed to run Kubelet: failed to get mounted cgroup subsystems: failed to find cgroup mounts

centos7 
[root@k8s docker.service.d]# ll /sys/fs/cgroup/
blkio/            cpuacct/          cpuset/           freezer/          memory/           net_cls,net_prio/ perf_event/       systemd/          
cpu/              cpu,cpuacct/      devices/          hugetlb/          net_cls/          net_prio/         pids/             

centos6
http://wushank.blog.51cto.com/3489095/1203545
[root@bigdata1 bin]# ls /cgroup/
blkio  cpu  cpuacct  cpuset  devices  freezer  memory  net_cls

Add /cgroup to the volume mount paths
  KUBELET_MOUNTS="\
    ${ROOTFS_MOUNT} \
    -v /sys:/sys:rw \
    -v /cgroup:/cgroup:rw \
    -v /var/run:/var/run:rw \
    -v /run:/run:rw \
    -v /var/lib/docker:/var/lib/docker:rw \
    ${KUBELET_MOUNT} \
    -v /var/log/containers:/var/log/containers:rw"
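The CentOS 6 vs CentOS 7 difference can be detected rather than hard-coded. A sketch (the helper name is made up) that picks the first cgroup root that exists:

```shell
# Pick the first existing directory from the candidate cgroup roots
# (/cgroup on CentOS 6, /sys/fs/cgroup on CentOS 7); fail if none exists.
cgroup_root() {
  for d in "$@"; do
    if [ -d "$d" ]; then echo "$d"; return 0; fi
  done
  return 1
}
# usage: root=$(cgroup_root /cgroup /sys/fs/cgroup)
#        KUBELET_MOUNTS="... -v $root:$root:rw ..."
```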
  • Problem 4: versions again. On CentOS 6, kubelet from v1.3+ fails with:
[root@bigdata1 ~]# docker logs 7a2f7aec2239
...
E0228 10:56:05.408129    2516 kubelet.go:2049] Container runtime sanity check failed: container runtime version is older than 1.21

Every release from 1.3 onward reports this error; stick with kubernetes 1.2.7.

  • Problem 5: dashboard/DNS configuration gotchas

  • imagePullPolicy is a real trap! Change it to IfNotPresent https://kubernetes.io/docs/user-guide/images/

  • The namespace can't be changed either; it seems to get persisted, and the expected namespace is kube-system
  • apiserver: with no addon-manager available, fetch data over plain http for now (the DNS problem took a long time to pin down: the kube2sky container logs showed errors, and changing the server address to http fixed it)
[root@cu2 docker-multinode-centos6]# docker exec -ti 193863bc646b bash
[root@redis-2212193268-0ovu7 /]# nslookup kubernetes.default
Server:         10.0.0.10
Address:        10.0.0.10#53

Name:   kubernetes.default.svc.cluster.local
Address: 10.0.0.1
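The imagePullPolicy trap above can also be fixed on an already-created deployment with a strategic merge patch instead of editing and resubmitting the YAML. A sketch using the dashboard deployment as the example (the deployment and container name must match your manifest); the command is only printed so it can be reviewed first:

```shell
# Build the strategic-merge patch as a variable so it can be inspected, then
# print the kubectl command. Drop the leading "echo" to apply it for real.
PATCH='{"spec":{"template":{"spec":{"containers":[{"name":"kubernetes-dashboard","imagePullPolicy":"IfNotPresent"}]}}}}'
echo kubectl patch deployment kubernetes-dashboard --namespace=kube-system -p "$PATCH"
```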

With the problems above handled, the K8S cluster was up and running, and everything was folded back into the scripts shown at the start. Plenty of follow-up work remains, of course: not just how to use it, but other supporting software to configure and install.

Monitoring

cAdvisor can be reached on port 4194  <http://www.dockone.io/article/page-46>
http://cu2:4194/containers/

[root@cu2 influxdb]# kubectl create -f ./
deployment "monitoring-grafana" created
service "monitoring-grafana" created
deployment "heapster" created
service "heapster" created
deployment "monitoring-influxdb" created
service "monitoring-influxdb" created

[root@cu2 influxdb]# kubectl get pods --namespace=kube-system -o wide
NAME                                    READY     STATUS             RESTARTS   AGE       NODE
heapster-2621086088-s77cl               0/1       CrashLoopBackOff   2          37s       192.168.0.148
kube-dns-v8-00p5h                       4/4       Running            1          5h        192.168.0.174
kubernetes-dashboard-2845140353-l7o8o   1/1       Running            0          5h        192.168.0.30
monitoring-grafana-1501214244-kw3im     1/1       Running            0          37s       192.168.0.148
monitoring-influxdb-3498630124-241tx    1/1       Running            0          37s       192.168.0.30

heapster failed on its first start; find the node it landed on and check the logs
[root@cu3 ~]# docker logs aad68dd07ff8
I0306 09:06:25.611251       1 heapster.go:71] /heapster --source=kubernetes:https://kubernetes.default --sink=influxdb:http://monitoring-influxdb:8086
I0306 09:06:25.611523       1 heapster.go:72] Heapster version v1.3.0-beta.1
F0306 09:06:25.611555       1 heapster.go:174] Failed to create source provide: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory

https://github.com/kubernetes/heapster/blob/master/docs/source-configuration.md switch the source to http

Reload everything
[root@cu2 influxdb]# for file in * ; do sed -e "s|MASTER_IP|${IP_ADDRESS}|g" $file | kubectl apply -f - ; done
deployment "monitoring-grafana" configured
service "monitoring-grafana" configured
deployment "heapster" configured
service "heapster" configured
deployment "monitoring-influxdb" configured
service "monitoring-influxdb" configured

[root@cu2 influxdb]# kubectl get service --namespace=kube-system -o wide
NAME                   CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE       SELECTOR
heapster               10.0.0.54    <none>        80/TCP          8m        k8s-app=heapster
kube-dns               10.0.0.10    <none>        53/UDP,53/TCP   6h        k8s-app=kube-dns
kubernetes-dashboard   10.0.0.181   nodes         80/TCP          6h        app=kubernetes-dashboard
monitoring-grafana     10.0.0.220   <none>        80/TCP          8m        k8s-app=grafana
monitoring-influxdb    10.0.0.223   <none>        8086/TCP        8m        k8s-app=influxdb

Open grafana in a browser; log in with admin/admin
http://10.0.0.220/

With monitoring installed, the dashboard shows charts as well.

Tracking down one machine's missing metrics

There were originally three machines; the .148 machine was added later. After heapster monitoring was added, no graphs appeared for the .148 machine, and on the dashboard's 148 Node page the Conditions - Last heartbeat time field was empty.

[root@cu2 ~]# kubectl get services --all-namespaces
NAMESPACE     NAME                   CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE
default       kubernetes             10.0.0.1     <none>        443/TCP         1d
kube-system   heapster               10.0.0.196   <none>        80/TCP          12m
kube-system   kube-dns               10.0.0.10    <none>        53/UDP,53/TCP   21h
kube-system   kubernetes-dashboard   10.0.0.181   nodes         80/TCP          21h
kube-system   monitoring-grafana     10.0.0.215   <none>        80/TCP          12m
kube-system   monitoring-influxdb    10.0.0.226   <none>        8086/TCP        12m

Check the endpoints
https://github.com/kubernetes/heapster/blob/master/docs/debugging.md

  http://10.0.0.196/metrics

  the .148 machine's key is missing here
  http://10.0.0.196/api/v1/model/debug/allkeys

  http://192.168.0.30:10255/stats/container/

https://github.com/kubernetes/heapster/blob/master/docs/sink-configuration.md

Exec into the heapster container and rerun it with a different port and more verbose logging
/ # /heapster --source=kubernetes:http://192.168.0.214:8080?inClusterConfig=false --sink=log --heapster-port=8083 -v 10

  http://192.168.0.214:8080/api/v1/nodes
  Node
  Pod
  Namespace
  
Ports 10255 and 4194 on the .148 machine were both working, and heapster was indeed fetching data from .148, yet .148 never showed up in the final log output. System time? I changed it on a hunch: the .148 machine's clock was a few minutes fast.

Sure enough! After syncing the time, the monitoring graphs appeared.
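Clock skew like this can be caught before it bites. A sketch of a check that compares each node's epoch seconds against the local clock; the hostnames are this cluster's, the threshold is arbitrary, and `REMOTE_DATE` is overridable only so the logic can be tested without ssh:

```shell
# Report nodes whose clock differs from this machine's by more than MAX_SKEW
# seconds; fix the outliers with ntpdate/ntpd afterwards.
MAX_SKEW=5
skew_of() {  # skew_of <local_epoch> <remote_epoch> -> absolute difference
  d=$(( $1 - $2 ))
  [ "$d" -lt 0 ] && d=$(( -d ))
  echo "$d"
}
check_clocks() {
  now=$(date +%s)
  for h in "$@"; do
    remote=$(${REMOTE_DATE:-ssh} "$h" date +%s) || continue
    if [ "$(skew_of "$now" "$remote")" -gt "$MAX_SKEW" ]; then
      echo "$h: clock skew > ${MAX_SKEW}s"
    fi
  done
}
# usage: check_clocks cu2 cu3 cu4 cu5
```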

Follow-up

Aliyun's registry mirror is excellent. Since my domain is registered with Wanwang I already had an account, and the accelerated mirror address shows up right after logging in, which is very convenient. The USTC mirror is great as well!

[root@cu1 ~]# cat /etc/sysconfig/docker
...
#other_args=" --registry-mirror=https://us69kjun.mirror.aliyuncs.com "
other_args=" --registry-mirror=https://docker.mirrors.ustc.edu.cn "
...

Some handy commands:

https://kubernetes.io/docs/user-guide/jsonpath/
[root@cu2 ~]# kubectl get pods -o wide -l run=redis -o jsonpath={..podIP}
10.1.75.2 10.1.75.3 10.1.58.3 10.1.58.2 10.1.33.3

Override the entrypoint, and share another container's network
docker run -ti --entrypoint=sh --net=container:8e9f21956469f4ef7e5b9d91798788ab83f380795d2825cdacae0ed28f5ba03b gcr.io/google_containers/skydns-amd64:1.0

https://kubernetes.io/docs/tasks/kubectl/list-all-running-container-images/
[root@cu2 ~]# kubectl get pods --all-namespaces -o jsonpath="{..image}" |\
> tr -s '[[:space:]]' '\n' |\
> sort |\
> uniq -c
      2 gcr.io/google_containers/etcd-amd64:2.2.5
      2 gcr.io/google_containers/exechealthz-amd64:1.0
      2 gcr.io/google_containers/heapster-amd64:v1.3.0-beta.1
      2 gcr.io/google_containers/heapster-grafana-amd64:v4.0.2
      2 gcr.io/google_containers/heapster-influxdb-amd64:v1.1.1
     10 gcr.io/google_containers/hyperkube-amd64:v1.2.7
      2 gcr.io/google_containers/kube2sky-amd64:1.15
      2 gcr.io/google_containers/kubernetes-dashboard-amd64:v1.5.1
      2 gcr.io/google_containers/skydns-amd64:1.0
      2 redis:3.2.8

kubectl get pods --all-namespaces -o jsonpath="{.items[*].spec.containers[*].image}"

[root@cu2 ~]# export POD_COL="custom-columns=NAME:.metadata.name,RESTARTS:.status.containerStatuses[*].restartCount,CONTAINERS:.spec.containers[*].name,IP:.status.podIP,HOST:.spec.nodeName"
[root@cu2 ~]# kubectl get pods -o $POD_COL 

# Add labels
[root@cu2 ~]# cat /etc/hosts | grep -E "\scu[0-9]\s" | awk '{print "kubectl label nodes "$1" hostname="$2}' | while read line ; do sh -c "$line" ; done

[root@cu2 kubernetes]# kubectl run redis --image=redis:3.2.8 
[root@cu2 kubernetes]# kubectl scale --replicas=9 deployment/redis
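The /etc/hosts labeling pipeline above is worth sanity-checking on sample input before the generated commands are piped into a shell. A dry run of the same grep/awk over a here-doc of representative hosts entries, which just prints the kubectl commands it would produce:

```shell
# Feed sample hosts entries through the same grep/awk used for node labeling
# and print the kubectl commands that would be generated.
cat <<'EOF' | grep -E "\scu[0-9]\s" | awk '{print "kubectl label nodes "$1" hostname="$2}'
192.168.0.214 cu2 master
192.168.0.30 cu3 worker
127.0.0.1 localhost
EOF
```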

Other references

A fully manual install with every component run as a service * http://chenguomin.blog.51cto.com/8794192/1828905 networking via flannel, plus DNS installation and configuration * http://www.pangxie.space/docker/618 * https://xuxinkun.github.io/2016/03/27/k8s-service/ services are implemented as firewall redirects => iptables -S -t nat

Introductions * http://www.infoq.com/cn/articles/kubernetes-and-cloud-native-applications-part01 * http://www.codingwater.org/2016/08/25/Docker-Kubernetes-Intro/ * https://github.com/kubernetes/kubernetes/tree/v1.0.1/cluster/addons/dns

–END
