
一看必会系列:k8s 练习22 使用多master高用集群增加nodes


 

192.168.10.73 HOST1
192.168.10.73 host1
192.168.10.74 HOST2
192.168.10.74 host2
192.168.10.72 HOST0
192.168.10.72 host0
192.168.10.69 k8s-node1
192.168.10.68 k8s-node3
192.168.10.71 k8s-node2

 

systemctl stop firewalld
systemctl disable firewalld
swapoff -a
#同时永久禁掉swap分区,打开如下文件注释掉swap那一行
sudo vi /etc/fstab
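如果不想手动编辑,也可以用一条 sed 把 fstab 里带 swap 的行注释掉(示意写法,执行前请自行确认不会误伤其它行):
sed -ri 's/.*swap.*/#&/' /etc/fstab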

1.modprobe br_netfilter

2.
cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
3.
sysctl --system

安装
#国内写法
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

#保持和master版本一致
yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
systemctl enable --now kubelet
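上面装的是最新版;如果要和 master 的 v1.14.1 保持一致,可以指定版本号安装(示例,版本号按实际 master 版本调整):
yum install -y kubelet-1.14.1 kubeadm-1.14.1 kubectl-1.14.1 --disableexcludes=kubernetes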

sudo yum install -y yum-utils device-mapper-persistent-data lvm2
# Step 2: 添加软件源信息
sudo yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# Step 3: 更新并安装 Docker-CE
sudo yum makecache fast
sudo yum -y install docker-ce

systemctl enable docker
systemctl restart kubelet
systemctl restart docker

 

 

 

使用前一章的命令进行加入

kubeadm join 192.168.10.199:6443 --token 4gzmbk.2dlkrzgwjy4gseq9   \
  --discovery-token-ca-cert-hash \
  sha256:37b9f9957e0c8dc00aa3f9445881433f4241a3bd6d5966b8a98e9a58ec71862b

[[email protected] ~]# kubeadm join 192.168.10.199:6443 --token 4gzmbk.2dlkrzgwjy4gseq9     --discovery-token-ca-cert-hash sha256:37b9f9957e0c8dc00aa3f9445881433f4241a3bd6d5966b8a98e9a58ec71862b

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

在master上进行验证
[[email protected] redis_web]# kubectl get nodes
NAME        STATUS   ROLES    AGE    VERSION
host0       Ready    master   133m   v1.14.1
host1       Ready    master   128m   v1.14.1
host2       Ready    master   126m   v1.14.1
k8s-node1   Ready    <none>   77m    v1.14.1   #ready为正常

 

在master上创建 svc,rc 进行测试

配置rc
vim frontend-rc.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: frontend-rc
  labels:
    name: frontend-pod-lb
spec:
  replicas: 3
  selector:
    name: frontend-pod-lb
  template:
    metadata:
      labels:
        name: frontend-pod-lb
    spec:
     containers:
     - name: frontend-name
       image: reg.ccie.wang/test/guestbook-php-frontend:latest
       ports:
       - containerPort: 80
       env:
       - name: GET_HOSTS_FROM
         value: "env"

配置svc
vim frontend-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: frontend-svc
  labels:
    name: frontend-pod-lb
spec:
  type: NodePort
  ports:
    - port: 80
      nodePort: 30011
  selector:
    name: frontend-pod-lb
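两个 yaml 写好后在 master 上创建(示例命令):
kubectl create -f frontend-rc.yaml
kubectl create -f frontend-svc.yaml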

几分钟后进行查看
[[email protected] redis_web]# kubectl get svc,pod -o wide
NAME                   TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE    SELECTOR
service/frontend-svc   NodePort    10.99.213.11   <none>        80:30011/TCP   123m   name=frontend-pod-lb
service/kubernetes     ClusterIP   10.96.0.1      <none>        443/TCP        136m   <none>

NAME                    READY   STATUS    RESTARTS   AGE    IP             NODE        NOMINATED NODE   READINESS GATES
pod/frontend-rc-9vf45   1/1     Running   0          123m   10.168.36.70   k8s-node1   <none>           <none>
pod/frontend-rc-fpwg8   1/1     Running   0          123m   10.168.36.68   k8s-node1   <none>           <none>
pod/frontend-rc-twbzn   1/1     Running   0          123m   10.168.36.69   k8s-node1   <none>           <none>

运行正常

直接curl 验证
[[email protected] ~]# curl http://192.168.10.69:30011/
<html ng-app="redis">
  <head>
    <title>Guestbook</title>
    <link rel="stylesheet" href="bootstrap.min.css">
    <script src="angular.min.js"></script>
    <script src="controllers.js"></script>
    <script src="ui-bootstrap-tpls.js"></script>
  </head>
  <body ng-controller="RedisCtrl">
    <div style="width: 50%; margin-left: 20px">
      <h2>Guestbook</h2>
    <form>
以上为正常

再增加两个 worker node

查看node状态  三台master 三台worker node
[[email protected] redis_web]# kubectl get node
NAME        STATUS   ROLES    AGE    VERSION
host0       Ready    master   2d5h   v1.14.1
host1       Ready    master   2d5h   v1.14.1
host2       Ready    master   2d5h   v1.14.1
k8s-node1   Ready    <none>   2d4h   v1.14.1
k8s-node2   Ready    <none>   36m    v1.14.1
k8s-node3   Ready    <none>   20m    v1.14.1
[[email protected] redis_web]#

用frontend-rc.yaml   生成rc 进行测试
kubectl delete -f frontend-rc.yaml
kubectl create -f frontend-rc.yaml

每个worker node上各调度了一个pod

[[email protected] redis_web]# kubectl get pod -o wide
NAME                READY   STATUS    RESTARTS   AGE   IP               NODE        NOMINATED NODE   READINESS GATES
frontend-rc-9l4xl   1/1     Running   0          81s   10.168.36.82     k8s-node1   <none>           <none>
frontend-rc-9pwqw   1/1     Running   0          81s   10.168.169.131   k8s-node2   <none>           <none>
frontend-rc-g8bz9   1/1     Running   0          81s   10.168.107.193   k8s-node3   <none>           <none>
[[email protected] redis_web]#

 

———报错1

Events:
  Type    Reason                   Age                    From                Message
  ----    ------                   ----                   ----                -------
  Normal  NodeHasSufficientMemory  7m28s (x8 over 7m41s)  kubelet, k8s-node1  Node k8s-node1 status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    7m28s (x8 over 7m41s)  kubelet, k8s-node1  Node k8s-node1 status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientMemory  2m7s (x8 over 2m19s)   kubelet, k8s-node1  Node k8s-node1 status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    2m7s (x8 over 2m19s)   kubelet, k8s-node1  Node k8s-node1 status is now: NodeHasNoDiskPressure
  Normal  Starting                 16s                    kubelet, k8s-node1  Starting kubelet.
  Normal  NodeAllocatableEnforced  16s                    kubelet, k8s-node1  Updated Node Allocatable limit across pods
  Normal  NodeHasSufficientMemory  16s                    kubelet, k8s-node1  Node k8s-node1 status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    16s                    kubelet, k8s-node1  Node k8s-node1 status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     16s                    kubelet, k8s-node1  Node k8s-node1 status is now: NodeHasSufficientPID
[[email protected] redis_web]#

Apr 17 03:46:42 k8s-node1 kubelet: E0417 03:46:42.736042    3008 pod_workers.go:190] Error syncing pod 0d3a60f6-60e3-11e9-a41a-0050569642b8 ("kube-proxy-b4l5f_kube-system(0d3a60f6-60e3-11e9-a41a-0050569642b8)"), skipping: failed to "CreatePodSandbox" for "kube-proxy-b4l5f_kube-system(0d3a60f6-60e3-11e9-a41a-0050569642b8)" with CreatePodSandboxError: "CreatePodSandbox for pod \"kube-proxy-b4l5f_kube-system(0d3a60f6-60e3-11e9-a41a-0050569642b8)\" failed: rpc error: code = Unknown desc = failed pulling image \"k8s.gcr.io/pause:3.1\": Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Apr 17 03:46:42 k8s-node1 dockerd: time="2019-04-17T03:46:42.735721205-04:00" level=error msg="Handler for POST /v1.38/images/create returned error: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Apr 17 03:46:42 k8s-node1 kubelet: W0417 03:46:42.775671    3008 cni.go:213] Unable to update cni config: No networks found in /etc/cni/net.d
Apr 17 03:46:42 k8s-node1 kubelet: E0417 03:46:42.894006    3008 kubelet.go:2170] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized

分析对比发现,是镜像的问题,用本文的镜像下载方式重新下载一遍就好了

解决:参照k8s 练习 x利用阿里云下载google k8s镜像进行下载

docker pull registry.cn-hangzhou.aliyuncs.com/jdccie-rgs/kubenetes:kube-apiserver1.14.1
docker pull registry.cn-hangzhou.aliyuncs.com/jdccie-rgs/kubenetes:kube-controller-manager1.14.1   
docker pull registry.cn-hangzhou.aliyuncs.com/jdccie-rgs/kubenetes:kube-scheduler1.14.1   
docker pull registry.cn-hangzhou.aliyuncs.com/jdccie-rgs/kubenetes:kube-proxy1.14.1   
docker pull registry.cn-hangzhou.aliyuncs.com/jdccie-rgs/kubenetes:coredns1.3.1   
docker pull registry.cn-hangzhou.aliyuncs.com/jdccie-rgs/kubenetes:pause3.1
docker pull  registry.cn-hangzhou.aliyuncs.com/jdccie-rgs/kubenetes:etcd3.3.10

docker tag registry.cn-hangzhou.aliyuncs.com/jdccie-rgs/kubenetes:kube-apiserver1.14.1  k8s.gcr.io/kube-apiserver:v1.14.1
docker tag registry.cn-hangzhou.aliyuncs.com/jdccie-rgs/kubenetes:kube-controller-manager1.14.1     k8s.gcr.io/kube-controller-manager:v1.14.1
docker tag registry.cn-hangzhou.aliyuncs.com/jdccie-rgs/kubenetes:kube-scheduler1.14.1      k8s.gcr.io/kube-scheduler:v1.14.1
docker tag registry.cn-hangzhou.aliyuncs.com/jdccie-rgs/kubenetes:kube-proxy1.14.1     k8s.gcr.io/kube-proxy:v1.14.1
docker tag registry.cn-hangzhou.aliyuncs.com/jdccie-rgs/kubenetes:coredns1.3.1      k8s.gcr.io/coredns:1.3.1
docker tag registry.cn-hangzhou.aliyuncs.com/jdccie-rgs/kubenetes:pause3.1   k8s.gcr.io/pause:3.1
docker tag registry.cn-hangzhou.aliyuncs.com/jdccie-rgs/kubenetes:etcd3.3.10 k8s.gcr.io/etcd:3.3.10
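嫌一条条敲麻烦的话,也可以用个小循环批量 pull 并打 tag(示意写法,镜像列表和上面一致):
for img in kube-apiserver:v1.14.1 kube-controller-manager:v1.14.1 kube-scheduler:v1.14.1 kube-proxy:v1.14.1 coredns:1.3.1 pause:3.1 etcd:3.3.10; do
  # 阿里云仓库里的 tag 是"名字+版本号"(去掉冒号和 v 前缀)
  alitag=$(echo "$img" | sed 's/:v\{0,1\}//')
  docker pull registry.cn-hangzhou.aliyuncs.com/jdccie-rgs/kubenetes:$alitag
  docker tag  registry.cn-hangzhou.aliyuncs.com/jdccie-rgs/kubenetes:$alitag k8s.gcr.io/$img
done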

 

然后执行即可
kubeadm reset
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
kubeadm join 192.168.10.199:6443 --token 4gzmbk.2dlkrzgwjy4gseq9   \
  --discovery-token-ca-cert-hash sha256:37b9f9957e0c8dc00aa3f9445881433f4241a3bd6d5966b8a98e9a58ec71862b
 
 
—————–报错2  token过期

[[email protected] ~]# kubeadm join 192.168.10.199:6443 --token 4gzmbk.2dlkrzgwjy4gseq9 \
>     --discovery-token-ca-cert-hash sha256:37b9f9957e0c8dc00aa3f9445881433f4241a3bd6d5966b8a98e9a58ec71862b
[preflight] Running pre-flight checks
[discovery] Trying to connect to API Server "192.168.10.199:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.10.199:6443"
[discovery] Requesting info from "https://192.168.10.199:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.10.199:6443"
[discovery] Successfully established connection with API Server "192.168.10.199:6443"
[join] Reading configuration from the cluster...
[join] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
unable to fetch the kubeadm-config ConfigMap: failed to get config map: Unauthorized
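原文这里没贴出处理过程;按报错标题看是 token 过期,思路和后面"报错7"一样:在 master 上重新生成 token 再 join(示例):
kubeadm token create
kubeadm token list
#用新 token 替换 join 命令里的 --token 再执行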

 

 

——————报错3  kubelet端口被占用

分析,以前安装过其它版本清理后即可。

[[email protected] ~]# kubeadm join 192.168.10.199:6443 --token h8py6g.eoxih97bqekr7249     --discovery-token-ca-cert-hash sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
[preflight] Running pre-flight checks
    [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
error execution phase preflight: [preflight] Some fatal errors occurred:
    [ERROR Port-10250]: Port 10250 is in use
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
[[email protected] ~]#


—————报错3   卡住

[[email protected] ~]# kubeadm join 192.168.10.199:6443 --token h8py6g.eoxih97bqekr7249     --discovery-token-ca-cert-hash sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
[preflight] Running pre-flight checks
    [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
error execution phase preflight: couldn't validate the identity of the API Server: abort connecting to API servers after timeout of 5m0s

[[email protected] ~]# rpm -qa |grep kube
kubectl-1.13.3-0.x86_64
kubernetes-cni-0.7.5-0.x86_64
kubelet-1.14.1-0.x86_64
kubeadm-1.14.1-0.x86_64

 

解决:

# Install Docker CE
## Set up the repository
### Install required packages.
yum install yum-utils device-mapper-persistent-data lvm2

### Add Docker repository.
yum-config-manager \
  --add-repo \
  https://download.docker.com/linux/centos/docker-ce.repo

## Install Docker CE.
yum update && yum install docker-ce-18.06.2.ce

## Create /etc/docker directory.
mkdir /etc/docker

# Setup daemon.
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF

mkdir -p /etc/systemd/system/docker.service.d

# Restart Docker
systemctl daemon-reload
systemctl restart docker

 

———-报错4

[[email protected] ~]# kubeadm join 192.168.10.199:6443 --token h8py6g.eoxih97bqekr7249    --discovery-token-ca-cert-hash sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
[preflight] Running pre-flight checks
    [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
    [WARNING Hostname]: hostname "k8s-node3" could not be reached
    [WARNING Hostname]: hostname "k8s-node3": lookup k8s-node3 on 192.168.10.66:53: no such host

分析  apiserver 找不到对应的hostname 修改每台node hosts

192.168.10.73 HOST1
192.168.10.73 host1
192.168.10.74 HOST2
192.168.10.74 host2
192.168.10.72 HOST0
192.168.10.72 host0
192.168.10.69 k8s-node1
192.168.10.68 k8s-node3
192.168.10.71 k8s-node2

———–报错5

[[email protected] ~]# kubeadm join 192.168.10.199:6443 --token h8py6g.eoxih97bqekr7249    --discovery-token-ca-cert-hash sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
    [ERROR Swap]: running with swap on is not supported. Please disable swap
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
[[email protected] ~]# vim /etc/fstab
[[email protected] ~]#

关闭 swap

swapoff -a

 

并修改/etc/fstab
#/dev/mapper/centos-swap swap                    swap    defaults        0 0

————-报错6

systemctl status kubelet

Apr 19 06:16:39 k8s-node3 systemd[1]: Unit kubelet.service entered failed state.
Apr 19 06:16:39 k8s-node3 systemd[1]: kubelet.service failed.
[[email protected] ~]# systemctl restart kubelet
[[email protected] ~]# systemctl status kubelet
● kubelet.service – kubelet: The Kubernetes Node Agent
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
  Drop-In: /usr/lib/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: activating (auto-restart) (Result: exit-code) since Fri 2019-04-19 06:16:50 EDT; 1s ago
     Docs: https://kubernetes.io/docs/
  Process: 2588 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS (code=exited, status=255)
Main PID: 2588 (code=exited, status=255)

查看日志
tail -f /var/log/messages

config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/config.yaml", error: open /var/lib/kubelet/config.yaml: no such file or directory
Apr 19 06:18:54 k8s-node3 systemd: kubelet.service: main process exited, code=exited, status=255/n/a
Apr 19 06:18:54 k8s-node3 systemd: Unit kubelet.service entered failed state.
Apr 19 06:18:54 k8s-node3 systemd: kubelet.service failed.

解决:这个是因为 kubeadm  join 还没有执行导致的

——————-报错7

[[email protected] ~]# kubeadm join 192.168.10.199:6443 --token h8py6g.eoxih97bqekr7249     --discovery-token-ca-cert-hash sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
[preflight] Running pre-flight checks  卡住

解决:大概率是因为 token 或 hash 不对,找到正确的值就可以了

1.
kubeadm token list
2.
kubeadm token create
3.
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | \
   openssl dgst -sha256 -hex | sed 's/^.* //'
4.拿到第2、3步的信息拼出命令,再执行即可

kubeadm join --token <token> <master-ip>:<master-port> --discovery-token-ca-cert-hash sha256:<hash>
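补充:如果你的 kubeadm 版本支持,也可以在 master 上直接打印出完整的 join 命令,省去手工拼接:
kubeadm token create --print-join-command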

 

————报错8  NotReady

[[email protected] ~]# kubectl get nodes
NAME        STATUS     ROLES    AGE     VERSION
host0       Ready      master   2d4h    v1.14.1
host1       Ready      master   2d4h    v1.14.1
host2       Ready      master   2d4h    v1.14.1
k8s-node1   Ready      <none>   2d3h    v1.14.1
k8s-node2   NotReady   <none>   23s     v1.14.1
k8s-node3   NotReady   <none>   8m53s   v1.14.1

看日志,网络插件没装.理论上是自动安装的,那么就可能是镜像没有的原因
ile waiting for connection (Client.Timeout exceeded while awaiting headers)"
Apr 19 07:50:12 k8s-node2 kubelet: W0419 07:50:12.642892   15468 cni.go:213] Unable to update cni config: No networks found in /etc/cni/net.d
Apr 19 07:50:12 k8s-node2 kubelet: E0419 07:50:12.796773   15468 kubelet.go:2170] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Apr 19 07:50:17 k8s-node2 kubelet: W0419 07:50:17.643069   15468 cni.go:213] Unable to update cni config: No networks found in /etc/cni/net.d
Apr 19 07:50:17 k8s-node2 kubelet: E0419 07:50:17.797784   15468 kubelet.go:2170] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Apr 19 07:50:22 k8s-node2 kubelet: W0419 07:50:22.643351   15468 cni.go:213] Unable to update cni config: No networks found in /etc/cni/net.d
Apr 19 07:50:22 k8s-node2 kubelet: E0419 07:50:22.798903   15468 kubelet.go:2170] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized

1.缺文件
/etc/cni/net.d

从正常的node上 复制过来
[[email protected] net.d]# ll
total 8
-rw-r--r-- 1 root root  528 Apr 17 04:50 10-calico.conflist
-rw-r--r-- 1 root root 2565 Apr 17 04:50 calico-kubeconfig
[[email protected] net.d]#
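复制可以用 scp(示例,假设正常的 node 是 k8s-node1,用 root 登录,路径按实际调整):
mkdir -p /etc/cni/net.d
scp -r root@k8s-node1:/etc/cni/net.d/* /etc/cni/net.d/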

systemctl restart docker
systemctl restart kubelet
即可

 

查看calico.yaml配置
[[email protected] script]# grep image calico.yaml
          image: calico/cni:v3.6.1
          image: calico/cni:v3.6.1
          image: calico/node:v3.6.1
          image: calico/kube-controllers:v3.6.1

下载这个镜像
          image: calico/kube-controllers:v3.6.1
         
    docker pull calico/kube-controllers:v3.6.1
    docker pull calico/node:v3.6.1
    docker pull calico/cni:v3.6.1

—————报错9

pull k8s.gcr.io/pause:3.1 报错
pull k8s.gcr.io/kube-proxy:v1.14.1
Apr 19 08:06:34 k8s-node3 kubelet: E0419 08:06:34.051365  
14060 pod_workers.go:190] Error syncing pod 4ff2b462-629b-11e9-a41a-0050569642b8
("kube-proxy-cw5sf_kube-system(4ff2b462-629b-11e9-a41a-0050569642b8)"),
skipping: failed to "CreatePodSandbox" for
"kube-proxy-cw5sf_kube-system(4ff2b462-629b-11e9-a41a-0050569642b8)"
with CreatePodSandboxError: "CreatePodSandbox for pod
\"kube-proxy-cw5sf_kube-system(4ff2b462-629b-11e9-a41a-0050569642b8)\"
 
failed: rpc error: code = Unknown desc = failed pulling image \"k8s.gcr.io/pause:3.1\":
Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while
waiting for connection (Client.Timeout exceeded while awaiting headers)"     

下载镜像

docker pull registry.cn-shanghai.aliyuncs.com/jdccie-rgs/kubenetes:pause3.1
docker tag  registry.cn-shanghai.aliyuncs.com/jdccie-rgs/kubenetes:pause3.1 k8s.gcr.io/pause:3.1
docker pull registry.cn-shanghai.aliyuncs.com/jdccie-rgs/kubenetes:kube-proxyv1.14.1
docker tag registry.cn-shanghai.aliyuncs.com/jdccie-rgs/kubenetes:kube-proxyv1.14.1 k8s.gcr.io/kube-proxy:v1.14.1

systemctl restart kubelet

恢复正常

一看必会系列:k8s 练习24 监控方案之:heapster 1.6.0+influxdb+grafana详解 更新


最新版本  heapster 1.6.0
https://github.com/kubernetes-retired/heapster/releases

HOST* 为master
192.168.10.73 HOST1
192.168.10.73 host1
192.168.10.74 HOST2
192.168.10.74 host2
192.168.10.72 HOST0
192.168.10.72 host0
192.168.10.69 k8s-node1

以下所有操作在任意一台master上执行

下载
cd /data/soft

wget https://github.com/kubernetes-retired/heapster/archive/v1.6.0-beta.1.tar.gz

tar -zxvf v1.6.0-beta.1.tar.gz

cd /data/soft/heapster-1.6.0-beta.1/deploy/kube-config/influxdb

常规操作,列出镜像,先下载
参考  k8s 练习6 利用阿里云下载google k8s镜像
[[email protected] influxdb]# grep image *.yaml
grafana.yaml:        image: k8s.gcr.io/heapster-grafana-amd64:v5.0.4
heapster.yaml:        image: k8s.gcr.io/heapster-amd64:v1.5.4
heapster.yaml:        imagePullPolicy: IfNotPresent
influxdb.yaml:        image: k8s.gcr.io/heapster-influxdb-amd64:v1.5.2

以下4个之后都要 create
[[email protected] influxdb]# ll |grep -v bak
total 28
-rw-r--r--. 1 root root 2311 Apr 18 17:21 grafana.yaml
-rw-r--r--. 1 root root  381 Apr 18 18:25 heapster_user.yaml
-rw-r--r--. 1 root root 1208 Apr 18 19:20 heapster.yaml
-rw-r--r--. 1 root root 1004 Apr 18 17:17 influxdb.yaml

部分配置需要进行修改
heapster_user.yaml  这个内容在最后

heapster.yaml 将27行改成
28         - --source=kubernetes:https://kubernetes.default?kubeletHttps=true&kubeletPort=10250&insecure=true
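不想手工改的话,也可以用 sed 直接替换这一行(示意写法,注意替换串里的 & 要转义):
sed -i 's#--source=kubernetes:https://kubernetes.default#--source=kubernetes:https://kubernetes.default?kubeletHttps=true\&kubeletPort=10250\&insecure=true#' heapster.yaml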

grafana.yaml  配置增加 nodePort 下面的对应行号,为方便外面进行访问
               
62 spec:
63   # In a production setup, we recommend accessing Grafana through an external Loadbalancer
64   # or through a public IP.
65   # type: LoadBalancer
66   # You could also use NodePort to expose the service at a randomly-generated port
67   # add
68   type: NodePort
69   ports:
70   - port: 80
71     targetPort: 3000
72     #add
73     nodePort: 30004

#按顺序创建
kubectl create -f heapster_user.yaml
kubectl create -f influxdb.yaml
kubectl create -f heapster.yaml
kubectl create -f grafana.yaml

#查看状态
[[email protected] influxdb]#  kubectl get svc -n kube-system
NAME                   TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                  AGE
heapster               ClusterIP   10.102.35.106    <none>        80/TCP                   62m
kube-dns               ClusterIP   10.96.0.10       <none>        53/UDP,53/TCP,9153/TCP   29h
kubernetes-dashboard   NodePort    10.103.100.103   <none>        443:30001/TCP            26h
monitoring-grafana     NodePort    10.97.150.187    <none>        80:30004/TCP             3h1m
monitoring-influxdb    NodePort    10.100.152.72    <none>        8086:30003/TCP           3h5m

[[email protected] influxdb]# kubectl get pods --namespace=kube-system |tail
kubernetes-dashboard-5694f87d87-dht4w      1/1     Running   0          26h
monitoring-grafana-658976d65f-95swp        1/1     Running   0          3h2m
monitoring-influxdb-866db5f944-mt42h       1/1     Running   0          3h6m
[[email protected] influxdb]#

————grafana配置
下载两个模版
https://grafana.com/dashboards/3649
https://grafana.com/dashboards/3646

访问地址,端口为前面 开放 nodePort 的端口 IP 为node IP

本例为 k8s-node1 的IP
[[email protected] influxdb]# kubectl get pods --namespace=kube-system -o wide|tail -3
kubernetes-dashboard-5694f87d87-dht4w      1/1     Running   0          26h     10.168.36.73    k8s-node1   <none>           <none>
monitoring-grafana-658976d65f-95swp        1/1     Running   0          3h39m   10.168.36.77    k8s-node1   <none>           <none>
monitoring-influxdb-866db5f944-mt42h       1/1     Running   0          3h43m   10.168.36.75    k8s-node1   <none>           <none>

http://192.168.10.69:30004/plugins

点 home -> import dashboard -> upload 上面下载的 json 即可

具体看截图

————-报错1

heapster 看不到图像,查日志发现

kubectl log pod.heapster-5d4bf58946-gwk5d  -n kube-system
E0418 10:06:48.946602       1 reflector.go:190] k8s.io/heapster/metrics/heapster.go:328: Failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:heapster" cannot list resource "pods" in API group "" at the cluster scope
E0418 10:06:48.947773       1 reflector.go:190] k8s.io/heapster/metrics/processors/namespace_based_enricher.go:89: Failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:heapster" cannot list resource "namespaces" in API group "" at the cluster scope
E0418 10:06:49.945074       1 reflector.go:190] k8s.io/heapster/metrics/util/util.go:30: Failed to list *v1.Node: nodes is forbidden: User "system:serviceaccount:kube-system:heapster" cannot list resource "nodes" in API group "" at the cluster scope
E0418 10:06:49.945835       1 reflector.go:190] k8s.io/heapster/metrics/util/util.go:30: Failed to list *v1.Node: nodes is forbidden: User "system:serviceaccount:kube-system:heapster" cannot list resource "nodes" in API group "" at the cluster scope

分析:基本可以断定是没有权限访问,所以需要配置一个管理用户

配置 集群管理帐户
vim  heapster_user.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: heapster
  name: heapster
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: heapster
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: heapster
  namespace: kube-system

应用帐户
[[email protected] influxdb]# kubectl apply -f heapster_user.yaml
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
serviceaccount/heapster configured
clusterrolebinding.rbac.authorization.k8s.io/heapster created

kubectl delete -f heapster.yaml

kubectl apply -f heapster.yaml

解决
[[email protected] influxdb]# kubectl get pod -n kube-system |grep heap
heapster-5d4bf58946-wblhb                  1/1     Running   0          2m32s
[[email protected] influxdb]#

查日志原故障解决

[[email protected] influxdb]# kubectl log heapster-5d4bf58946-wblhb  -n kube-system
log is DEPRECATED and will be removed in a future version. Use logs instead.
I0418 10:27:04.966389       1 heapster.go:78] /heapster --source=kubernetes:https://kubernetes.default --sink=influxdb:http://monitoring-influxdb.kube-system.svc:8086
I0418 10:27:04.966455       1 heapster.go:79] Heapster version v1.5.4
I0418 10:27:04.966673       1 configs.go:61] Using Kubernetes client with master "https://kubernetes.default" and version v1
I0418 10:27:04.966693       1 configs.go:62] Using kubelet port 10255
I0418 10:27:04.980760       1 influxdb.go:312] created influxdb sink with options: host:monitoring-influxdb.kube-system.svc:8086 user:root db:k8s
I0418 10:27:04.980793       1 heapster.go:202] Starting with InfluxDB Sink
I0418 10:27:04.980799       1 heapster.go:202] Starting with Metric Sink
I0418 10:27:04.989898       1 heapster.go:112] Starting heapster on port 8082
E0418 10:28:05.006213       1 manager.go:101] Error in scraping containers from kubelet:192.168.10.72:10255: failed to get all container stats from Kubelet URL "http://192.168.10.72:10255/stats/container/": Post http://192.168.10.72:10255/stats/container/: dial tcp 192.168.10.72:10255: getsockopt: connection refused
E0418 10:28:05.011846       1 manager.go:101] Error in scraping containers from kubelet:192.168.10.73:10255: failed to get all container stats from Kubelet URL "http://192.168.10.73:10255/stats/container/": Post http://192.168.10.73:10255/stats/container/: dial tcp 192.168.10.73:10255: getsockopt: connection refused
E0418 10:28:05.021833       1 manager.go:101] Error in scraping containers from kubelet:192.168.10.74:10255: failed to get all container stats from Kubelet URL "http://192.168.10.74:10255/stats/container/": Post http://192.168.10.74:10255/stats/container/: dial tcp 192.168.10.74:10255: getsockopt: connection refused

—————-报错2

[[email protected] influxdb]# kubectl log heapster-5d4bf58946-wblhb  -n kube-system
log is DEPRECATED and will be removed in a future version. Use logs instead.

E0418 10:28:05.006213       1 manager.go:101] Error in scraping containers from kubelet:192.168.10.72:10255: failed to get all container stats from Kubelet URL "http://192.168.10.72:10255/stats/container/": Post http://192.168.10.72:10255/stats/container/: dial tcp 192.168.10.72:10255: getsockopt: connection refused
E0418 10:28:05.011846       1 manager.go:101] Error in scraping containers from kubelet:192.168.10.73:10255: failed to get all container stats from Kubelet URL "http://192.168.10.73:10255/stats/container/": Post http://192.168.10.73:10255/stats/container/: dial tcp 192.168.10.73:10255: getsockopt: connection refused
E0418 10:28:05.021833       1 manager.go:101] Error in scraping containers from kubelet:192.168.10.74:10255: failed to get all container stats from Kubelet URL "http://192.168.10.74:10255/stats/container/": Post http://192.168.10.74:10255/stats/container/: dial tcp 192.168.10.74:10255: getsockopt: connection refused

解决

vim heapster.yaml
        #将这条修改为下面一条 - --source=kubernetes:https://kubernetes.default
        - --source=kubernetes:https://kubernetes.default?kubeletHttps=true&kubeletPort=10250&insecure=true
       
kubectl delete -f heapster.yaml
kubectl apply -f heapster.yaml
kubectl get pod -n kube-system |grep heap
kubectl log heapster-5d9575b66b-jv9t5  -n kube-system

验证
[[email protected] influxdb]# kubectl log heapster-5d9575b66b-jv9t5  -n kube-system
log is DEPRECATED and will be removed in a future version. Use logs instead.
I0418 11:20:25.379427       1 heapster.go:78] /heapster --source=kubernetes:https://kubernetes.default?kubeletHttps=true&kubeletPort=10250&insecure=true --sink=influxdb:http://monitoring-influxdb.kube-system.svc:8086
I0418 11:20:25.379473       1 heapster.go:79] Heapster version v1.5.4
I0418 11:20:25.379710       1 configs.go:61] Using Kubernetes client with master "https://kubernetes.default" and version v1
I0418 11:20:25.379729       1 configs.go:62] Using kubelet port 10250
I0418 11:20:25.392192       1 influxdb.go:312] created influxdb sink with options: host:monitoring-influxdb.kube-system.svc:8086 user:root db:k8s
I0418 11:20:25.392219       1 heapster.go:202] Starting with InfluxDB Sink
I0418 11:20:25.392226       1 heapster.go:202] Starting with Metric Sink
I0418 11:20:25.399904       1 heapster.go:112] Starting heapster on port 8082
I0418 11:21:05.137427       1 influxdb.go:274] Created database "k8s" on influxDB server at "monitoring-influxdb.kube-system.svc:8086"

 

参考
https://blog.csdn.net/qq_24513043/article/details/82460759


一看必会系列:k8s 练习23 多master高用集群1.14.1增加dashboard 1.10.1


 

1.安装 kubernetes-dashboard

Images 列表,按前几章的方式进行获取
k8s.gcr.io/kubernetes-dashboard-arm64:v1.10.1
k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1
k8s.gcr.io/kubernetes-dashboard-ppc64le:v1.10.1
k8s.gcr.io/kubernetes-dashboard-arm:v1.10.1
k8s.gcr.io/kubernetes-dashboard-s390x:v1.10.1

先下载 再apply
wget https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml

修改配置,这样就可以直接访问,本文提供对应行数

149 # ------------------- Dashboard Service ------------------- #
150
151 kind: Service
152 apiVersion: v1
153 metadata:
154   labels:
155     k8s-app: kubernetes-dashboard
156   name: kubernetes-dashboard
157   namespace: kube-system
158 spec:
159   type: NodePort #加
160   ports:
161     - port: 443
162       targetPort: 8443
163       nodePort: 30001 #加,提供对外访问

随便在哪个master上执行
wget https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml

kubectl apply -f kubernetes-dashboard.yaml
过程
secret/kubernetes-dashboard-certs created
serviceaccount/kubernetes-dashboard created
role.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
deployment.apps/kubernetes-dashboard created
service/kubernetes-dashboard created

 

随便在哪个master上执行查看,确认 kube-dashboard 运行在哪个node
#这里运行在node1上面
[[email protected] script]# kubectl get pod -n kube-system -o wide |grep dash
kubernetes-dashboard-5694f87d87-8295d      1/1     Running   0          5m22s   10.168.36.72    k8s-node1   <none>           <none>
[[email protected] script]#

2,登陆 kube-dashboard

#创建管理用户
admin-user.yaml 此内容不用改直接复制
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard  #要管理的app和 上面155行配置一致
  name: admin
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin
  namespace: kube-system

 

[[email protected] script]# vim admin-user.yaml
[[email protected] script]# kubectl apply -f admin-user.yaml   #执行
serviceaccount/admin created
clusterrolebinding.rbac.authorization.k8s.io/admin created
[[email protected] script]# kubectl describe serviceaccount admin -n kube-system
Name:                admin
Namespace:           kube-system
Mountable secrets:   admin-token-8z8rt
Tokens:              admin-token-8z8rt
-----略----- 下面就是登陆token
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi10b2tlbi04ejhydCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJhZG1pbiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjVhZDVkMTU5LTYxMDItMTFlOS1hNDFhLTAwNTA1Njk2NDJiOCIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTphZG1pbiJ9.LotrTsWwExcOJ3WZcEIke9LeWI-XqHV89VaLi_LJ69qOe9UuTxrQpnQ3owcZ1Y7Q0OLOHi90o4sa2y89gzzPSRI2_jT3FWPERFyWEb0hn-9cFmTDLfURboUiWDJbTL4p2z5ul990eFdIPpzyigQGbq7TFdNSUVr9YaeuHHKAr5zvzjjpsTEyXJgGP1bxido-kPnl58lYT9Qvbwt58kIh7f85uICls6Xfc16Qj2GWpjlJl4_M4P_9RVeKzFI_H3dnaloOPLkHIgjyA445qltmKvrlfT8_Fn7aRe5IIC117PcN1dYGaqBC93VTaMa2wAaeuK-OObqM31FVcBz8YJsWJw
[[email protected] script]#
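describe serviceaccount 只能看到 secret 名字;token 也可以直接从对应的 secret 里取(示例,secret 名按上面输出替换):
kubectl -n kube-system describe secret admin-token-8z8rt | grep ^token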

3.打开页面登陆    https://node1 IP:nodePort

https://192.168.10.69:30001/#!/overview?namespace=default

成功

 


————知识扩展
https://github.com/kubernetes/dashboard/wiki/Installation
https://github.com/kubernetes/dashboard/wiki/Accessing-Dashboard—1.7.X-and-above

一看必会系列:k8s 练习20 使用kubeadm创建单master集群1.14.1


前提
systemctl stop firewalld
systemctl disable firewalld
swapoff -a
#同时永久禁掉swap分区,打开如下文件注释掉swap那一行
sudo vi /etc/fstab

#国内写法
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes

systemctl enable --now kubelet

1.modprobe br_netfilter

2.
cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
3.
sysctl --system

 

 

#如果安装错误,可以用这个命令重置
kubeadm reset

iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X

1.先下载相关镜像

查看需要哪些镜像
[[email protected] ~]# grep image /etc/kubernetes/manifests/*
/etc/kubernetes/manifests/etcd.yaml:    image: k8s.gcr.io/etcd:3.3.10
/etc/kubernetes/manifests/etcd.yaml:    imagePullPolicy: IfNotPresent
/etc/kubernetes/manifests/kube-apiserver.yaml:    image: k8s.gcr.io/kube-apiserver:v1.14.1
/etc/kubernetes/manifests/kube-apiserver.yaml:    imagePullPolicy: IfNotPresent
/etc/kubernetes/manifests/kube-controller-manager.yaml:    image: k8s.gcr.io/kube-controller-manager:v1.14.1
/etc/kubernetes/manifests/kube-controller-manager.yaml:    imagePullPolicy: IfNotPresent
/etc/kubernetes/manifests/kube-scheduler.yaml:    image: k8s.gcr.io/kube-scheduler:v1.14.1
/etc/kubernetes/manifests/kube-scheduler.yaml:    imagePullPolicy: IfNotPresent

解决:参照k8s 练习 x利用阿里云下载google k8s镜像进行下载

docker pull registry.cn-hangzhou.aliyuncs.com/jdccie-rgs/kubenetes:kube-apiserver1.14.1
docker pull registry.cn-hangzhou.aliyuncs.com/jdccie-rgs/kubenetes:kube-controller-manager1.14.1   
docker pull registry.cn-hangzhou.aliyuncs.com/jdccie-rgs/kubenetes:kube-scheduler1.14.1   
docker pull registry.cn-hangzhou.aliyuncs.com/jdccie-rgs/kubenetes:kube-proxy1.14.1   
docker pull registry.cn-hangzhou.aliyuncs.com/jdccie-rgs/kubenetes:coredns1.3.1   
docker pull registry.cn-hangzhou.aliyuncs.com/jdccie-rgs/kubenetes:pause3.1
docker pull  registry.cn-hangzhou.aliyuncs.com/jdccie-rgs/kubenetes:etcd3.3.10

docker tag registry.cn-hangzhou.aliyuncs.com/jdccie-rgs/kubenetes:kube-apiserver1.14.1  k8s.gcr.io/kube-apiserver:v1.14.1
docker tag registry.cn-hangzhou.aliyuncs.com/jdccie-rgs/kubenetes:kube-controller-manager1.14.1     k8s.gcr.io/kube-controller-manager:v1.14.1
docker tag registry.cn-hangzhou.aliyuncs.com/jdccie-rgs/kubenetes:kube-scheduler1.14.1      k8s.gcr.io/kube-scheduler:v1.14.1
docker tag registry.cn-hangzhou.aliyuncs.com/jdccie-rgs/kubenetes:kube-proxy1.14.1     k8s.gcr.io/kube-proxy:v1.14.1
docker tag registry.cn-hangzhou.aliyuncs.com/jdccie-rgs/kubenetes:coredns1.3.1      k8s.gcr.io/coredns:1.3.1
docker tag registry.cn-hangzhou.aliyuncs.com/jdccie-rgs/kubenetes:pause3.1   k8s.gcr.io/pause:3.1
docker tag registry.cn-hangzhou.aliyuncs.com/jdccie-rgs/kubenetes:etcd3.3.10 k8s.gcr.io/etcd:3.3.10

2.
#初始化
kubeadm init

执行 kubeadm init 时,会先请求 https://dl.k8s.io/release/stable-1.txt 获取最新稳定的版本号,
该地址实际会跳转到 https://storage.googleapis.com/kubernetes-release/release/stable-1.txt
在写本文时此时的返回值为 v1.14.1。由于被墙无法请求该地址,为了避免这个问题,我们可以直接指定要获取的版本,执行下面的命令:

这里建议指定下 --pod-network-cidr=10.168.0.0/16,默认的可能和现有网络冲突

kubeadm init --kubernetes-version=v1.14.1 --pod-network-cidr=10.168.0.0/16

提示进行下面操作

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

如要部署网络可以用以下命令
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

使用如下命令部署calico
wget   https://docs.projectcalico.org/v3.6/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml
sed -i 's/192.168.0.0/10.168.0.0/g' calico.yaml
kubectl apply -f calico.yaml

#下面其它节点用来加入集群的命令
kubeadm join 192.168.10.72:6443 --token ptxgf1.hzulb340o8qs3npk \
    --discovery-token-ca-cert-hash sha256:a82ff8a6d7b438c3eedb065e9fb9a8e3d46146a5d6d633b35862b703f1a0a285

#具体参考 https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/#join-nodes

移除master节点上的污点(taint),让pod可以调度到master
[[email protected] script]# kubectl taint nodes --all node-role.kubernetes.io/master-
node/host0 untainted  #这个显示为正常

网段确认10.168.0.0/16
[[email protected] script]# kubectl get pod -n kube-system -o wide
NAME                                       READY   STATUS    RESTARTS   AGE     IP              NODE    NOMINATED NODE   READINESS GATES
calico-kube-controllers-5cbcccc885-5klll   1/1     Running   0          28s     10.168.150.2    host0   <none>           <none>
calico-node-4k2ph                          1/1     Running   0          28s     192.168.10.72   host0   <none>           <none>
coredns-fb8b8dccf-jjw8n                    0/1     Running   0          4m4s    10.168.150.3    host0   <none>           <none>
coredns-fb8b8dccf-nfvwt                    1/1     Running   0          4m3s    10.168.150.1    host0   <none>           <none>
etcd-host0                                 1/1     Running   0          3m2s    192.168.10.72   host0   <none>           <none>
kube-apiserver-host0                       1/1     Running   0          2m59s   192.168.10.72   host0   <none>           <none>
kube-controller-manager-host0              1/1     Running   0          3m8s    192.168.10.72   host0   <none>           <none>
kube-proxy-h8xnf                           1/1     Running   0          4m4s    192.168.10.72   host0   <none>           <none>
kube-scheduler-host0                       1/1     Running   0          2m58s   192.168.10.72   host0   <none>           <none>

 

以下是过程
[[email protected] script]# wget   https://docs.projectcalico.org/v3.6/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml
[[email protected] script]# sed -i 's/192.168.0.0/10.168.0.0/g' calico.yaml
[[email protected] script]# kubectl apply -f calico.yaml
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.extensions/calico-node created
serviceaccount/calico-node created
deployment.extensions/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
[[email protected] script]#

————–报错coredns Pending

[[email protected] script]# kubectl get pod
No resources found.
[[email protected] script]# kubectl get pod -n kube-system
NAME                            READY   STATUS    RESTARTS   AGE
coredns-fb8b8dccf-24grq         0/1     Pending   0          3m35s  #没有部署网络
coredns-fb8b8dccf-7zxw4         0/1     Pending   0          3m35s
etcd-host0                      1/1     Running   0          2m42s
kube-apiserver-host0            1/1     Running   0          2m45s
kube-controller-manager-host0   1/1     Running   0          2m30s
kube-proxy-rdp2t                1/1     Running   0          3m35s
kube-scheduler-host0            1/1     Running   0          2m20s
[[email protected] script]#

部署网络即可
用以下命令
wget   https://docs.projectcalico.org/v3.6/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml
sed -i 's/192.168.0.0/10.168.0.0/g' calico.yaml
kubectl apply -f calico.yaml


kubectl apply -f \
> https://docs.projectcalico.org/v3.6/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml

查看网段
[[email protected] script]# ip a | tail -4
9: [email protected]: <NOARP,UP,LOWER_UP> mtu 1440 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ipip 0.0.0.0 brd 0.0.0.0
    inet 10.168.150.0/32 brd 10.168.150.0 scope global tunl0
       valid_lft forever preferred_lft forever
[[email protected] script]#

———–知识扩展1
1
Quickstart for Calico on Kubernetes

https://docs.projectcalico.org/v3.6/getting-started/kubernetes/

———–知识扩展2
2
token 重新创建,默认只有24小时,超过后要加入集群就需要重建token

kubeadm token create

输出类似值  5didvk.d09sbcov8ph2amjw

#查看token
kubeadm token list

 

3.再获取hash值
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | \
   openssl dgst -sha256 -hex | sed 's/^.* //'
输出类似值 8cb2de97839780a412b93877f8507ad6c94f73add17d5d7058e91741c9d5ec78

4.然后通过命令加入
kubeadm join --token <token> <master-ip>:<master-port> --discovery-token-ca-cert-hash sha256:<hash>
记得替换相应值

———–知识扩展3
如果需要在集群master以外的机器上控制集群
需要在其它机器上进行以下配置

1 复制admin.conf到所需的机器
scp [email protected]<master ip>:/etc/kubernetes/admin.conf .
2  用以下命令调用
kubectl --kubeconfig ./admin.conf get nodes

———–知识扩展4
代理 apiserver 到本地
如果要从集群个连接apiserver 可以使用kubectl proxy

1
scp [email protected]<master ip>:/etc/kubernetes/admin.conf .
2
kubectl --kubeconfig ./admin.conf proxy

3.在本地访问 http://localhost:8001/api/v1
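代理起来后可以在本地简单验证一下(示例):
curl http://localhost:8001/api/v1
curl http://localhost:8001/api/v1/namespaces/kube-system/pods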

———–知识扩展5
要撤消kubeadm所做的事情,首先应该排空节点并确保节点在关闭之前是空的。

1 ,运行:

kubectl drain <node name> --delete-local-data --force --ignore-daemonsets
kubectl delete node <node name>

2  节点全部移除后
kubeadm reset

3. 清除iptables
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X

ipvsadm -C

4. 如果想重新开始,那就从头再来即可

kubeadm init or kubeadm join

———–知识扩展6
如何维护集群
https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/

一看必会系列:k8s 练习10 日志收集 elk fluentd实战


从官方下载对应yaml
https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/fluentd-elasticsearch

-rw-r--r--  1 root root      382 Apr  3 23:28 es-service.yaml
-rw-r--r--  1 root root     2900 Apr  4 04:15 es-statefulset.yaml
-rw-r--r--  1 root root    16124 Apr  3 23:28 fluentd-es-configmap.yaml
-rw-r--r--  1 root root     2717 Apr  4 06:19 fluentd-es-ds.yaml
-rw-r--r--  1 root root     1166 Apr  4 05:46 kibana-deployment.yaml
-rw-r--r--  1 root root      272 Apr  4 05:27 kibana-ingress.yaml  #这个在后面
-rw-r--r--  1 root root      354 Apr  3 23:28 kibana-service.yaml

特别注意,一定要按照 yaml 里指定的镜像和版本来下载 image,不然会有各种错
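下载前可以先把所有 yaml 里用到的镜像列出来核对一遍(示例命令):
grep -h 'image:' *.yaml | sort -u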

先执行这个
kubectl create -f fluentd-es-configmap.yaml
configmap/fluentd-es-config-v0.2.0 created

再执行
[[email protected] elk]# kubectl create -f fluentd-es-ds.yaml
serviceaccount/fluentd-es created
clusterrole.rbac.authorization.k8s.io/fluentd-es created
clusterrolebinding.rbac.authorization.k8s.io/fluentd-es created
daemonset.apps/fluentd-es-v2.5.0 created

[[email protected] elk]# kubectl get pod -n kube-system |grep flu
fluentd-es-v2.5.0-hjzw8                 1/1     Running   0          19s
fluentd-es-v2.5.0-zmlm2                 1/1     Running   0          19s
[[email protected] elk]#

再启动elasticsearch
[[email protected] elk]# kubectl create -f es-statefulset.yaml
serviceaccount/elasticsearch-logging created
clusterrole.rbac.authorization.k8s.io/elasticsearch-logging created
clusterrolebinding.rbac.authorization.k8s.io/elasticsearch-logging created
statefulset.apps/elasticsearch-logging created
[[email protected] elk]# kubectl create -f es-service.yaml
service/elasticsearch-logging created
[[email protected] elk]#

[[email protected] elk]# kubectl get pod -n kube-system |grep elas
elasticsearch-logging-0                 1/1     Running   0          11s
elasticsearch-logging-1                 1/1     Running   0          8s
[[email protected] elk]#

再启动 kibana
kubectl create -f kibana-deployment.yaml
kubectl get pod -n kube-system
kubectl create -f kibana-service.yaml

验证
[[email protected] elk]# kubectl get pod,svc -n kube-system |grep kiba
pod/kibana-logging-65f5b98cf6-2p8cj         1/1     Running   0          46s

service/kibana-logging          ClusterIP   10.100.152.68   <none>        5601/TCP        21s
[[email protected] elk]#

查看集群信息
[[email protected] elk]# kubectl cluster-info
Elasticsearch is running at https://192.168.10.68:6443/api/v1/namespaces/kube-system/services/elasticsearch-logging/proxy
Kibana is running at https://192.168.10.68:6443/api/v1/namespaces/kube-system/services/kibana-logging/proxy

因为只开了容器端口,在外部机器上是无法访问的。有以下几种方法来访问

1.开proxy  在master上开
#这玩意是前台执行的,退出后就没了。--address 是master的IP,实际上哪台上面都行
kubectl proxy --address='192.168.10.68' --port=8085 --accept-hosts='^*$'

如需后台运行,使用 nohup kubectl proxy --address='192.168.10.68' --port=8085 --accept-hosts='^*$' &
在master上查看端口是否开启
netstat -ntlp |grep 80
tcp        0      0 192.168.10.68:2380      0.0.0.0:*               LISTEN      8897/etcd          
tcp        0      0 192.168.10.68:8085      0.0.0.0:*               LISTEN      16718/kubectl  

直接浏览器验证
http://192.168.10.68:8085/api/v1/namespaces/kube-system/services/kibana-logging/proxy/app/kibana#/home/tutorial_directory/sampleData?_g=()
出页面即正常

进去kibana后操作出图
1.点击左边management
2. 建立index Create index pattern
3. 输入 * 查看具体的日志名
4.  例如 logstash-2019.03.25 ,改成logstash-* 下一步到完成
4.1 一定要把那个星星点一下,把 logstash-* 设为默认 index
5. discover 就可以看到日志了

验证结果,以下为正常,没有https 要注意
curl http://192.168.10.68:8085/api/v1/namespaces/kube-system/services/elasticsearch-logging/proxy/
{
  "name" : "bc30CKf",
  "cluster_name" : "docker-cluster",
  "cluster_uuid" : "C3oV5BnMTByxYltuuYjTjg",
  "version" : {
    "number" : "6.7.0",
    "build_flavor" : "default",
    "build_type" : "docker",
    "build_hash" : "8453f77",
    "build_date" : "2019-03-21T15:32:29.844721Z",
    "build_snapshot" : false,
    "lucene_version" : "7.7.0",
    "minimum_wire_compatibility_version" : "5.6.0",
    "minimum_index_compatibility_version" : "5.0.0"
  },
  "tagline" : "You Know, for Search"
}

方法二:

[[email protected] elk]# kubectl get ingress -n kube-system -o wide
NAME             HOSTS           ADDRESS   PORTS   AGE
kibana-logging   elk.ccie.wang             80      6m42s

可以是可以,但是会报 404,这个需要再查下问题在哪

创建ingress
配置文件如下 kibana-ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kibana-logging-ingress
  namespace: kube-system
spec:
  rules:
  - host: elk.ccie.wang
    http:
      paths:
      - path: /
        backend:
          serviceName: kibana-logging
          servicePort: 5601

kubectl create -f kibana-ingress.yaml
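
在 DNS 还没配好之前,可以直接带 Host 头去测试这个 ingress(假设 ingress controller 已按后面 ingress-nginx 一节的方式用 hostNetwork 监听了 node 的 80 端口,节点 IP 仅为示例):

curl -H "Host: elk.ccie.wang" http://192.168.10.69/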

 

验证文件信息
[[email protected] elk]# kubectl get -f fluentd-es-ds.yaml
NAME                        SECRETS   AGE
serviceaccount/fluentd-es   1         85s

NAME                                               AGE
clusterrole.rbac.authorization.k8s.io/fluentd-es   85s

NAME                                                      AGE
clusterrolebinding.rbac.authorization.k8s.io/fluentd-es   85s

NAME                               DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/fluentd-es-v2.5.0   2         2         2       2            2           <none>          85s
[[email protected] elk]#

----------报错

[[email protected] elk]# kubectl get pod -n kube-system |grep elas
elasticsearch-logging-0                 0/1     ErrImagePull   0          71s
[[email protected] elk]#
拉镜像报错

      containers:
      #将下面改成
      #- image: gcr.io/fluentd-elasticsearch/elasticsearch:v6.6.1
      - image: reg.ccie.wang/library/elk/elasticsearch:6.7.0
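
如果内网仓库里还没有这个镜像,可以先在能联网的机器上拉下来再推到私有仓库(镜像源地址仅为假设示例,以 yaml 里实际引用的为准):

docker pull docker.elastic.co/elasticsearch/elasticsearch:6.7.0
docker tag  docker.elastic.co/elasticsearch/elasticsearch:6.7.0 reg.ccie.wang/library/elk/elasticsearch:6.7.0
docker push reg.ccie.wang/library/elk/elasticsearch:6.7.0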

 

 

 

—————-知识扩展
1. fluentd
怎么使用这个镜像

docker run -d -p 24224:24224 -p 24224:24224/udp -v /data:/fluentd/log fluent/fluentd:v1.3-debian-1

默认的配置如下
监听端口 24224
存储标记为 docker.** 到 /fluentd/log/docker.*.log (and symlink docker.log)
存储其它日志到 /fluentd/log/data.*.log (and symlink data.log)

当然也能自定义参数

docker run -ti --rm -v /path/to/dir:/fluentd/etc fluentd -c /fluentd/etc/配置文件 -v

第一个 -v 挂载 /path/to/dir 到容器里的 /fluentd/etc

-c 告诉 fluentd 去哪找这个配置文件(-c 前面的是镜像名)
第二个 -v 让 fluentd 以详细(verbose)模式输出日志
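
下面是一个最小化的自定义配置示意(fluent.conf 的内容和 /path/to/dir 路径均为假设,仅演示挂载方式):

mkdir -p /path/to/dir
cat <<EOF > /path/to/dir/fluent.conf
<source>
  @type forward
  port 24224
  bind 0.0.0.0
</source>
<match docker.**>
  @type file
  path /fluentd/log/docker
</match>
EOF
docker run -ti --rm -v /path/to/dir:/fluentd/etc fluent/fluentd:v1.3-debian-1 -c /fluentd/etc/fluent.conf -v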

切换运行用户 foo

docker run -p 24224:24224 -u foo -v …

一看必会系列:k8s 练习9 ingress ssl https 多证书实战


ingress nginx https ssl多证书
创建自签证书
# openssl req -x509 -nodes -days 365 \
-newkey rsa:2048 -keyout xxx.yyy.key \
-out xxx.yyy.crt \
-subj "/CN=*.xxx.yyy/O=xxx.yyy"
方案1.每个证书对应一个 name   #官方推荐
[[email protected] ssl]# kubectl create secret tls tls.ccie.wang --key ccie.wang.key --cert ccie.wang.crt
[[email protected] ssl]# kubectl create secret tls tls.xxx.yyy --key xxx.yyy.key --cert xxx.yyy.crt
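
原文没有给出 in4ssl.xxx.yyy 对应的 ingress 配置内容,下面是一个假设的 xxx.yyy.yaml 写法示意(backend 借用前文的 frontend-svc,按实际服务替换):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-xxx-yyy-test
spec:
  tls:
  - hosts:
    - in4ssl.xxx.yyy
    secretName: tls.xxx.yyy
  rules:
  - host: in4ssl.xxx.yyy
    http:
      paths:
      - path: /
        backend:
          serviceName: frontend-svc
          servicePort: 80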

查看证书
[[email protected] ssl]# kubectl get secret
NAME                  TYPE                                  DATA   AGE
default-token-tkfmx   kubernetes.io/service-account-token   3      30d
tls.ccie.wang         kubernetes.io/tls                     2      78m
tls.xxx.yyy           kubernetes.io/tls                     2      12s
[[email protected] ssl]#
创建ingress https服务
[[email protected] ssl]# kubectl apply -f xxx.yyy.yaml
ingress.extensions/nginx-xxx-yyy-test created

查看ingress状态
[[email protected] ssl]# kubectl get ingress
NAME                   HOSTS              ADDRESS   PORTS     AGE
ingress-nginx-test     in2.ccie.wang                80        23h
nginx-ccie-wang-test   in4ssl.ccie.wang             80, 443   37m   #自动生成80、443端口
nginx-xxx-yyy-test     in4ssl.xxx.yyy               80, 443   9s
[[email protected] ssl]#
验证
[email protected]:/etc/nginx/conf.d# curl -s https://in4ssl.xxx.yyy -k |head -5
<html ng-app="redis">
<head>
<title>Guestbook</title>
<link rel="stylesheet" href="bootstrap.min.css">
<script src="angular.min.js"></script>
[email protected]:/etc/nginx/conf.d#
方案2.所有证书对应一个name   #测试不可用
#将两个域名证书放到一个secret里
# kubectl create secret generic tow-cert \
--from-file=ccie.wang.key \
--from-file=ccie.wang.crt \
--from-file=xxx.yyy.key \
--from-file=xxx.yyy.crt -n default

查看Secret
[[email protected] ssl]# kubectl describe secret tow-cert
Name:         tow-cert
Namespace:    default
Labels:       <none>
Annotations:  <none>

Type:  Opaque

Data
#包含两个证书
ccie.wang.crt:  3622 bytes
ccie.wang.key:  1732 bytes
xxx.yyy.crt:    1143 bytes
xxx.yyy.key:    1704 bytes
实际验证发现,证书信息是不对的,而且加载的是 default-fake-certificate.pem,
可能需要 configmap 进行挂载,但这样比单独配置证书更麻烦
正常应该是 tow-cert
ssl_certificate         /etc/ingress-controller/ssl/default-fake-certificate.pem;
ssl_certificate_key     /etc/ingress-controller/ssl/default-fake-certificate.pem;

————–报错
[email protected]:/etc/nginx/conf.d# curl https://!$
curl https://in4ssl.xxx.yyy
curl: (60) SSL certificate problem: self signed certificate
More details here: https://172.16.0.168/api/v4/projects?search=xxxx -k

wget 'https://172.16.0.168/api/v4/projects?search=xxxx' --no-check-certificate
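
自签证书本身就是报这个错的原因,curl 加 -k 跳过校验即可;如果不想跳过校验,也可以把自签证书当 CA 传给 curl(证书文件名沿用前文生成的 xxx.yyy.crt,仅为示意):

curl -k https://in4ssl.xxx.yyy
curl --cacert xxx.yyy.crt https://in4ssl.xxx.yyy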

一看必会系列:k8s 练习7 部署ingress-nginx


安装ingress 服务
官方地址
https://github.com/kubernetes/ingress-nginx/blob/master/docs/deploy/index.md

直接运行
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/mandatory.yaml

修改 mandatory.yaml 这个文件才行,修改 with-rbac.yaml 没用,一共修改2处
vim mandatory.yaml

188 apiVersion: apps/v1
189 kind: Deployment
190 metadata:
191   name: nginx-ingress-controller
192   namespace: ingress-nginx
193   labels:
194     app.kubernetes.io/name: ingress-nginx
195     app.kubernetes.io/part-of: ingress-nginx
196 spec:
       #改成2,同时运行两个
197   replicas: 2

210     spec:
       #增加hostNetwork: true,目的是开放host主机上的对应端口,
       #具体端口在配置service时候进行定义
211       hostNetwork: true
212       serviceAccountName: nginx-ingress-serviceaccount
213       containers:
214         - name: nginx-ingress-controller
215           image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.23.0

 

运行
[[email protected] ingress]# kubectl apply -f mandatory.yaml
namespace/ingress-nginx created
configmap/nginx-configuration created
configmap/tcp-services created
configmap/udp-services created
serviceaccount/nginx-ingress-serviceaccount created
clusterrole.rbac.authorization.k8s.io/nginx-ingress-clusterrole created
role.rbac.authorization.k8s.io/nginx-ingress-role created
rolebinding.rbac.authorization.k8s.io/nginx-ingress-role-nisa-binding created
clusterrolebinding.rbac.authorization.k8s.io/nginx-ingress-clusterrole-nisa-binding created
deployment.apps/nginx-ingress-controller created

#查看状态
[[email protected] ingress]# kubectl get pods -n ingress-nginx -o wide
NAME                                        READY   STATUS    RESTARTS   AGE   IP              NODE        NOMINATED NODE   READINESS GATES
nginx-ingress-controller-7966d94d6c-8prth   1/1     Running   0          19m   192.168.10.71   k8s-node2   <none>           <none>
nginx-ingress-controller-7966d94d6c-w5btd   1/1     Running   0          19m   192.168.10.69   k8s-node1   <none>           <none>
[[email protected] ingress]#

 

需要访问的服务
[[email protected] ingress]# kubectl get svc |grep fr
NAME             TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
frontend-svc     NodePort    10.100.151.156   <none>        80:30011/TCP   6d1h
[[email protected] ingress]#

 

vim frontend-svc.yaml

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-nginx-test
spec:
  rules:
  - host: in1.ccie.wang
    http:
      paths:
      - path: /
        backend:
          serviceName: frontend-svc
          #这里是上面服务的端口,用 kubectl get svc 进行查看
          #意思是将请求转发到 frontend-svc 的80端口,和nginx 的upstream 一样
          servicePort: 80

#查看生成的时否正常
[[email protected] ingress]# kubectl get ingress
NAME                 HOSTS           ADDRESS   PORTS   AGE
ingress-nginx-test   in1.ccie.wang             80      5m55s

 

查看node上对应的 80 端口是否已生成
[[email protected] ~]# netstat -ntlp |grep :80
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      10319/nginx: master
tcp6       0      0 :::80                   :::*                    LISTEN      10319/nginx: master
[[email protected] ~]#
[[email protected] ~]# netstat -ntlp |grep 80
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      12085/nginx: master
tcp6       0      0 :::80                   :::*                    LISTEN      12085/nginx: master
[[email protected] ~]#

然后在 master上测试,正常
[[email protected] ingress]# curl -s in2.ccie.wang |head -3
<html ng-app="redis">
  <head>
    <title>Guestbook</title>

 

 

 

 

   
——————-报错   
   
   
[[email protected] ingress]# kubectl create -f frontend-svc.yaml
The Ingress "ingress-myServiceA" is invalid: metadata.name: Invalid value: "ingress-myServiceA":
a DNS-1123 subdomain must consist of lower case alphanumeric characters, '-' or '.',
and must start and end with an alphanumeric character
(e.g. 'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*')

解决
metadata.name 不能有大写。改成
vim frontend-svc.yaml

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
#name不能有大写。改成
  name: ingress-nginx-test
spec:
  rules:
  - host: in1.ccie.wang
    http:
      paths:
      - path: /web
        backend:
          serviceName: frontend-svc
          servicePort: 80
---------报错2
测试但不能访问
[[email protected] ~]# curl in1.ccie.wang/wen
curl: (7) Failed connect to in1.ccie.wang:80; Connection refused
[[email protected] ~]# curl in1.ccie.wang/web
curl: (7) Failed connect to in1.ccie.wang:80; Connection refused
[[email protected] ~]#

进入系统查看
[[email protected] ingress]# kubectl exec -it nginx-ingress-controller-7966d94d6c-8prth -n ingress-nginx /bin/bash
查看配置,正常
cat /etc/nginx/nginx.conf

ping测试,现在解析错了。解析到k8s-master上了,应该解析到 node上面
[[email protected] ingress]# ping in1.ccie.wang
PING in1.ccie.wang (192.168.10.68) 56(84) bytes of data.
64 bytes from k8s-master (192.168.10.68): icmp_seq=1 ttl=64 time=0.028 ms
64 bytes from k8s-master (192.168.10.68): icmp_seq=2 ttl=64 time=0.033 ms
^C
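
修改解析最简单的办法是改 /etc/hosts,把域名指到跑着 ingress controller 的 node 上(IP 仅为示例,以实际 node 为准):

echo "192.168.10.69 in1.ccie.wang in2.ccie.wang" >> /etc/hosts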

修改解析后在 master上测试,正常
[[email protected] ingress]# curl -s in2.ccie.wang |head -3
<html ng-app="redis">
  <head>
    <title>Guestbook</title>

————细节延伸

https://github.com/kubernetes/ingress-nginx/blob/master/README.md
https://kubernetes.github.io/ingress-nginx/user-guide/basic-usage/

ingress-nginx文件位于deploy目录下,各文件的作用:

configmap.yaml:提供configmap可以在线更新nginx的配置
default-backend.yaml:提供一个缺省的后台错误页面 404
namespace.yaml:创建一个独立的命名空间 ingress-nginx
rbac.yaml:创建对应的role rolebinding 用于rbac
tcp-services-configmap.yaml:修改L4负载均衡配置的configmap
udp-services-configmap.yaml:修改L4负载均衡配置的configmap
with-rbac.yaml:有应用rbac的nginx-ingress-controller组件

官方安装方式
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/mandatory.yaml

 

https://kubernetes.github.io/ingress-nginx/deploy/baremetal/

 

Via the host network?
In a setup where there is no external load balancer available but using NodePorts is not an option,
one can configure ingress-nginx Pods to use the network of the host they run on instead of
a dedicated network namespace. The benefit of this approach is that the NGINX Ingress controller
can bind ports 80 and 443 directly to Kubernetes nodes’ network interfaces,
without the extra network translation imposed by NodePort Services.

This can be achieved by enabling the hostNetwork option in the Pods’ spec.

template:
  spec:
    hostNetwork: true
   
   
   
其中:

rules中的host必须为域名,不能为IP,表示Ingress-controller的Pod所在主机域名,也就是Ingress-controller的IP对应的域名。
paths中的path则表示映射的路径。如映射 / 表示若访问myk8s.com,则会将请求转发至Kibana的service,端口为5601。

一看必会系列:k8s 练习5 k8s调度给指定node


 

查看当前node2信息
[[email protected] elk]#  kubectl describe node k8s-node2 |grep -C 5 Lab
Name:               k8s-node2
Roles:              <none>
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/hostname=k8s-node2
Annotations:        flannel.alpha.coreos.com/backend-data: {"VtepMAC":"3e:27:fd:0f:47:76"}
                    flannel.alpha.coreos.com/backend-type: vxlan
                    flannel.alpha.coreos.com/kube-subnet-manager: true
[[email protected] elk]#

1.来打个标签

命令如下 kubectl label nodes node名 随便写=随便写1

[[email protected] elk]# kubectl label nodes k8s-node2 mylabel=100
node/k8s-node2 labeled
[[email protected] elk]#  kubectl describe node k8s-node2 |grep -C 10 Lab
Name:               k8s-node2
Roles:              <none>
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/hostname=k8s-node2
                    mylabel=100    #〈--就是这个东西
Annotations:        flannel.alpha.coreos.com/backend-data: {"VtepMAC":"3e:27:fd:0f:47:76"}
                    flannel.alpha.coreos.com/backend-type: vxlan
                    flannel.alpha.coreos.com/kube-subnet-manager: true
                    flannel.alpha.coreos.com/public-ip: 192.168.10.71
                    kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
[[email protected] elk]#
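
打完标签后可以用下面的命令快速查看或删除标签:

#查看所有 node 的标签
kubectl get nodes --show-labels
#删除标签(key 后面加减号)
kubectl label nodes k8s-node2 mylabel-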

2.然后搞个容器试试
vim busybox-pod5.yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox-testxx4
  labels:
    name: busybox-pod-lb
spec:
  containers:
  - name: busybox-xxx4
    image: reg.ccie.wang/library/busybox:1.30.1
    command:
    - sleep
    - "3600"
  #使用下面命令进行node选择
  nodeSelector:
    mylabel: "100"

创建
kubectl apply -f busybox-pod5.yaml

验证

[[email protected] busybox]# kubectl get pod -o wide
NAME                       READY   STATUS    RESTARTS   AGE    IP             NODE        NOMINATED NODE   READINESS GATES
busybox-testxx4            1/1     Running   0          54s    10.244.2.88   k8s-node2   <none>           <none>
可以看到node2 上面去了
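
除了 nodeSelector,也可以用 nodeAffinity 实现同样的调度(写法示意,效果等价于上面的 mylabel=100):

spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: mylabel
            operator: In
            values:
            - "100"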

查看pod信息
[[email protected] busybox]# kubectl describe pod busybox-testxx4
Name:               busybox-testxx4
Labels:             name=busybox-pod-lb
IP:                 10.244.2.88
Containers:
  busybox-xxx4:
    Image:         reg.ccie.wang/library/busybox:1.30.1
    Command:
      sleep
      3600
      #新标签
Node-Selectors:  mylabel=100

Events:
  Type    Reason     Age    From                Message
  ----    ------     ----   ----                -------
  Normal  Scheduled  4m17s  default-scheduler   Successfully assigned default/busybox-testxx4 to k8s-node2
  Normal  Pulled     4m16s  kubelet, k8s-node2  Container image "reg.ccie.wang/library/busybox:1.30.1" already present on machine
  Normal  Created    4m16s  kubelet, k8s-node2  Created container
  Normal  Started    4m16s  kubelet, k8s-node2  Started container
[[email protected] busybox]#

 

 

 

————报错  没起来
[[email protected] busybox]# kubectl get pod -o wide
NAME                       READY   STATUS        RESTARTS   AGE    IP             NODE        NOMINATED NODE   READINESS GATES
busybox-testxx1            0/1     Pending       0          38m    <none>         k8s-node2   <none>           <none>

[[email protected] busybox]# kubectl describe pod busybox-testxx1
Name:               busybox-testxx1
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               k8s-node2/
Labels:             name=busybox-pod-lb
Annotations:        <none>
Status:             Pending
IP:                
Containers:
  busybox-xxx1:
    Image:        busybox
Node-Selectors:  mylabel=100

Events:
  Type     Reason     Age                   From                Message
  ----     ------     ----                  ----                -------
  Normal   Scheduled  3m32s                 default-scheduler   Successfully assigned default/busybox-testxx5 to k8s-node2
  Normal   Pulled     113s (x5 over 3m31s)  kubelet, k8s-node2  Container image "reg.ccie.wang/library/busybox:1.30.1" already present on machine
  Normal   Created    113s (x5 over 3m30s)  kubelet, k8s-node2  Created container
  Normal   Started    113s (x5 over 3m30s)  kubelet, k8s-node2  Started container
  Warning  BackOff    99s (x10 over 3m28s)  kubelet, k8s-node2  Back-off restarting failed container

对于像ubuntu这样的基础系统镜像,用k8s集群启动管理后,容器内没有常驻前台进程就会自动退出,
解决方法就是让其一直有进程在运行,所以在yaml文件中增加command命令即可

原因是这里配错了
apiVersion: v1
kind: Pod
metadata:
  name: busybox-testxx1
  labels:
    name: busybox-pod-lb
spec:
  containers:
  - name: busybox-xxx1
    image: busybox
#需要增加以下3行命令
    command:
    - sleep
    - "3600"

  nodeSelector:
    mylabel: "100"
   
   
或者只要有进程在运行就行,例如:
     command: [ "/bin/bash", "-c", "--" ]
     args: [ "while true; do sleep 30; done;" ]
