 

Must-learn series: k8s practice 9 - ingress SSL HTTPS with multiple certificates, hands-on


ingress-nginx HTTPS with multiple SSL certificates
Create a self-signed certificate
# openssl req -x509 -nodes -days 365 \
-newkey rsa:2048 -keyout xxx.yyy.key \
-out xxx.yyy.crt \
-subj "/CN=*.xxx.yyy/O=xxx.yyy"
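Not part of the original steps, but you can sanity-check the generated pair before loading it into a secret:
# openssl x509 -in xxx.yyy.crt -noout -subject -dates
# openssl rsa -in xxx.yyy.key -check -noout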
Option 1: one secret name per certificate  #officially recommended
[[email protected] ssl]# kubectl create secret tls tls.ccie.wang --key ccie.wang.key --cert ccie.wang.crt
[[email protected] ssl]# kubectl create secret tls tls.xxx.yyy --key xxx.yyy.key --cert xxx.yyy.crt

Check the secrets
[[email protected] ssl]# kubectl get secret
NAME                  TYPE                                  DATA   AGE
default-token-tkfmx   kubernetes.io/service-account-token   3      30d
tls.ccie.wang         kubernetes.io/tls                     2      78m
tls.xxx.yyy           kubernetes.io/tls                     2      12s
[[email protected] ssl]#
Create the ingress HTTPS service
[[email protected] ssl]# kubectl apply -f xxx.yyy.yaml
ingress.extensions/nginx-xxx-yyy-test created
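The xxx.yyy.yaml applied above is not shown in the original; a minimal sketch of what it could look like, assuming the same frontend-svc backend used elsewhere in this series:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-xxx-yyy-test
spec:
  tls:
    - hosts:
      - in4ssl.xxx.yyy
      secretName: tls.xxx.yyy
  rules:
    - host: in4ssl.xxx.yyy
      http:
        paths:
        - path: /
          backend:
            serviceName: frontend-svc
            servicePort: 80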

Check the ingress status
[[email protected] ssl]# kubectl get ingress
NAME                   HOSTS              ADDRESS   PORTS     AGE
ingress-nginx-test     in2.ccie.wang                80        23h
nginx-ccie-wang-test   in4ssl.ccie.wang             80, 443   37m   #ports 80 and 443 are created automatically
nginx-xxx-yyy-test     in4ssl.xxx.yyy               80, 443   9s
[[email protected] ssl]#
Verify
[email protected]:/etc/nginx/conf.d# curl -s https://in4ssl.xxx.yyy -k |head -5
<html ng-app="redis">
<head>
<title>Guestbook</title>
<link rel="stylesheet" href="bootstrap.min.css">
<script src="angular.min.js"></script>
[email protected]:/etc/nginx/conf.d#
Option 2: all certificates under one secret name (tested: does not work)
#put both domains' certificates into a single secret
# kubectl create secret generic tow-cert \
--from-file=ccie.wang.key \
--from-file=ccie.wang.crt \
--from-file=xxx.yyy.key \
--from-file=xxx.yyy.crt -n default

Inspect the secret
[[email protected] ssl]# kubectl describe secret tow-cert
Name:         tow-cert
Namespace:    default
Labels:       <none>
Annotations:  <none>

Type:? Opaque


Data
#contains both certificates
ccie.wang.crt:  3622 bytes
ccie.wang.key:  1732 bytes
xxx.yyy.crt:    1143 bytes
xxx.yyy.key:    1704 bytes
In practice the served certificate turned out to be wrong: the controller loaded default-fake-certificate.pem instead of the combined secret. It might be possible to mount the combined secret through a ConfigMap/volume, but that is more trouble than configuring each certificate separately.
The config should reference tow-cert, but instead it shows:
ssl_certificate         /etc/ingress-controller/ssl/default-fake-certificate.pem;
ssl_certificate_key     /etc/ingress-controller/ssl/default-fake-certificate.pem;
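If one Ingress should terminate TLS for both domains, the usual approach (and likely the idea behind the multi-tls.yaml listed in the next article) is one tls entry per host, each pointing at its own secret from Option 1; a sketch:
spec:
  tls:
  - hosts:
    - in4ssl.ccie.wang
    secretName: tls.ccie.wang
  - hosts:
    - in4ssl.xxx.yyy
    secretName: tls.xxx.yyy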

————–Errors
[email protected]:/etc/nginx/conf.d# curl https://!$
curl https://in4ssl.xxx.yyy
curl: (60) SSL certificate problem: self signed certificate
More details here: https://curl.haxx.se/docs/sslcerts.html

Skip verification for the self-signed certificate with either of:
curl -k 'https://172.16.0.168/api/v4/projects?search=xxxx'
wget 'https://172.16.0.168/api/v4/projects?search=xxxx' --no-check-certificate

Must-learn series: k8s practice 8 - ingress HTTPS SSL


 

k8s ingress-nginx ssl

Generate a certificate
https://github.com/kubernetes/ingress-nginx/blob/master/docs/examples/PREREQUISITES.md

Generate the certificate and key
$ openssl req -x509 -sha256 -nodes -days 365 \
         -newkey rsa:2048 \
         -keyout tls.key \
         -out tls.crt \
         -subj "/CN=nginxsvc/O=nginxsvc"
Generating a 2048 bit RSA private key
................+++
................+++
writing new private key to 'tls.key'
-----

Command format
kubectl create secret tls ${CERT_NAME} --key ${KEY_FILE} --cert ${CERT_FILE}

Create the secret; the Ingress later references it by this TLS name
$ kubectl create secret tls tls-secret --key tls.key --cert tls.crt
secret "tls-secret" created  #the quoted string is the secret name

Using a real HTTPS certificate
[[email protected] ssl]# ll
total 16
-rw-r--r-- 1 root root 3622 Mar  4 07:17 ccie.wang.crt    - certificate
-rw-r--r-- 1 root root 1732 Jan 28 16:41 ccie.wang.key    - key
-rw-r--r-- 1 root root  308 Apr  2 04:57 ccie.wang.yaml
-rw-r--r-- 1 root root 2992 Mar 31 19:38 multi-tls.yaml
[[email protected] ssl]# kubectl create secret tls tls.ccie.wang --key ccie.wang.key --cert ccie.wang.crt
secret/tls.ccie.wang created  #secret created successfully

Configure the Ingress resource

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-ccie-wang-test
spec:
  tls:
    - hosts:
        #works even with a different host written here, not sure why
      - in4ssl.ccie1.wang
      secretName: tls.ccie.wang
  rules:
      #this host must be correct; it corresponds to nginx's server_name
    - host: in4ssl.ccie.wang
      http:
        paths:
        - path: /
          backend:
             #the service and its port must already exist
            serviceName: frontend-svc
             #port 443 is opened automatically once TLS is enabled; no need to define it
            servicePort: 80

Verify:
Exec into the ingress controller pod and dump its configuration
kubectl exec nginx-ingress-controller-7966d94d6c-8prth \
-n ingress-nginx -- cat /etc/nginx/nginx.conf > nginx.conf

Request the HTTPS page directly; output like the following means it is working
[[email protected] ssl]# curl -s https://in4ssl.ccie.wang |head -5
<html ng-app="redis">
  <head>
    <title>Guestbook</title>
    <link rel="stylesheet" href="bootstrap.min.css">
    <script src="angular.min.js"></script>
[[email protected] ssl]#

Request the plain HTTP page; being redirected means it is working
[[email protected] ssl]# curl in4ssl.ccie.wang
<html>
<head><title>308 Permanent Redirect</title></head>
<body>
<center><h1>308 Permanent Redirect</h1></center>
<hr><center>nginx/1.15.9</center>
</body>
</html>

 

————–Further reading

https://kubernetes.github.io/ingress-nginx/user-guide/tls/

TLS Secrets
Anytime we reference a TLS secret, we mean a PEM-encoded X.509, RSA (2048) secret.

You can generate a self-signed certificate and private key with:

生成证书
$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout ${KEY_FILE} -out ${CERT_FILE} -subj "/CN=${HOST}/O=${HOST}"
Then create the secret in the cluster via:

创建密文
kubectl create secret tls ${CERT_NAME} --key ${KEY_FILE} --cert ${CERT_FILE}
The resulting secret will be of type kubernetes.io/tls.

Default SSL Certificate
NGINX provides the option to configure a server as a catch-all with server_name for requests that do not match any of the configured server names. This configuration works out-of-the-box for HTTP traffic. For HTTPS, a certificate is naturally required.

For this reason the Ingress controller provides the flag --default-ssl-certificate. The secret referred to by this flag contains the default certificate to be used when accessing the catch-all server. If this flag is not provided NGINX will use a self-signed certificate.

For instance, if you have a TLS secret foo-tls in the default namespace, add --default-ssl-certificate=default/foo-tls in the nginx-controller deployment.
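A sketch of where that flag goes in the controller Deployment (the real Deployment in mandatory.yaml carries more args; foo-tls is just the example secret name from the docs):
    spec:
      containers:
      - name: nginx-ingress-controller
        image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.23.0
        args:
        - /nginx-ingress-controller
        - --default-ssl-certificate=default/foo-tls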

SSL Passthrough
The --enable-ssl-passthrough flag enables the SSL Passthrough feature, which is disabled by default. This is required to enable passthrough backends in Ingress objects.

Warning

This feature is implemented by intercepting all traffic on the configured HTTPS port (default: 443) and handing it over to a local TCP proxy. This bypasses NGINX completely and introduces a non-negligible performance penalty.

SSL Passthrough leverages SNI and reads the virtual domain from the TLS negotiation, which requires compatible clients. After a connection has been accepted by the TLS listener, it is handled by the controller itself and piped back and forth between the backend and the client.

If there is no hostname matching the requested host name, the request is handed over to NGINX on the configured passthrough proxy port (default: 442), which proxies the request to the default backend.

Note

Unlike HTTP backends, traffic to Passthrough backends is sent to the clusterIP of the backing Service instead of individual Endpoints.

HTTP Strict Transport Security
HTTP Strict Transport Security (HSTS) is an opt-in security enhancement specified through the use of a special response header. Once a supported browser receives this header that browser will prevent any communications from being sent over HTTP to the specified domain and will instead send all communications over HTTPS.

HSTS is enabled by default.

To disable this behavior use hsts: "false" in the configuration ConfigMap.
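For example, assuming the nginx-configuration ConfigMap created by mandatory.yaml earlier:
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
data:
  hsts: "false"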

Server-side HTTPS enforcement through redirect
By default the controller redirects HTTP clients to the HTTPS port 443 using a 308 Permanent Redirect response if TLS is enabled for that Ingress.

This can be disabled globally using ssl-redirect: "false" in the NGINX config map, or per-Ingress with the nginx.ingress.kubernetes.io/ssl-redirect: "false" annotation in the particular resource.
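Per Ingress it is just an annotation, e.g.:
metadata:
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "false"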

Tip

When using SSL offloading outside of cluster (e.g. AWS ELB) it may be useful to enforce a redirect to HTTPS even when there is no TLS certificate available. This can be achieved by using the nginx.ingress.kubernetes.io/force-ssl-redirect: "true" annotation in the particular resource.

Automated Certificate Management with Kube-Lego
Tip

Kube-Lego has reached end-of-life and is being replaced by cert-manager.

Kube-Lego automatically requests missing or expired certificates from Let’s Encrypt by monitoring ingress resources and their referenced secrets.

To enable this for an ingress resource you have to add an annotation:

kubectl annotate ing ingress-demo kubernetes.io/tls-acme="true"
To setup Kube-Lego you can take a look at this full example. The first version to fully support Kube-Lego is Nginx Ingress controller 0.8.

Default TLS Version and Ciphers
To provide the most secure baseline configuration possible,

nginx-ingress defaults to using TLS 1.2 only and a secure set of TLS ciphers.

Legacy TLS
The default configuration, though secure, does not support some older browsers and operating systems.

For instance, TLS 1.1+ is only enabled by default from Android 5.0 on. At the time of writing, May 2018, approximately 15% of Android devices are not compatible with nginx-ingress’s default configuration.

To change this default behavior, use a ConfigMap.

A sample ConfigMap fragment to allow these older clients to connect could look something like the following:

kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-config
data:
  ssl-ciphers: "ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA"
  ssl-protocols: "TLSv1 TLSv1.1 TLSv1.2"

Must-learn series: k8s practice 7 - deploying ingress-nginx


Install the ingress service
Official docs
https://github.com/kubernetes/ingress-nginx/blob/master/docs/deploy/index.md

Run it directly
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/mandatory.yaml

Only editing mandatory.yaml takes effect; editing with-rbac.yaml does nothing. Two changes in total:
vim mandatory.yaml

188 apiVersion: apps/v1
189 kind: Deployment
190 metadata:
191   name: nginx-ingress-controller
192   namespace: ingress-nginx
193   labels:
194     app.kubernetes.io/name: ingress-nginx
195     app.kubernetes.io/part-of: ingress-nginx
196 spec:
       #change to 2 so that two replicas run at the same time
197   replicas: 2

210     spec:
       #add hostNetwork: true to expose the corresponding ports on the host machine;
       #the actual ports are defined later when configuring the service
211       hostNetwork: true
212       serviceAccountName: nginx-ingress-serviceaccount
213       containers:
214         - name: nginx-ingress-controller
215           image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.23.0

 

Run it
[[email protected] ingress]# kubectl apply -f mandatory.yaml
namespace/ingress-nginx created
configmap/nginx-configuration created
configmap/tcp-services created
configmap/udp-services created
serviceaccount/nginx-ingress-serviceaccount created
clusterrole.rbac.authorization.k8s.io/nginx-ingress-clusterrole created
role.rbac.authorization.k8s.io/nginx-ingress-role created
rolebinding.rbac.authorization.k8s.io/nginx-ingress-role-nisa-binding created
clusterrolebinding.rbac.authorization.k8s.io/nginx-ingress-clusterrole-nisa-binding created
deployment.apps/nginx-ingress-controller created

#check the status
[[email protected] ingress]# kubectl get pods -n ingress-nginx -o wide
NAME                                        READY   STATUS    RESTARTS   AGE   IP              NODE        NOMINATED NODE   READINESS GATES
nginx-ingress-controller-7966d94d6c-8prth   1/1     Running   0          19m   192.168.10.71   k8s-node2   <none>           <none>
nginx-ingress-controller-7966d94d6c-w5btd   1/1     Running   0          19m   192.168.10.69   k8s-node1   <none>           <none>
[[email protected] ingress]#

 

The service we want to expose
[[email protected] ingress]# kubectl get svc |grep fr
NAME             TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
frontend-svc     NodePort    10.100.151.156   <none>        80:30011/TCP   6d1h
[[email protected] ingress]#

 

vim frontend-svc.yaml

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-nginx-test
spec:
  rules:
  - host: in1.ccie.wang
    http:
      paths:
      - path: /
        backend:
          serviceName: frontend-svc
          #this is the port of the service above; check it with kubectl get svc
          #it means requests are forwarded to port 80 of frontend-svc, just like an nginx upstream
          servicePort: 80

#check that the ingress was created correctly
[[email protected] ingress]# kubectl get ingress
NAME                 HOSTS           ADDRESS   PORTS   AGE
ingress-nginx-test   in1.ccie.wang             80      5m55s

 

Check whether port 80 is now listening on the nodes
[[email protected] ~]# netstat -ntlp |grep :80
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      10319/nginx: master
tcp6       0      0 :::80                   :::*                    LISTEN      10319/nginx: master
[[email protected] ~]#
[[email protected] ~]# netstat -ntlp |grep 80
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      12085/nginx: master
tcp6       0      0 :::80                   :::*                    LISTEN      12085/nginx: master
[[email protected] ~]#

Then test from the master; it works
[[email protected] ingress]# curl -s in2.ccie.wang |head -3
<html ng-app="redis">
  <head>
    <title>Guestbook</title>

 

 

 

 

   
——————-Error 1
   
   
[[email protected] ingress]# kubectl create -f frontend-svc.yaml
The Ingress "ingress-myServiceA" is invalid: metadata.name: Invalid value: "ingress-myServiceA":
a DNS-1123 subdomain must consist of lower case alphanumeric characters, ‘-‘ or ‘.’,
and must start and end with an alphanumeric character
(e.g. ‘example.com’, regex used for validation is ‘[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*’)

Fix
metadata.name must not contain uppercase letters. Change it to:
vim frontend-svc.yaml

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
#name must not contain uppercase letters; changed to
  name: ingress-nginx-test
spec:
  rules:
  - host: in1.ccie.wang
    http:
      paths:
      - path: /web
        backend:
          serviceName: frontend-svc
          servicePort: 80
---------Error 2
Testing, but the service cannot be reached
[[email protected] ~]# curl in1.ccie.wang/wen
curl: (7) Failed connect to in1.ccie.wang:80; Connection refused
[[email protected] ~]# curl in1.ccie.wang/web
curl: (7) Failed connect to in1.ccie.wang:80; Connection refused
[[email protected] ~]#

Exec into the pod to check
[[email protected] ingress]# kubectl exec -it nginx-ingress-controller-7966d94d6c-8prth -n ingress-nginx \
/bin/bash
Check the config; it looks fine
cat /etc/nginx/nginx.conf

Ping test: the name resolves incorrectly. It resolves to k8s-master, but it should resolve to a node.
[[email protected] ingress]# ping in1.ccie.wang
PING in1.ccie.wang (192.168.10.68) 56(84) bytes of data.
64 bytes from k8s-master (192.168.10.68): icmp_seq=1 ttl=64 time=0.028 ms
64 bytes from k8s-master (192.168.10.68): icmp_seq=2 ttl=64 time=0.033 ms
^C

After fixing the DNS record, test from the master again; it works
[[email protected] ingress]# curl -s in2.ccie.wang |head -3
<html ng-app="redis">
  <head>
    <title>Guestbook</title>
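Not in the original, but while DNS is still wrong you can also point curl straight at a node and override the resolution (192.168.10.69 is k8s-node1 in the pod listing above):
curl --resolve in1.ccie.wang:80:192.168.10.69 http://in1.ccie.wang/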

————Further details

https://github.com/kubernetes/ingress-nginx/blob/master/README.md
https://kubernetes.github.io/ingress-nginx/user-guide/basic-usage/

The ingress-nginx files live in the deploy directory; what each file does:

configmap.yaml: a ConfigMap that lets you update the nginx configuration online
default-backend.yaml: provides a default backend error page (404)
namespace.yaml: creates a dedicated namespace, ingress-nginx
rbac.yaml: creates the corresponding role and rolebinding for RBAC
tcp-services-configmap.yaml: ConfigMap for modifying the L4 (TCP) load-balancing configuration
udp-services-configmap.yaml: ConfigMap for modifying the L4 (UDP) load-balancing configuration
with-rbac.yaml: the nginx-ingress-controller component with RBAC applied

Official installation method
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/mandatory.yaml

 

https://kubernetes.github.io/ingress-nginx/deploy/baremetal/

 

Via the host network
In a setup where there is no external load balancer available but using NodePorts is not an option,
one can configure ingress-nginx Pods to use the network of the host they run on instead of
a dedicated network namespace. The benefit of this approach is that the NGINX Ingress controller
can bind ports 80 and 443 directly to Kubernetes nodes’ network interfaces,
without the extra network translation imposed by NodePort Services.

This can be achieved by enabling the hostNetwork option in the Pods' spec.

template:
  spec:
    hostNetwork: true
   
   
   
In the above:

The host in rules must be a domain name, not an IP; it is the domain of the host where the Ingress-controller Pod runs, i.e. a name that resolves to the Ingress-controller's IP.
The path in paths is the URL path being mapped. For example, mapping / means a request to myk8s.com is forwarded to the Kibana service on port 5601.
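A sketch of that Kibana example (the service name kibana is assumed, not from this article):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kibana-ingress
spec:
  rules:
  - host: myk8s.com
    http:
      paths:
      - path: /
        backend:
          serviceName: kibana
          servicePort: 5601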

Must-learn series: k8s practice 5 - scheduling pods to a specific node


 

Check the current node2 info
[[email protected] elk]#  kubectl describe node k8s-node2 |grep -C 5 Lab
Name:               k8s-node2
Roles:              <none>
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/hostname=k8s-node2
Annotations:        flannel.alpha.coreos.com/backend-data: {"VtepMAC":"3e:27:fd:0f:47:76"}
                    flannel.alpha.coreos.com/backend-type: vxlan
                    flannel.alpha.coreos.com/kube-subnet-manager: true
[[email protected] elk]#

1. First add a label

The command is: kubectl label nodes <node-name> <any-key>=<any-value>

[[email protected] elk]# kubectl label nodes k8s-node2 mylabel=100
node/k8s-node2 labeled
[[email protected] elk]#  kubectl describe node k8s-node2 |grep -C 10 Lab
Name:               k8s-node2
Roles:              <none>
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/hostname=k8s-node2
                    mylabel=100    #<-- here it is
Annotations:        flannel.alpha.coreos.com/backend-data: {"VtepMAC":"3e:27:fd:0f:47:76"}
                    flannel.alpha.coreos.com/backend-type: vxlan
                    flannel.alpha.coreos.com/kube-subnet-manager: true
                    flannel.alpha.coreos.com/public-ip: 192.168.10.71
                    kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
[[email protected] elk]#
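Not covered here, but to remove the label later, append a minus sign to the key:
kubectl label nodes k8s-node2 mylabel-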

2. Then try it with a container
vim busybox-pod5.yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox-testxx4
  labels:
    name: busybox-pod-lb
spec:
  containers:
  - name: busybox-xxx4
    image: reg.ccie.wang/library/busybox:1.30.1
    command:
    - sleep
    - "3600"
  #use nodeSelector below to pick the node
  nodeSelector:
    mylabel: "100"

Create it
kubectl apply -f busybox-pod5.yaml

Verify

[[email protected] busybox]# kubectl get pod -o wide
NAME                       READY   STATUS    RESTARTS   AGE    IP             NODE        NOMINATED NODE   READINESS GATES
busybox-testxx4            1/1     Running   0          54s    10.244.2.88   k8s-node2   <none>           <none>
You can see it landed on node2

Check the pod details
[[email protected] busybox]# kubectl describe pod busybox-testxx4
Name:               busybox-testxx4
Labels:             name=busybox-pod-lb
IP:                 10.244.2.88
Containers:
  busybox-xxx4:
    Image:         reg.ccie.wang/library/busybox:1.30.1
    Command:
      sleep
      3600
      #the new label
Node-Selectors:  mylabel=100

Events:
  Type    Reason     Age    From                Message
  ----    ------     ----   ----                -------
  Normal  Scheduled  4m17s  default-scheduler   Successfully assigned default/busybox-testxx4 to k8s-node2
  Normal  Pulled     4m16s  kubelet, k8s-node2  Container image "reg.ccie.wang/library/busybox:1.30.1" already present on machine
  Normal  Created    4m16s  kubelet, k8s-node2  Created container
  Normal  Started    4m16s  kubelet, k8s-node2  Started container
[[email protected] busybox]#

 

 

 

————Error: the pod didn't start
[[email protected] busybox]# kubectl get pod -o wide
NAME                       READY   STATUS        RESTARTS   AGE    IP             NODE        NOMINATED NODE   READINESS GATES
busybox-testxx1            0/1     Pending       0          38m    <none>         k8s-node2   <none>           <none>

[[email protected] busybox]# kubectl describe pod busybox-testxx1
Name:               busybox-testxx1
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               k8s-node2/
Labels:             name=busybox-pod-lb
Annotations:        <none>
Status:             Pending
IP:                
Containers:
  busybox-xxx1:
    Image:        busybox
Node-Selectors:  mylabel=100

Events:
  Type     Reason     Age                   From                Message
  ----     ------     ----                  ----                -------
  Normal   Scheduled  3m32s                 default-scheduler   Successfully assigned default/busybox-testxx5 to k8s-node2
  Normal   Pulled     113s (x5 over 3m31s)  kubelet, k8s-node2  Container image "reg.ccie.wang/library/busybox:1.30.1" already present on machine
  Normal   Created    113s (x5 over 3m30s)  kubelet, k8s-node2  Created container
  Normal   Started    113s (x5 over 3m30s)  kubelet, k8s-node2  Started container
  Warning  BackOff    99s (x10 over 3m28s)  kubelet, k8s-node2  Back-off restarting failed container

For OS-level images like ubuntu, once started and managed by the k8s cluster the container shuts down automatically.
The fix is to keep it running, so just add a command section to the yaml file.

The cause is this misconfiguration:
apiVersion: v1
kind: Pod
metadata:
  name: busybox-testxx1
  labels:
    name: busybox-pod-lb
spec:
  containers:
  - name: busybox-xxx1
    image: busybox
#these three command lines need to be added
    command:
    - sleep
    - "3600"

  nodeSelector:
    mylabel: "100"
   
   
Or anything that keeps a process running will do:
     command: [ "/bin/bash", "-c", "--" ]
     args: [ "while true; do sleep 30; done;" ]

Must-learn series: k8s practice 4 - rolling pod upgrades


Rolling pod upgrades

rolling-update can be used for rolling upgrades, rollbacks, pod restarts and similar operations

The full configuration files are at the bottom

[[email protected] redis_rolling]#  kubectl rolling-update redis-master-rc-v2 -f redis-master-rc-v4.yaml
Command "rolling-update" is deprecated, use "rollout" instead

Phase 1
Created redis-master-rc-v4  #starts creating redis-master-rc-v4
Phase 2
Scaling up redis-master-rc-v4 from 0 to 4,  #scale redis-master-rc-v4 up to 4 replicas
scaling down redis-master-rc-v2 from 2 to 0  #scale redis-master-rc-v2 from 2 down to 0
(keep 4 pods available, don't exceed 5 pods)
Phase 3
Scaling redis-master-rc-v4 up to 3  #redis-master-rc-v4 up to 3
Scaling redis-master-rc-v2 down to 1 #v2 down to 1
Scaling redis-master-rc-v4 up to 4  #v4 up to 4
Scaling redis-master-rc-v2 down to 0 #v2 down to 0
Phase 4
Update succeeded.
Deleting old controller: redis-master-rc-v2 #update succeeded, deleting the old rc-v2
Renaming redis-master-rc-v4 to redis-master-rc-v2  #rename v4 to v2
replicationcontroller/redis-master-rc-v2 rolling updated #upgrade complete
Done

Verify
[[email protected] ~]# kubectl get rc,pod
NAME                                       DESIRED   CURRENT   READY   AGE
replicationcontroller/redis-master-rc-v2   2         2         2       11m
#just after the command, phases 1-2: both RCs exist at the same time
replicationcontroller/redis-master-rc-v4   3         3         3       50s

NAME                           READY   STATUS    RESTARTS   AGE
#just after the command, phases 1-2: both RCs exist at the same time
pod/redis-master-rc-v3-69rjr   1/1     Running   0          12m
pod/redis-master-rc-v3-vhnbs   1/1     Running   0          12m
pod/redis-master-rc-v4-cp9h8   1/1     Running   0          50s
pod/redis-master-rc-v4-jfhcx   1/1     Running   0          50s
pod/redis-master-rc-v4-nqqkp   1/1     Running   0          50s

After a while (phases 3-4)
[[email protected] ~]# kubectl get rc,pod
NAME                                       DESIRED   CURRENT   READY   AGE
#only one left
replicationcontroller/redis-master-rc-v2   4         4         4       11m

NAME                           READY   STATUS      RESTARTS   AGE
only one left
pod/redis-master-rc-v4-cp9h8   1/1     Running     0          14m
pod/redis-master-rc-v4-h5f6r   1/1     Running     0          13m
pod/redis-master-rc-v4-jfhcx   1/1     Running     0          14m
pod/redis-master-rc-v4-nqqkp   1/1     Running     0          14m
[[email protected] ~]# kubectl get rc,pod

The result of a normal command-driven rolling-update is that the original pods and RC are replaced,
and the new rc-v4 is renamed to the original rc-v2, keeping the service available.

 

Below is an upgrade driven purely from the command line: kubectl rolling-update <current RC> --image=<image to upgrade to>
[[email protected] ~]# kubectl rolling-update redis-master-rc  --image=kubeguide/redis-master:1.0
Command "rolling-update" is deprecated, use "rollout" instead
Found existing update in progress (redis-master-rc-v2), resuming.
Continuing update with existing controller redis-master-rc-v2.
Scaling up redis-master-rc-v2 from 1 to 1, scaling down redis-master-rc from 1 to 0 (keep 1 pods available, don’t exceed 2 pods)
Scaling redis-master-rc down to 0
Update succeeded. Deleting redis-master-rc
replicationcontroller/redis-master-rc-v2 rolling updated to "redis-master-rc-v2"

Verify
[[email protected] yaml]# kubectl get rc,pod
NAME                                       DESIRED   CURRENT   READY   AGE

replicationcontroller/redis-master-rc-v2   1         1         1       19m 

NAME                           READY   STATUS    RESTARTS   AGE
pod/redis-master-rc-v2-2rc4x   1/1     Running   0          19m  #

[[email protected] yaml]# kubectl describe pod/redis-master-rc-v2-2rc4x
Name:               redis-master-rc-v2-2rc4x
Namespace:          default
Node:               k8s-node1/192.168.10.69
Labels:             deployment=4e423afb21f081b285503ab911a2c748
                    name=redis-master-lb
Containers:
  master:
    Image:          kubeguide/redis-master:1.0

Events:
  Type    Reason     Age   From                Message
  ----    ------     ----  ----                -------
  Normal  Scheduled  28m   default-scheduler   Successfully assigned default/redis-master-rc-v2-2rc4x to k8s-node1
  Normal  Pulling    28m   kubelet, k8s-node1  pulling image "kubeguide/redis-master:1.0"
  Normal  Pulled     28m   kubelet, k8s-node1  Successfully pulled image "kubeguide/redis-master:1.0"
  Normal  Created    28m   kubelet, k8s-node1  Created container
  Normal  Started    28m   kubelet, k8s-node1  Started container
[[email protected] yaml]#
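Since rolling-update is deprecated, a sketch of the modern equivalent uses a Deployment instead of an RC (the Deployment name redis-master is assumed here):
kubectl set image deployment/redis-master master=kubeguide/redis-master:latest
kubectl rollout status deployment/redis-master
kubectl rollout undo deployment/redis-master    #roll back if the new image misbehaves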

———–Configuration style 1------------
Config file v2
redis-master-rc-v2.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: redis-master-rc-v2  #must differ between the old and new RC
  labels:
    names: redis-master-lb-2   #must differ between the old and new RC
spec:
  replicas: 2
  selector:
    name: redis-master-lb-2 #must differ between the old and new RC
  template:
    metadata:
      labels:
        name: redis-master-lb-2  #must differ between the old and new RC
    spec:
     containers:
     - name: master
       image: kubeguide/redis-master:1.0  #must differ between the old and new RC
       ports:
       - containerPort: 6379
      
Config file v4
redis-master-rc-v4.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: redis-master-rc-v4  #must not be the same as v2
  labels:
    names: redis-master-lb-4 #must not be the same as v2
spec:
  replicas: 4
  selector:
    name: redis-master-lb-4 #must not be the same as v2
  template:
    metadata:
      labels:
        name: redis-master-lb-4  #must not be the same as v2
    spec:
     containers:
     - name: master
       image: kubeguide/redis-master:latest  #must not be the same as v2
       ports:
       - containerPort: 6379

———–Configuration style 2------------
Config file v2
redis-master-rc-v2.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: redis-master-rc-v2  #the upgrade target must use a different name
  labels:
    names: redis-master-lb
    ver: v1                 #this must already exist when the RC is created, otherwise the later upgrade errors out
spec:
  replicas: 2
  selector:
    name: redis-master-lb
    ver: v1                #this must already exist when the RC is created, otherwise the later upgrade errors out

  template:
    metadata:
      labels:
        name: redis-master-lb
        ver: v1            #this must already exist when the RC is created, otherwise the later upgrade errors out

    spec:
     containers:
     - name: master
       image: kubeguide/redis-master:1.0  #must differ for the upgrade
       ports:
       - containerPort: 6379
      
Config file v4
redis-master-rc-v4.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: redis-master-rc-v4
  labels:
    names: redis-master-lb-4
    ver: v2                #the original version must already have this key; the value here must differ, otherwise the upgrade errors out
spec:
  replicas: 4
  selector:
    name: redis-master-lb
    ver: v2                #the original version must already have this key; the value here must differ, otherwise the upgrade errors out

  template:
    metadata:
      labels:
        name: redis-master-lb
        ver: v2                #the original version must already have this key; the value here must differ, otherwise the upgrade errors out
    spec:
     containers:
     - name: master
       image: kubeguide/redis-master:latest  #must not be the same as v2
       ports:
       - containerPort: 6379

 

————-If the upgrade went wrong, roll back: kubectl rolling-update <RC name> --image=<image> --rollback
[[email protected] redis_rolling]# kubectl rolling-update redis-master-rc-v2 --image=kubeguide/redis-master:1.0 --rollback
Command "rolling-update" is deprecated, use "rollout" instead
error: Don't specify --filename or --image on rollback
See 'kubectl rolling-update -h' for help and examples.
[[email protected] redis_rolling]#

If the upgrade completed normally you cannot roll back this way.

———–Errors
[[email protected] yaml]# kubectl rolling-update redis-master-rc -f redis-master-rc-v2.yaml
Command "rolling-update" is deprecated, use "rollout" instead
error: redis-master-rc-v2.yaml contains a /v1, Kind=ReplicationController not a ReplicationController
For the problem and fix, see Configuration style 2------------

Error 2
[[email protected] redis_rolling]# kubectl rolling-update redis-master-rc-v2 -f redis-master-rc-v3.yaml
Command "rolling-update" is deprecated, use "rollout" instead
error: redis-master-rc-v3.yaml must specify a matching key with non-equal value in Selector for redis-master-rc-v2
[[email protected] redis_rolling]# !v
Explanation
must specify a matching key with non-equal value in Selector for redis-master-rc-v2
This error means the problem is in the config file.

For the problem and fix, see Configuration style 2------------

k8s Pod template


Pod template

apiVersion: v1                  #required; API version, e.g. v1; it must be listed by kubectl api-versions
kind: Pod                #required; Pod
metadata:                #required; metadata
  name: string                  #required; Pod name
  namespace: string             #required; namespace the Pod belongs to, defaults to "default"
  labels:                 #custom labels
    - name: string                #custom label name
  annotations:                         #custom annotation list
    - name: string
spec:                     #required; detailed definition of the containers in the Pod
  containers:                   #required; list of containers in the Pod
  - name: string                      #required; container name, must conform to RFC 1035
    image: string                     #required; container image name
    imagePullPolicy: [ Always|Never|IfNotPresent ]  #image pull policy: Always means always pull, IfNotPresent means prefer the local image and pull only if it is absent, Never means only ever use the local image
    command: [string]             #container startup command list; if not specified, the command baked into the image is used
    args: [string]                   #argument list for the startup command
    workingDir: string                     #container working directory
    volumeMounts:             #volumes mounted inside the container
    - name: string              #name of a shared volume defined on the pod; must match a name from the volumes[] section
      mountPath: string                 #absolute path where the volume is mounted in the container, should be under 512 characters
      readOnly: boolean                 #whether it is mounted read-only
    ports:                #list of ports to expose
    - name: string              #port name
      containerPort: int                #port the container listens on
      hostPort: int                  #port the container's host listens on, defaults to the same as containerPort
      protocol: string                  #port protocol, TCP or UDP, default TCP
    env:                    #environment variables to set before the container runs
    - name: string                  #environment variable name
      value: string                 #environment variable value
    resources:                        #resource limits and requests
      limits:                   #resource limits
        cpu: string                 #CPU limit, in cores; maps to docker run --cpu-shares
        memory: string                  #memory limit, e.g. Mib/Gib; maps to docker run --memory
      requests:                       #resource requests
        cpu: string                 #CPU request, the initial amount available when the container starts
        memory: string                    #memory request, the initial amount available when the container starts
    livenessProbe:                  #health check for each container in the Pod; when the probe fails repeatedly the container is restarted automatically; the methods are exec, httpGet and tcpSocket, and only one of them should be set per container
      exec:               #health check via exec
        command: [string]               #command or script the exec probe should run
      httpGet:                #health check via HttpGet; path and port are required
        path: string
        port: number
        host: string
        scheme: string
        HttpHeaders:
        - name: string
          value: string
      tcpSocket:      #health check via tcpSocket
         port: number
       initialDelaySeconds: 0       #seconds after the container starts before the first probe
       timeoutSeconds: 0        #seconds to wait for a probe response before timing out, default 1 second
       periodSeconds: 0         #probe interval in seconds, default every 10 seconds
       successThreshold: 0
       failureThreshold: 0
       securityContext:
         privileged: false
    restartPolicy: [Always | Never | OnFailure] #Pod restart policy: Always restarts however the container exits, OnFailure restarts only on a non-zero exit code, Never never restarts the Pod
    nodeSelector: object       #nodeSelector schedules the Pod onto nodes carrying this label, specified as key: value
    imagePullSecrets:     #secret names used when pulling the image, specified as key: secretkey
    - name: string
    hostNetwork: false          #whether to use host networking; default false; true means use the host's network
    volumes:            #shared volumes defined on the pod
    - name: string         #shared volume name (there are many volume types)
      emptyDir: {}          #an emptyDir volume: a temporary directory sharing the Pod's lifetime, with an empty value
      hostPath: string          #a hostPath volume: mounts a directory from the host the Pod runs on
        path: string              #directory on the Pod's host that will be mounted
      secret:           #a secret volume: mounts a secret object defined in the cluster into the container
        secretName: string
        items:
        - key: string
          path: string
      configMap:                  #a configMap volume: mounts a predefined configMap object into the container
        name: string
        items:
        - key: string
          path: string

Must-learn series: k8s practice 3 - scaling pods up and down


Scaling pods up and down

[[email protected] yaml]# kubectl get rc
NAME              DESIRED   CURRENT   READY   AGE
frontend-rc       3         3         3       18h
redis-master-rc   1         1         1       35h
redis-slave-rc    2         2         2       35h
[[email protected] yaml]# kubectl scale rc frontend-rc --replicas=4
replicationcontroller/frontend-rc scaled
[[email protected] yaml]# kubectl get rc
NAME              DESIRED   CURRENT   READY   AGE
frontend-rc       4         4         4       18h
redis-master-rc   1         1         1       35h
redis-slave-rc    2         2         2       35h
[[email protected] yaml]# kubectl get rc,pod
NAME                                    DESIRED   CURRENT   READY   AGE
replicationcontroller/frontend-rc       4         4         4       18h
replicationcontroller/redis-master-rc   1         1         1       35h
replicationcontroller/redis-slave-rc    2         2         2       35h

NAME                        READY   STATUS    RESTARTS   AGE
pod/frontend-rc-2h62f       1/1     Running   0          18h
pod/frontend-rc-5dwk2       1/1     Running   0          18h
pod/frontend-rc-dmxp8       1/1     Running   0          18h
pod/frontend-rc-flg9m       1/1     Running   0          9s
pod/redis-master-rc-jrrgx   1/1     Running   0          35h
pod/redis-slave-rc-f9svq    1/1     Running   0          23h
pod/redis-slave-rc-p6kbq    1/1     Running   0          35h
[[email protected] yaml]#

 

[[email protected] yaml]# kubectl scale rc frontend-rc --replicas=2
replicationcontroller/frontend-rc scaled
[[email protected] yaml]#
[[email protected] yaml]#
[[email protected] yaml]# kubectl get rc,pod
NAME                                    DESIRED   CURRENT   READY   AGE
replicationcontroller/frontend-rc       2         2         2       18h
replicationcontroller/redis-master-rc   1         1         1       35h
replicationcontroller/redis-slave-rc    2         2         2       35h

NAME                        READY   STATUS    RESTARTS   AGE
pod/frontend-rc-5dwk2       1/1     Running   0          18h
pod/frontend-rc-dmxp8       1/1     Running   0          18h
pod/redis-master-rc-jrrgx   1/1     Running   0          35h
pod/redis-slave-rc-f9svq    1/1     Running   0          23h
pod/redis-slave-rc-p6kbq    1/1     Running   0          35h
[[email protected] yaml]#
[[email protected] yaml]#

[[email protected] hpa]# kubectl autoscale rc hpa-apache-rc --min=1 --max=10 --cpu-percent=50
horizontalpodautoscaler.autoscaling/hpa-apache-rc autoscaled
[[email protected] hpa]# kubectl get rc,hpa
NAME                                    DESIRED   CURRENT   READY   AGE
replicationcontroller/frontend-rc       2         2         2       21h
replicationcontroller/hpa-apache-rc     1         1         1       113m
replicationcontroller/redis-master-rc   1         1         1       37h
replicationcontroller/redis-slave-rc    2         2         2       37h

NAME                                                REFERENCE                             TARGETS         MINPODS   MAXPODS   REPLICAS   AGE
horizontalpodautoscaler.autoscaling/hpa-apache-rc   ReplicationController/hpa-apache-rc   <unknown>/50%   1         10        0          5s
[[email protected] hpa]#

Exec into busybox and generate some load
[[email protected] hpa]# kubectl exec -it busybox-pod sh

/ # while true; do wget -q -O-  http://hpa-apache-svc > /dev/null;done

[[email protected] ~]# kubectl get rc,hpa
NAME                                    DESIRED   CURRENT   READY   AGE
replicationcontroller/frontend-rc       2         2         2       21h
replicationcontroller/hpa-apache-rc     3         3         3       128m
replicationcontroller/redis-master-rc   1         1         1       38h
replicationcontroller/redis-slave-rc    2         2         2       37h

NAME                                                REFERENCE                             TARGETS    MINPODS   MAXPODS   REPLICAS   AGE
horizontalpodautoscaler.autoscaling/hpa-apache-rc   ReplicationController/hpa-apache-rc   122%/50%   1         10        3          15m
[[email protected] ~]#

Stabilizes at 3 pods, with the CPU load down to about 44%
[[email protected] ~]# kubectl get rc,hpa
NAME                                    DESIRED   CURRENT   READY   AGE
replicationcontroller/frontend-rc       2         2         2       21h
replicationcontroller/hpa-apache-rc     3         3         3       148m
replicationcontroller/redis-master-rc   1         1         1       38h
replicationcontroller/redis-slave-rc    2         2         2       38h

NAME                                                REFERENCE                             TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
horizontalpodautoscaler.autoscaling/hpa-apache-rc   ReplicationController/hpa-apache-rc   44%/50%   1         10        3          35m
[[email protected] ~]#

After the load test stops and some time passes, the pods scale back down to one
[[email protected] hpa]# kubectl get rc,hpa,svc
NAME                                    DESIRED   CURRENT   READY   AGE
replicationcontroller/hpa-apache-rc     1         1         1       159m

NAME                                                REFERENCE                             TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
horizontalpodautoscaler.autoscaling/hpa-apache-rc   ReplicationController/hpa-apache-rc   0%/50%    1         10        1          45m

NAME                     TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
service/hpa-apache-svc   ClusterIP   10.100.27.38     <none>        80/TCP         157m

Autoscaling via a yaml file

[[email protected] hpa]# kubectl create -f hpa-apache-autoscale.yaml
horizontalpodautoscaler.autoscaling/hpa-apache-autoscale created
[[email protected] hpa]# kubectl get hpa
NAME                   REFERENCE                                        TARGETS         MINPODS   MAXPODS   REPLICAS   AGE
hpa-apache-autoscale   ReplicationController/hpa-apache-autoscale-pod   <unknown>/50%   1         10        0          9s
[[email protected] hpa]#
Right after creation the target shows <unknown>; that is fine, it fills in after a while.

[[email protected] ~]# kubectl get hpa    #CPU metrics are now being read
NAME                   REFERENCE                             TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
hpa-apache-autoscale   ReplicationController/hpa-apache-rc   0%/50%    1         10        1          53s

Keep testing
[[email protected] hpa]# kubectl exec -it busybox-pod sh
#the address here is the service name, http://<service>; of course IP:port also works
/ # while true;do wget -q -O-  http://hpa-apache-svc > /dev/null;done

CPU went up and 3 pods were created to absorb the load
[[email protected] ~]# kubectl get hpa
NAME                   REFERENCE                             TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
hpa-apache-autoscale   ReplicationController/hpa-apache-rc   44%/50%   1         10        3          4m48s

 

Delete the autoscaler
[[email protected] hpa]# kubectl get hpa
NAME            REFERENCE                             TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
hpa-apache-rc   ReplicationController/hpa-apache-rc   0%/50%    1         10        1          51m

[[email protected] hpa]# kubectl delete hpa hpa-apache-rc
horizontalpodautoscaler.autoscaling "hpa-apache-rc" deleted

[[email protected] hpa]# kubectl get hpa
No resources found.
[[email protected] hpa]#

The config files are as follows
[[email protected] hpa]# tree .
.
├── busybox-pod.yaml
├── hpa-apache-autoscale.yaml
├── hpa-apache-rc.yaml
└── hpa-apache-svc.yaml

├── busybox-pod.yaml
apiVersion: v1                                                                             
kind: Pod
metadata:                                                                                  
  name: busybox-pod                                                                  
spec:                                                                                      
  containers:
    - name: busybox
      image: busybox
      command: [ "sleep" , "3600"]
     
├── hpa-apache-autoscale.yaml

apiVersion: autoscaling/v1                                                                     
kind: HorizontalPodAutoscaler
metadata:                                                                                  
  name: hpa-apache-autoscale                                                                      
spec:                                                                                      
  scaleTargetRef:
    apiVersion: v1
    kind: ReplicationController
    name: hpa-apache-rc                                                 
  minReplicas: 1
  maxReplicas: 10
  targetCPUUtilizationPercentage: 50
 
├── hpa-apache-rc.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: hpa-apache-rc
spec:
  replicas: 1
  template:
    metadata:
      name: hpa-apache-lb
      labels:
        name: hpa-apache-lb
    spec:
     containers:
     - name: hpa-apache-ctn
       image:  reg.ccie.wang/test/ubuntu:apache2.4.29
       resources:
         requests:
           cpu: 200m
       ports:
       - containerPort: 80
      
└── hpa-apache-svc.yaml
apiVersion: v1                                                                             
kind: Service
metadata:                                                                                  
  name: hpa-apache-svc                                                                       
spec:                                                                                      
  ports:
    - port: 80

An introduction to several Kubernetes (k8s) storage types: EmptyDir, HostPath, ConfigMap and Secret


By default, everything a running container writes to its filesystem goes into the writable layer of its layered filesystem, and once the container stops all of those writes are discarded. Hence the need for persistence support.

Kubernetes provides storage support through Volumes. Below is a brief overview of some common storage concepts.

EmptyDir

As the name suggests, an EmptyDir is an empty directory whose lifecycle exactly matches that of its Pod. You might wonder what it is good for: its purpose is to let different containers within the same Pod share files produced while they work.

By default an EmptyDir is backed by the host's disk. You can also set the emptyDir.medium field to Memory for better speed, but with that setting whatever the volume holds counts against the container's memory quota.

apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
  - image: gcr.io/google_containers/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /cache
      name: cache-volume
  volumes:
  - name: cache-volume
    emptyDir: {}
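A sketch of the memory-backed variant mentioned above only changes the volume definition:
  volumes:
  - name: cache-volume
    emptyDir:
      medium: Memory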
HostPath

This mounts a specified path from the host into the container. Of course, if the Pod gets recreated on a different host, its contents cannot be guaranteed.

This kind of volume is usually paired with a DaemonSet to work with host files. For example, FluentD in the EFK logging stack uses it to mount the host's container log directory so it can collect all logs on that host.

apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
  - image: gcr.io/google_containers/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /test-pd
      name: test-volume
  volumes:
  - name: test-volume
    hostPath:
      # directory location on host
      path: /data
NFS/GlusterFS/CephFS/AWS/GCE and so on

For a container cluster, supporting network storage is naturally a top priority, and Kubernetes supports a large number of cloud providers and network storage solutions.

The way each is supported differs; GlusterFS, for example, requires creating an Endpoint, while Ceph/NFS are less of a hassle.

For the specific configuration of each, please refer to the documentation.

ConfigMap and Secret

When using an image we often need configuration files, startup scripts and the like to influence how the container runs. If there is only a small amount of configuration, environment variables will do. For more complex configuration, such as Apache, that approach becomes hard to manage, and exposing sensitive information directly in YAML is also inappropriate.

Besides being consumed as files, ConfigMap and Secret have other usage modes; here we only cover the file-based approach.

For example, the Pod below mounts a configuration directory stored in a ConfigMap into a volume.

apiVersion: v1
kind: Pod
metadata:
  name: dapi-test-pod
spec:
  containers:
    - name: test-container
      image: gcr.io/google_containers/busybox
      command: [ "/bin/sh", "-c", "ls /etc/config/" ]
      volumeMounts:
      - name: config-volume
        mountPath: /etc/config
  volumes:
    - name: config-volume
      configMap:
        # Provide the name of the ConfigMap containing the files you want
        # to add to the container
        name: special-config
  restartPolicy: Never
Note that the ConfigMap is mapped as a directory: each key of the ConfigMap becomes a file name and each value becomes the file content. For example, the following command creates a ConfigMap from a directory:

kubectl create configmap \
    game-config \
    --from-file=docs/user-guide/configmap/kubectl
Create a Secret:

kubectl create secret generic \
    db-user-pass --from-file=./username.txt \
    --from-file=./password.txt
Mount the Secret with a Volume:

apiVersion: v1
kind: Pod
metadata:
  name: mypod
  namespace: myns
spec:
  containers:
    - name: mypod
      image: redis
      volumeMounts:
        - name: foo
          mountPath: /etc/foo
          readOnly: true
  volumes:
    - name: foo
      secret:
        secretName: mysecret
As you can see, creating and using a Secret is very similar to a ConfigMap. Under RBAC, Secrets and ConfigMaps can be granted permissions separately, limiting what operators can see and control.

PV & PVC

PersistentVolume and PersistentVolumeClaim provide an abstraction over storage and a boundary between infrastructure and applications: administrators create a set of PVs to supply storage and hand applications PVCs; an application only needs to mount a PVC to get access.

Since 1.5 there is also dynamic provisioning of PVs, so a PVC can be created directly without the separate PV step.
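A minimal PVC sketch, assuming a StorageClass named standard exists for dynamic provisioning:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: standard
  resources:
    requests:
      storage: 1Gi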

Original source: fleeto -> http://blog.fleeto.us/content/kubernetes-zhong-de-ji-chong-cun-chu

Kubernetes Volume types and yaml examples: emptyDir


Kubernetes Volume types and yaml examples: emptyDir (local data volume)

Notes
An EmptyDir volume is created when a pod is scheduled onto a host, and all containers in that pod can read and write the same files in the EmptyDir. Once the pod leaves that host, the data in the EmptyDir is deleted permanently. So EmptyDir volumes are currently used mainly as scratch space, e.g. the temporary directories a web server needs for logs or tmp files.
Hands-on: a standard single-container pod using a shared volume
#create the yaml file
cat >> emptyDir.yaml << EOF
apiVersion: v1
kind: Pod
metadata:
  labels:
    name: test-emptypath
    role: master
  name: test-emptypath
  namespace: test
spec:
  containers:
  - name: test-emptypath
    image: nginx:1.7.9
    volumeMounts:
    - name: log-storage
      mountPath: /tmp/
  volumes:
  - name: log-storage
    emptyDir: {}
EOF
#apply emptyDir.yaml
kubectl create -f ./emptyDir.yaml
#check the Pod status
kubectl get po -n test
NAME                         READY     STATUS    RESTARTS   AGE
test-emptypath               1/1       Running   0          3h
##Note: when the Pod is assigned to a node, the emptyDir volume is created first, and it exists
##for as long as the Pod runs on that node. As the volume's name says, it starts out empty.
Hands-on: a standard multi-container pod using a shared volume
#create the yaml file
cat  >> emptyDir2.yaml << EOF
apiVersion: v1
kind: Pod
metadata:
  name: datagrand
  namespace: test
spec:
  containers:
  - name: test1
    image: nginx:1.7.9
    volumeMounts:
    - name: log-storage
      mountPath: /usr/share/nginx/html
  - name: test2
    image: centos
    volumeMounts:
    - name: log-storage
      mountPath: /html
    command: ["/bin/sh","-c"]
    args:
    - while true; do
        date >> /html/index.html;
        sleep 1;
      done
  volumes:
  - name: log-storage
    emptyDir: {}
EOF
##Note: in this example we define a volume named log-storage. Its type is emptyDir, which means
##the volume is created when the POD is assigned to a node and exists for as long as the Pod runs
##on that node. As the name says, it starts out empty. The first container runs an nginx server and
##mounts the shared volume at /usr/share/nginx/html. The second container uses the centos image and
##mounts the shared volume at /html. Every second it appends the current date and time to
##index.html in the shared volume. When a user sends an HTTP request to the POD, the nginx server
##reads that file and returns it in the response.
#apply the yaml
kubectl create -f ./emptyDir2.yaml
#check the Pod status
kubectl get po -n test
NAME                         READY     STATUS    RESTARTS   AGE
datagrand                    2/2       Running   0          22m
#exec into container test1
kubectl exec -it datagrand -c test1 /bin/bash -n test
[email protected]:/# cd /usr/share/nginx/html
[email protected]:/usr/share/nginx/html# ls
index.html
##append some content
[email protected]:/usr/share/nginx/html# echo "this is a test" >> index.html
#exec into container test2
kubectl exec -it datagrand -c test2 /bin/bash -n test
[[email protected] /]# cd html
[[email protected] html]# ls
index.html
[[email protected] html]# cat index.html
this is a test
##the emptyDir volume is shared by the two containers (test1 and test2)
Reference
https://www.kubernetes.org.cn/2767.html
