server can't find NXDOMAIN


[[email protected] ~]# nslookup web.fat.cc
Server:        10.1.2.4
Address:    10.1.2.4#53

** server can't find web.fat.cc: NXDOMAIN

Possible causes:

1. The domain name is misspelled
2. The named service was not restarted
3. Wrong file permissions, as shown below:

[[email protected] ~]# ll /var/named/

-rw-r-----. 1 root  root   537 Sep  3 06:54 fat.cc.zone

After editing, the group owner has reverted to root. Change it back, then restart the named service:
chown root:named /var/named/fat.cc.zone

systemctl restart named
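If the record still fails to resolve, a quick way to narrow it down (a sketch; named-checkzone ships with bind) is to validate the zone file and then query the server explicitly:

named-checkzone fat.cc /var/named/fat.cc.zone
nslookup web.fat.cc 10.1.2.4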


At-a-glance series: k8s exercise 10 - ELK and fluentd log collection in practice


Download the corresponding yaml files from the official repo:
https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/fluentd-elasticsearch

-rw-r--r--  1 root root      382 Apr  3 23:28 es-service.yaml
-rw-r--r--  1 root root     2900 Apr  4 04:15 es-statefulset.yaml
-rw-r--r--  1 root root    16124 Apr  3 23:28 fluentd-es-configmap.yaml
-rw-r--r--  1 root root     2717 Apr  4 06:19 fluentd-es-ds.yaml
-rw-r--r--  1 root root     1166 Apr  4 05:46 kibana-deployment.yaml
-rw-r--r--  1 root root      272 Apr  4 05:27 kibana-ingress.yaml  # this one comes later
-rw-r--r--  1 root root      354 Apr  3 23:28 kibana-service.yaml

Important: the images must be pulled exactly as referenced in the yaml files (or mirrored under the same names), otherwise you will hit all kinds of errors.
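If the nodes cannot reach gcr.io, one option (a sketch, reusing the private registry reg.ccie.wang that appears later in this post) is to mirror the images and rewrite the references before applying the yaml:

# check which images the yaml files expect
grep "image:" *.yaml
# point es-statefulset.yaml at the mirrored image (the same change as in the error-fix section below)
sed -i 's#gcr.io/fluentd-elasticsearch/elasticsearch:v6.6.1#reg.ccie.wang/library/elk/elasticsearch:6.7.0#' es-statefulset.yaml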

Run this one first:
kubectl create -f fluentd-es-configmap.yaml
configmap/fluentd-es-config-v0.2.0 created

Then run:
[[email protected] elk]# kubectl create -f fluentd-es-ds.yaml
serviceaccount/fluentd-es created
clusterrole.rbac.authorization.k8s.io/fluentd-es created
clusterrolebinding.rbac.authorization.k8s.io/fluentd-es created
daemonset.apps/fluentd-es-v2.5.0 created

[[email protected] elk]# kubectl get pod -n kube-system |grep flu
fluentd-es-v2.5.0-hjzw8                 1/1     Running   0          19s
fluentd-es-v2.5.0-zmlm2                 1/1     Running   0          19s
[[email protected] elk]#

Then start elasticsearch:
[[email protected] elk]# kubectl create -f es-statefulset.yaml
serviceaccount/elasticsearch-logging created
clusterrole.rbac.authorization.k8s.io/elasticsearch-logging created
clusterrolebinding.rbac.authorization.k8s.io/elasticsearch-logging created
statefulset.apps/elasticsearch-logging created
[[email protected] elk]# kubectl create -f es-service.yaml
service/elasticsearch-logging created
[[email protected] elk]#

[[email protected] elk]# kubectl get pod -n kube-system |grep elas
elasticsearch-logging-0                 1/1     Running   0          11s
elasticsearch-logging-1                 1/1     Running   0          8s
[[email protected] elk]#

Then start kibana (kibana/kibana):
kubectl create -f kibana-deployment.yaml
kubectl get pod -n kube-system
kubectl create -f kibana-service.yaml

Verify:
[[email protected] elk]# kubectl get pod,svc -n kube-system |grep kiba
pod/kibana-logging-65f5b98cf6-2p8cj         1/1     Running   0          46s

service/kibana-logging          ClusterIP   10.100.152.68   <none>        5601/TCP        21s
[[email protected] elk]#

Check the cluster info:
[[email protected] elk]# kubectl cluster-info
Elasticsearch is running at https://192.168.10.68:6443/api/v1/namespaces/kube-system/services/elasticsearch-logging/proxy
Kibana is running at https://192.168.10.68:6443/api/v1/namespaces/kube-system/services/kibana-logging/proxy

Because only container ports are exposed, the services cannot be reached from external machines. The following methods can be used to access them.

1. Start a kubectl proxy (on the master)
# this runs in the foreground and stops when you exit; --address is the master's IP, though in practice any node works
kubectl proxy --address='192.168.10.68' --port=8085 --accept-hosts='^*$'

To run it in the background, use: nohup kubectl proxy --address='192.168.10.68' --port=8085 --accept-hosts='^*$' &
Check on the master that the port is listening:
netstat -ntlp |grep 80
tcp        0      0 192.168.10.68:2380      0.0.0.0:*               LISTEN      8897/etcd          
tcp        0      0 192.168.10.68:8085      0.0.0.0:*               LISTEN      16718/kubectl  

Verify directly in a browser:
http://192.168.10.68:8085/api/v1/namespaces/kube-system/services/kibana-logging/proxy/app/kibana#/home/tutorial_directory/sampleData?_g=()
If the page loads, everything is working.

Steps inside Kibana to get charts:
1. Click Management in the left-hand menu.
2. Create an index pattern (Create index pattern).
3. Enter * to see the actual index names.
4. For example logstash-2019.03.25; change it to logstash-* and continue to the end.
4.1 Be sure to click the star icon to set logstash-* as the default index.
5. Open Discover and the logs will be visible (the curl check below can confirm the index names first).
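To confirm that fluentd has actually created logstash-* indices before building the pattern, you can query elasticsearch through the same kubectl proxy (a sketch; _cat/indices is a standard elasticsearch API):

curl http://192.168.10.68:8085/api/v1/namespaces/kube-system/services/elasticsearch-logging/proxy/_cat/indices?v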

Verification: the output below is normal. Note that this is plain http, not https.
curl http://192.168.10.68:8085/api/v1/namespaces/kube-system/services/elasticsearch-logging/proxy/
{
  "name" : "bc30CKf",
  "cluster_name" : "docker-cluster",
  "cluster_uuid" : "C3oV5BnMTByxYltuuYjTjg",
  "version" : {
    "number" : "6.7.0",
    "build_flavor" : "default",
    "build_type" : "docker",
    "build_hash" : "8453f77",
    "build_date" : "2019-03-21T15:32:29.844721Z",
    "build_snapshot" : false,
    "lucene_version" : "7.7.0",
    "minimum_wire_compatibility_version" : "5.6.0",
    "minimum_index_compatibility_version" : "5.0.0"
  },
  "tagline" : "You Know, for Search"
}

Method 2:

[[email protected] elk]# kubectl get ingress -n kube-system -o wide
NAME             HOSTS           ADDRESS   PORTS   AGE
kibana-logging   elk.ccie.wang             80      6m42s

This does work, but it returns a 404; where the problem lies still needs investigating.

Create the ingress
The config file kibana-ingress.yaml is as follows:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kibana-logging-ingress
  namespace: kube-system
spec:
  rules:
  - host: elk.ccie.wang
    http:
      paths:
      - path: /
        backend:
          serviceName: kibana-logging
          servicePort: 5601

kubectl create -f kibana-ingress.yaml

 

Verify the resources created from the file:
[[email protected] elk]# kubectl get -f fluentd-es-ds.yaml
NAME                        SECRETS   AGE
serviceaccount/fluentd-es   1         85s

NAME                                               AGE
clusterrole.rbac.authorization.k8s.io/fluentd-es   85s

NAME                                                      AGE
clusterrolebinding.rbac.authorization.k8s.io/fluentd-es   85s

NAME                               DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/fluentd-es-v2.5.0   2         2         2       2            2           <none>          85s
[[email protected] elk]#

---------- Error

[[email protected] elk]# kubectl get pod -n kube-system |grep elas
elasticsearch-logging-0                 0/1     ErrImagePull   0          71s
[[email protected] elk]#
The image pull failed. Change the image in es-statefulset.yaml as follows:

      containers:
      # change the following line
      #- image: gcr.io/fluentd-elasticsearch/elasticsearch:v6.6.1
      - image: reg.ccie.wang/library/elk/elasticsearch:6.7.0

 

 

 

---------------- Knowledge extension
1. fluentd
How to use this image:

docker run -d -p 24224:24224 -p 24224:24224/udp -v /data:/fluentd/log fluent/fluentd:v1.3-debian-1

The default configuration is:
Listens on port 24224
Stores logs tagged docker.** to /fluentd/log/docker.*.log (and symlink docker.log)
Stores all other logs to /fluentd/log/data.*.log (and symlink data.log)

You can of course supply your own configuration:

docker run -ti --rm -v /path/to/dir:/fluentd/etc fluentd -c /fluentd/etc/<config file> -v

The first -v mounts /path/to/dir to /fluentd/etc inside the container.

What comes before -c is the image name; -c tells fluentd where to find the configuration file.
The second -v passes verbose logging on to fluentd.
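A minimal fluent.conf sketch that mirrors those defaults and could be mounted under /fluentd/etc (a hypothetical file, not shipped with the image):

<source>
  @type forward            # listen for the forward protocol on 24224
  port 24224
  bind 0.0.0.0
</source>

<match docker.**>
  @type file               # write docker.** tagged events to files
  path /fluentd/log/docker
</match>

<match **>
  @type file               # everything else
  path /fluentd/log/data
</match>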

Switch the runtime user to foo:

docker run -p 24224:24224 -u foo -v ...

At-a-glance series: k8s exercise 9 - ingress ssl https with multiple certificates in practice


ingress nginx https ssl with multiple certificates
Create a self-signed certificate:
# openssl req -x509 -nodes -days 365 \
-newkey rsa:2048 -keyout xxx.yyy.key \
-out xxx.yyy.crt \
-subj "/CN=*.xxx.yyy/O=xxx.yyy"
Option 1: one secret name per certificate   # officially recommended
[[email protected] ssl]# kubectl create secret tls tls.ccie.wang --key ccie.wang.key --cert ccie.wang.crt
[[email protected] ssl]# kubectl create secret tls tls.xxx.yyy --key xxx.yyy.key --cert xxx.yyy.crt

List the secrets:
[[email protected] ssl]# kubectl get secret
NAME                  TYPE                                  DATA   AGE
default-token-tkfmx   kubernetes.io/service-account-token   3      30d
tls.ccie.wang         kubernetes.io/tls                     2      78m
tls.xxx.yyy           kubernetes.io/tls                     2      12s
[[email protected] ssl]#
Create the ingress https service:
[[email protected] ssl]# kubectl apply -f xxx.yyy.yaml
ingress.extensions/nginx-xxx-yyy-test created
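The xxx.yyy.yaml file itself is not shown; based on the single-certificate example later in this series it would look roughly like this (the backend serviceName frontend-svc is assumed):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-xxx-yyy-test
spec:
  tls:
  - hosts:
    - in4ssl.xxx.yyy
    secretName: tls.xxx.yyy
  rules:
  - host: in4ssl.xxx.yyy
    http:
      paths:
      - path: /
        backend:
          serviceName: frontend-svc
          servicePort: 80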

Check the ingress status:
[[email protected] ssl]# kubectl get ingress
NAME                   HOSTS              ADDRESS   PORTS     AGE
ingress-nginx-test     in2.ccie.wang                80        23h
nginx-ccie-wang-test   in4ssl.ccie.wang             80, 443   37m  # ports 80 and 443 are generated automatically
nginx-xxx-yyy-test     in4ssl.xxx.yyy               80, 443   9s
[[email protected] ssl]#
Verify:
[email protected]:/etc/nginx/conf.d# curl -s https://in4ssl.xxx.yyy -k |head -5
<html ng-app="redis">
<head>
<title>Guestbook</title>
<link rel="stylesheet" href="bootstrap.min.css">
<script src="angular.min.js"></script>
[email protected]:/etc/nginx/conf.d#
Option 2: all certificates under one secret name (tested - not usable)
# put both domains' certificates into one secret
# kubectl create secret generic tow-cert \
--from-file=ccie.wang.key  \
--from-file=ccie.wang.crt  \
--from-file=xxx.yyy.key  \
--from-file=xxx.yyy.crt -n default

Describe the secret:
[[email protected] ssl]# kubectl describe secret tow-cert
Name:         tow-cert
Namespace:    default
Labels:       <none>
Annotations:  <none>

Type:  Opaque

Data
# contains both certificates
ccie.wang.crt:  3622 bytes
ccie.wang.key:  1732 bytes
xxx.yyy.crt:    1143 bytes
xxx.yyy.key:    1704 bytes
In practice the certificate served is wrong, and what gets loaded is default-fake-certificate.pem.
It might be possible to mount the certificates via a configmap, but that is more trouble than configuring each certificate separately.
nginx.conf should reference tow-cert, but instead shows:
ssl_certificate         /etc/ingress-controller/ssl/default-fake-certificate.pem;
ssl_certificate_key     /etc/ingress-controller/ssl/default-fake-certificate.pem;

-------------- Error
[email protected]:/etc/nginx/conf.d# curl https://!$
curl https://in4ssl.xxx.yyy
curl: (60) SSL certificate problem: self signed certificate
More details here: https://curl.haxx.se/docs/sslcerts.html

curl failed to verify the legitimacy of the server and therefore could not
establish a secure connection to it. To learn more about this situation and
how to fix it, please visit the web page mentioned above.

curl reports a certificate error because this is a self-signed certificate; it can be worked around as follows:

curl needs -k added, wget needs --no-check-certificate added:
curl https://172.16.0.168/api/v4/projects?search=xxxx -k

wget 'https://172.16.0.168/api/v4/projects?search=xxxx' --no-check-certificate

At-a-glance series: k8s exercise 8 - ingress https ssl


 

k8s ingress-nginx ssl

Generate the certificate
https://github.com/kubernetes/ingress-nginx/blob/master/docs/examples/PREREQUISITES.md

Generate the certificate and key:
$ openssl req -x509 -sha256 -nodes -days 365 \
         -newkey rsa:2048 \
         -keyout tls.key \
         -out tls.crt \
         -subj "/CN=nginxsvc/O=nginxsvc"
Generating a 2048 bit RSA private key
................+++
................+++
writing new private key to 'tls.key'
-----

Command format:
kubectl create secret tls ${CERT_NAME} --key ${KEY_FILE} --cert ${CERT_FILE}

Create the secret; it is later referenced in the controller by this tls name:
$ kubectl create secret tls tls-secret --key tls.key --cert tls.crt
secret "tls-secret" created  # the quoted string is the secret name

Using a real https certificate:
[[email protected] ssl]# ll
total 16
-rw-r--r-- 1 root root 3622 Mar  4 07:17 ccie.wang.crt    - the certificate
-rw-r--r-- 1 root root 1732 Jan 28 16:41 ccie.wang.key    - the key
-rw-r--r-- 1 root root  308 Apr  2 04:57 ccie.wang.yaml
-rw-r--r-- 1 root root 2992 Mar 31 19:38 multi-tls.yaml
[[email protected] ssl]# kubectl create secret tls tls.ccie.wang --key ccie.wang.key --cert ccie.wang.crt
secret/tls.ccie.wang created  # secret created successfully

Configure the ingress service:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-ccie-wang-test
spec:
  tls:
    - hosts:
        # other hostnames also seem to work here, not sure why
      - in4ssl.ccie1.wang
      secretName: tls.ccie.wang
  rules:
      # this host must be correct; it corresponds to nginx's server_name
    - host: in4ssl.ccie.wang
      http:
        paths:
        - path: /
          backend:
             # the service must already exist, and so must its port
            serviceName: frontend-svc
             # port 443 is opened automatically after start, no need to define it
            servicePort: 80

Verify:
Go into the ingress pod and dump its configuration:
kubectl exec nginx-ingress-controller-7966d94d6c-8prth \
-n ingress-nginx -- cat /etc/nginx/nginx.conf > nginx.conf

Request the https page directly; output like the following means it works:
[[email protected] ssl]# curl -s https://in4ssl.ccie.wang |head -5
<html ng-app="redis">
  <head>
    <title>Guestbook</title>
    <link rel="stylesheet" href="bootstrap.min.css">
    <script src="angular.min.js"></script>
[[email protected] ssl]#

Request the plain http page; being redirected means it is working:
[[email protected] ssl]# curl in4ssl.ccie.wang
<html>
<head><title>308 Permanent Redirect</title></head>
<body>
<center><h1>308 Permanent Redirect</h1></center>
<hr><center>nginx/1.15.9</center>
</body>
</html>
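To double-check which certificate the ingress is actually serving (rather than the controller's fake default one), an optional check with openssl (a sketch):

openssl s_client -connect in4ssl.ccie.wang:443 -servername in4ssl.ccie.wang </dev/null 2>/dev/null | openssl x509 -noout -subject -dates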

 

-------------- Knowledge extension

https://kubernetes.github.io/ingress-nginx/user-guide/tls/

TLS Secrets
Anytime we reference a TLS secret, we mean a PEM-encoded X.509, RSA (2048) secret.

You can generate a self-signed certificate and private key with:

Generate the certificate:
$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout ${KEY_FILE} -out ${CERT_FILE} -subj "/CN=${HOST}/O=${HOST}"
Then create the secret in the cluster via:

Create the secret:
kubectl create secret tls ${CERT_NAME} --key ${KEY_FILE} --cert ${CERT_FILE}
The resulting secret will be of type kubernetes.io/tls.

Default SSL Certificate
NGINX provides the option to configure a server as a catch-all with server_name for requests that do not match any of the configured server names. This configuration works out-of-the-box for HTTP traffic. For HTTPS, a certificate is naturally required.

For this reason the Ingress controller provides the flag --default-ssl-certificate. The secret referred to by this flag contains the default certificate to be used when accessing the catch-all server. If this flag is not provided NGINX will use a self-signed certificate.

For instance, if you have a TLS secret foo-tls in the default namespace, add --default-ssl-certificate=default/foo-tls in the nginx-controller deployment.
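Roughly where that flag goes in the controller Deployment from mandatory.yaml (a sketch; the existing args are elided, and foo-tls is the example secret name from the docs):

    containers:
      - name: nginx-ingress-controller
        image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.23.0
        args:
          - /nginx-ingress-controller
          - --default-ssl-certificate=default/foo-tls
          # ...plus the other args already defined in mandatory.yaml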

SSL Passthrough
The --enable-ssl-passthrough flag enables the SSL Passthrough feature, which is disabled by default. This is required to enable passthrough backends in Ingress objects.

Warning

This feature is implemented by intercepting all traffic on the configured HTTPS port (default: 443) and handing it over to a local TCP proxy. This bypasses NGINX completely and introduces a non-negligible performance penalty.

SSL Passthrough leverages SNI and reads the virtual domain from the TLS negotiation, which requires compatible clients. After a connection has been accepted by the TLS listener, it is handled by the controller itself and piped back and forth between the backend and the client.

If there is no hostname matching the requested host name, the request is handed over to NGINX on the configured passthrough proxy port (default: 442), which proxies the request to the default backend.

Note

Unlike HTTP backends, traffic to Passthrough backends is sent to the clusterIP of the backing Service instead of individual Endpoints.

HTTP Strict Transport Security
HTTP Strict Transport Security (HSTS) is an opt-in security enhancement specified through the use of a special response header. Once a supported browser receives this header that browser will prevent any communications from being sent over HTTP to the specified domain and will instead send all communications over HTTPS.

HSTS is enabled by default.

To disable this behavior use hsts: "false" in the configuration ConfigMap.

Server-side HTTPS enforcement through redirect
By default the controller redirects HTTP clients to the HTTPS port 443 using a 308 Permanent Redirect response if TLS is enabled for that Ingress.

This can be disabled globally using ssl-redirect: "false" in the NGINX config map, or per-Ingress with the nginx.ingress.kubernetes.io/ssl-redirect: "false" annotation in the particular resource.

Tip

When using SSL offloading outside of cluster (e.g. AWS ELB) it may be useful to enforce a redirect to HTTPS even when there is no TLS certificate available. This can be achieved by using the nginx.ingress.kubernetes.io/force-ssl-redirect: "true" annotation in the particular resource.

Automated Certificate Management with Kube-Lego
Tip

Kube-Lego has reached end-of-life and is being replaced by cert-manager.

Kube-Lego automatically requests missing or expired certificates from Let's Encrypt by monitoring ingress resources and their referenced secrets.

To enable this for an ingress resource you have to add an annotation:

kubectl annotate ing ingress-demo kubernetes.io/tls-acme="true"
To setup Kube-Lego you can take a look at this full example. The first version to fully support Kube-Lego is Nginx Ingress controller 0.8.

Default TLS Version and Ciphers
To provide the most secure baseline configuration possible,

nginx-ingress defaults to using TLS 1.2 only and a secure set of TLS ciphers.

Legacy TLS
The default configuration, though secure, does not support some older browsers and operating systems.

For instance, TLS 1.1+ is only enabled by default from Android 5.0 on. At the time of writing, May 2018, approximately 15% of Android devices are not compatible with nginx-ingress's default configuration.

To change this default behavior, use a ConfigMap.

A sample ConfigMap fragment to allow these older clients to connect could look something like the following:

kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-config
data:
  ssl-ciphers: "ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA"
  ssl-protocols: "TLSv1 TLSv1.1 TLSv1.2"

At-a-glance series: k8s exercise 7 - deploying ingress-nginx


Install the ingress service
Official docs:
https://github.com/kubernetes/ingress-nginx/blob/master/docs/deploy/index.md

Run directly:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/mandatory.yaml

Only editing mandatory.yaml takes effect; changing with-rbac.yaml does nothing. Two changes in total:
vim mandatory.yaml

188 apiVersion: apps/v1
189 kind: Deployment
190 metadata:
191   name: nginx-ingress-controller
192   namespace: ingress-nginx
193   labels:
194     app.kubernetes.io/name: ingress-nginx
195     app.kubernetes.io/part-of: ingress-nginx
196 spec:
       # change replicas to 2 so two controllers run at the same time
197   replicas: 2

210     spec:
       # add hostNetwork: true to expose the corresponding ports on the host itself;
       # the exact ports are defined later when the service is configured
211       hostNetwork: true
212       serviceAccountName: nginx-ingress-serviceaccount
213       containers:
214         - name: nginx-ingress-controller
215           image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.23.0

 

Apply it:
[[email protected] ingress]# kubectl apply -f mandatory.yaml
namespace/ingress-nginx created
configmap/nginx-configuration created
configmap/tcp-services created
configmap/udp-services created
serviceaccount/nginx-ingress-serviceaccount created
clusterrole.rbac.authorization.k8s.io/nginx-ingress-clusterrole created
role.rbac.authorization.k8s.io/nginx-ingress-role created
rolebinding.rbac.authorization.k8s.io/nginx-ingress-role-nisa-binding created
clusterrolebinding.rbac.authorization.k8s.io/nginx-ingress-clusterrole-nisa-binding created
deployment.apps/nginx-ingress-controller created

# check the status
[[email protected] ingress]# kubectl get pods -n ingress-nginx -o wide
NAME                                        READY   STATUS    RESTARTS   AGE   IP              NODE        NOMINATED NODE   READINESS GATES
nginx-ingress-controller-7966d94d6c-8prth   1/1     Running   0          19m   192.168.10.71   k8s-node2   <none>           <none>
nginx-ingress-controller-7966d94d6c-w5btd   1/1     Running   0          19m   192.168.10.69   k8s-node1   <none>           <none>
[[email protected] ingress]#

 

The service we want to expose:
[[email protected] ingress]# kubectl get svc |grep fr
NAME             TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
frontend-svc     NodePort    10.100.151.156   <none>        80:30011/TCP   6d1h
[[email protected] ingress]#

 

vim frontend-svc.yaml

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-nginx-test
spec:
  rules:
  - host: in1.ccie.wang
    http:
      paths:
      - path: /
        backend:
          serviceName: frontend-svc
          # this is the port of the service above; check it with kubectl get svc
          # the request is forwarded to port 80 of frontend-svc, just like an nginx upstream
          servicePort: 80
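Apply it, and if DNS is not set up yet the rule can be tested through a node IP with an explicit Host header (node IPs taken from the controller pod listing above):

kubectl apply -f frontend-svc.yaml
curl -H "Host: in1.ccie.wang" http://192.168.10.69/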

# check that the ingress was created correctly
[[email protected] ingress]# kubectl get ingress
NAME                 HOSTS           ADDRESS   PORTS   AGE
ingress-nginx-test   in1.ccie.wang             80      5m55s

 

Check that the corresponding port 80 is now listening on the nodes:
[[email protected] ~]# netstat -ntlp |grep :80
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      10319/nginx: master
tcp6       0      0 :::80                   :::*                    LISTEN      10319/nginx: master
[[email protected] ~]#
[[email protected] ~]# netstat -ntlp |grep 80
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      12085/nginx: master
tcp6       0      0 :::80                   :::*                    LISTEN      12085/nginx: master
[[email protected] ~]#

Then test from the master; it works:
[[email protected] ingress]# curl -s in2.ccie.wang |head -3
<html ng-app="redis">
  <head>
    <title>Guestbook</title>

 

 

 

 

   
------------------- Errors
   
   
[[email protected] ingress]# kubectl create -f frontend-svc.yaml
The Ingress "ingress-myServiceA" is invalid: metadata.name: Invalid value: "ingress-myServiceA":
a DNS-1123 subdomain must consist of lower case alphanumeric characters, '-' or '.',
and must start and end with an alphanumeric character
(e.g. 'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*')

Fix:
metadata.name must not contain upper-case letters. Change it to:
vim frontend-svc.yaml

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
# the name must not contain upper-case letters
  name: ingress-nginx-test
spec:
  rules:
  - host: in1.ccie.wang
    http:
      paths:
      - path: /web
        backend:
          serviceName: frontend-svc
          servicePort: 80
--------- Error 2
The test request cannot get through:
[[email protected] ~]# curl in1.ccie.wang/wen
curl: (7) Failed connect to in1.ccie.wang:80; Connection refused
[[email protected] ~]# curl in1.ccie.wang/web
curl: (7) Failed connect to in1.ccie.wang:80; Connection refused
[[email protected] ~]#

Go into the pod to check:
[[email protected] ingress]# kubectl exec -it nginx-ingress-controller-7966d94d6c-8prth -n ingress-nginx
/bin/bash
Check the configuration; it looks fine:
cat /etc/nginx/nginx.conf

Ping test: the name resolves to the wrong place. It resolves to k8s-master but should resolve to a node:
[[email protected] ingress]# ping in1.ccie.wang
PING in1.ccie.wang (192.168.10.68) 56(84) bytes of data.
64 bytes from k8s-master (192.168.10.68): icmp_seq=1 ttl=64 time=0.028 ms
64 bytes from k8s-master (192.168.10.68): icmp_seq=2 ttl=64 time=0.033 ms
^C

After fixing the DNS record, test again from the master; it works:
[[email protected] ingress]# curl -s in2.ccie.wang |head -3
<html ng-app="redis">
  <head>
    <title>Guestbook</title>

------------ Further details

https://github.com/kubernetes/ingress-nginx/blob/master/README.md
https://kubernetes.github.io/ingress-nginx/user-guide/basic-usage/

The ingress-nginx files live in the deploy directory; what each file does:

configmap.yaml: provides a configmap so the nginx configuration can be updated online
default-backend.yaml: provides a default backend error page (404)
namespace.yaml: creates a dedicated namespace, ingress-nginx
rbac.yaml: creates the corresponding role and rolebinding for RBAC
tcp-services-configmap.yaml: configmap for modifying the L4 (TCP) load-balancing configuration
udp-services-configmap.yaml: configmap for modifying the L4 (UDP) load-balancing configuration
with-rbac.yaml: the nginx-ingress-controller component with RBAC applied

Official installation method:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/mandatory.yaml

 

https://kubernetes.github.io/ingress-nginx/deploy/baremetal/

 

Via the host network
In a setup where there is no external load balancer available but using NodePorts is not an option,
one can configure ingress-nginx Pods to use the network of the host they run on instead of
a dedicated network namespace. The benefit of this approach is that the NGINX Ingress controller
can bind ports 80 and 443 directly to Kubernetes nodes' network interfaces,
without the extra network translation imposed by NodePort Services.

This can be achieved by enabling the hostNetwork option in the Pods' spec.

template:
  spec:
    hostNetwork: true
   
   
   
Notes:

The host in rules must be a domain name, not an IP: it is the domain of the host where the Ingress-controller Pod runs, i.e. the domain that resolves to the Ingress-controller's IP.
The path in paths is the mapped path. Mapping /, for example, means a request to myk8s.com is forwarded to the Kibana service on port 5601.

At-a-glance series: k8s exercise 5 - scheduling pods to a specific node


 

Check the current info for node2:
[[email protected] elk]#  kubectl describe node k8s-node2 |grep -C 5 Lab
Name:               k8s-node2
Roles:              <none>
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/hostname=k8s-node2
Annotations:        flannel.alpha.coreos.com/backend-data: {"VtepMAC":"3e:27:fd:0f:47:76"}
                    flannel.alpha.coreos.com/backend-type: vxlan
                    flannel.alpha.coreos.com/kube-subnet-manager: true
[[email protected] elk]#

1. Add a label

The command format: kubectl label nodes <node-name> <any-key>=<any-value>

[[email protected] elk]# kubectl label nodes k8s-node2 mylabel=100
node/k8s-node2 labeled
[[email protected] elk]#  kubectl describe node k8s-node2 |grep -C 10 Lab
Name:               k8s-node2
Roles:              <none>
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/hostname=k8s-node2
                    mylabel=100    # <-- this is the label we just added
Annotations:        flannel.alpha.coreos.com/backend-data: {"VtepMAC":"3e:27:fd:0f:47:76"}
                    flannel.alpha.coreos.com/backend-type: vxlan
                    flannel.alpha.coreos.com/kube-subnet-manager: true
                    flannel.alpha.coreos.com/public-ip: 192.168.10.71
                    kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
[[email protected] elk]#
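For reference, the same kubectl label command can also change or remove the label later:

kubectl label nodes k8s-node2 mylabel=200 --overwrite    # change the value
kubectl label nodes k8s-node2 mylabel-                   # remove the label
kubectl get nodes -l mylabel=100                         # list the nodes that carry the label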

2. Then try it with a container:
vim busybox-pod5.yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox-testxx4
  labels:
    name: busybox-pod-lb
spec:
  containers:
  - name: busybox-xxx4
    image: reg.ccie.wang/library/busybox:1.30.1
    command:
    - sleep
    - "3600"
  # use nodeSelector below to pick the target node
  nodeSelector:
    mylabel: "100"

Create it:
kubectl apply -f busybox-pod5.yaml

Verify:

[[email protected] busybox]# kubectl get pod -o wide
NAME                       READY   STATUS    RESTARTS   AGE    IP             NODE        NOMINATED NODE   READINESS GATES
busybox-testxx4            1/1     Running   0          54s    10.244.2.88   k8s-node2   <none>           <none>
You can see it landed on node2.

Check the pod details:
[[email protected] busybox]# kubectl describe pod busybox-testxx4
Name:               busybox-testxx4
Labels:             name=busybox-pod-lb
IP:                 10.244.2.88
Containers:
  busybox-xxx4:
    Image:         reg.ccie.wang/library/busybox:1.30.1
    Command:
      sleep
      3600
      # the new label
Node-Selectors:  mylabel=100

Events:
  Type    Reason     Age    From                Message
  ----    ------     ----   ----                -------
  Normal  Scheduled  4m17s  default-scheduler   Successfully assigned default/busybox-testxx4 to k8s-node2
  Normal  Pulled     4m16s  kubelet, k8s-node2  Container image "reg.ccie.wang/library/busybox:1.30.1" already present on machine
  Normal  Created    4m16s  kubelet, k8s-node2  Created container
  Normal  Started    4m16s  kubelet, k8s-node2  Started container
[[email protected] busybox]#

 

 

 

------------ Error: the pod did not start
[[email protected] busybox]# kubectl get pod -o wide
NAME                       READY   STATUS        RESTARTS   AGE    IP             NODE        NOMINATED NODE   READINESS GATES
busybox-testxx1            0/1     Pending       0          38m    <none>         k8s-node2   <none>           <none>

[[email protected] busybox]# kubectl describe pod busybox-testxx1
Name:               busybox-testxx1
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               k8s-node2/
Labels:             name=busybox-pod-lb
Annotations:        <none>
Status:             Pending
IP:                
Containers:
  busybox-xxx1:
    Image:        busybox
Node-Selectors:  mylabel=100

Events:
  Type     Reason     Age                   From                Message
  ----     ------     ----                  ----                -------
  Normal   Scheduled  3m32s                 default-scheduler   Successfully assigned default/busybox-testxx5 to k8s-node2
  Normal   Pulled     113s (x5 over 3m31s)  kubelet, k8s-node2  Container image "reg.ccie.wang/library/busybox:1.30.1" already present on machine
  Normal   Created    113s (x5 over 3m30s)  kubelet, k8s-node2  Created container
  Normal   Started    113s (x5 over 3m30s)  kubelet, k8s-node2  Started container
  Warning  BackOff    99s (x10 over 3m28s)  kubelet, k8s-node2  Back-off restarting failed container

For an OS-level image like ubuntu, once it is started and managed by the k8s cluster it exits automatically.
The fix is simply to keep a process running, so add a command to the yaml file.

The cause was this misconfiguration:
apiVersion: v1
kind: Pod
metadata:
  name: busybox-testxx1
  labels:
    name: busybox-pod-lb
spec:
  containers:
  - name: busybox-xxx1
    image: busybox
# the following 3 command lines need to be added
    command:
    - sleep
    - "3600"

  nodeSelector:
    mylabel: "100"
   
   
Or anything that keeps a process running will do:
     command: [ "/bin/bash", "-c", "--" ]
     args: [ "while true; do sleep 30; done;" ]

At-a-glance series: k8s exercise 4 - pod rolling upgrade


Pod rolling upgrade

rolling-update can be used for rolling upgrades, rollbacks, pod restarts and similar operations.

The full configuration files are at the bottom.

[[email protected] redis_rolling]#  kubectl rolling-update redis-master-rc-v2 -f redis-master-rc-v4.yaml
Command "rolling-update" is deprecated, use "rollout" instead

Phase 1
Created redis-master-rc-v4  # starts by creating redis-master-rc-v4
Phase 2
Scaling up redis-master-rc-v4 from 0 to 4,  # scale redis-master-rc-v4 up to 4 replicas
scaling down redis-master-rc-v2 from 2 to 0  # scale redis-master-rc-v2 from 2 down to 0
(keep 4 pods available, don't exceed 5 pods)
Phase 3
Scaling redis-master-rc-v4 up to 3  # v4 up to 3
Scaling redis-master-rc-v2 down to 1 # v2 down to 1
Scaling redis-master-rc-v4 up to 4  # v4 up to 4
Scaling redis-master-rc-v2 down to 0 # v2 down to 0
Phase 4
Update succeeded.
Deleting old controller: redis-master-rc-v2 # after a successful update the old rc-v2 is deleted
Renaming redis-master-rc-v4 to redis-master-rc-v2  # rename v4 back to v2
replicationcontroller/redis-master-rc-v2 rolling updated # upgrade complete
Done.

Verify:
[[email protected] ~]# kubectl get rc,pod
NAME                                       DESIRED   CURRENT   READY   AGE
replicationcontroller/redis-master-rc-v2   2         2         2       11m
# just after starting (phases 1-2): both RCs exist at the same time
replicationcontroller/redis-master-rc-v4   3         3         3       50s

NAME                           READY   STATUS    RESTARTS   AGE
# just after starting (phases 1-2): both RCs exist at the same time
pod/redis-master-rc-v3-69rjr   1/1     Running   0          12m
pod/redis-master-rc-v3-vhnbs   1/1     Running   0          12m
pod/redis-master-rc-v4-cp9h8   1/1     Running   0          50s
pod/redis-master-rc-v4-jfhcx   1/1     Running   0          50s
pod/redis-master-rc-v4-nqqkp   1/1     Running   0          50s

A while later  # phases 3-4
[[email protected] ~]# kubectl get rc,pod
NAME                                       DESIRED   CURRENT   READY   AGE
# only one RC is left
replicationcontroller/redis-master-rc-v2   4         4         4       11m

NAME                           READY   STATUS      RESTARTS   AGE
Only the new pods are left
pod/redis-master-rc-v4-cp9h8   1/1     Running     0          14m
pod/redis-master-rc-v4-h5f6r   1/1     Running     0          13m
pod/redis-master-rc-v4-jfhcx   1/1     Running     0          14m
pod/redis-master-rc-v4-nqqkp   1/1     Running     0          14m
[[email protected] ~]# kubectl get rc,pod

The result of a normal rolling-update via the command is that the original pods and RC are replaced,
and the new rc-v4 is renamed back to the original rc-v2, keeping the service available (a Deployment-based equivalent is sketched below).
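Since rolling-update is deprecated, the equivalent flow with a Deployment would look roughly like this (a sketch, not part of the original exercise):

kubectl create deployment redis-master --image=kubeguide/redis-master:1.0
kubectl set image deployment/redis-master redis-master=kubeguide/redis-master:latest
kubectl rollout status deployment/redis-master
kubectl rollout undo deployment/redis-master    # roll back if the new image misbehaves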

 

Below is an upgrade done directly on the command line: kubectl rolling-update <current RC> --image=<image to upgrade to>
[[email protected] ~]# kubectl rolling-update redis-master-rc  --image=kubeguide/redis-master:1.0
Command "rolling-update" is deprecated, use "rollout" instead
Found existing update in progress (redis-master-rc-v2), resuming.
Continuing update with existing controller redis-master-rc-v2.
Scaling up redis-master-rc-v2 from 1 to 1, scaling down redis-master-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)
Scaling redis-master-rc down to 0
Update succeeded. Deleting redis-master-rc
replicationcontroller/redis-master-rc-v2 rolling updated to "redis-master-rc-v2"

Verify:
[[email protected] yaml]# kubectl get rc,pod
NAME                                       DESIRED   CURRENT   READY   AGE

replicationcontroller/redis-master-rc-v2   1         1         1       19m 

NAME                           READY   STATUS    RESTARTS   AGE
pod/redis-master-rc-v2-2rc4x   1/1     Running   0          19m  #

[[email protected] yaml]# kubectl describe pod/redis-master-rc-v2-2rc4x
Name:               redis-master-rc-v2-2rc4x
Namespace:          default
Node:               k8s-node1/192.168.10.69
Labels:             deployment=4e423afb21f081b285503ab911a2c748
                    name=redis-master-lb
Containers:
  master:
    Image:          kubeguide/redis-master:1.0

Events:
  Type    Reason     Age   From                Message
  ----    ------     ----  ----                -------
  Normal  Scheduled  28m   default-scheduler   Successfully assigned default/redis-master-rc-v2-2rc4x to k8s-node1
  Normal  Pulling    28m   kubelet, k8s-node1  pulling image "kubeguide/redis-master:1.0"
  Normal  Pulled     28m   kubelet, k8s-node1  Successfully pulled image "kubeguide/redis-master:1.0"
  Normal  Created    28m   kubelet, k8s-node1  Created container
  Normal  Started    28m   kubelet, k8s-node1  Started container
[[email protected] yaml]#

----------- Configuration method 1 ------------
Config file v2
redis-master-rc-v2.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: redis-master-rc-v2  # must differ from the RC being replaced
  labels:
    names: redis-master-lb-2   # must differ from the RC being replaced
spec:
  replicas: 2
  selector:
    name: redis-master-lb-2 # must differ from the RC being replaced
  template:
    metadata:
      labels:
        name: redis-master-lb-2  # must differ from the RC being replaced
    spec:
     containers:
     - name: master
       image: kubeguide/redis-master:1.0  # must differ from the RC being replaced
       ports:
       - containerPort: 6379
      
Config file v4
redis-master-rc-v4.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: redis-master-rc-v4  # must differ from v2
  labels:
    names: redis-master-lb-4 # must differ from v2
spec:
  replicas: 4
  selector:
    name: redis-master-lb-4 # must differ from v2
  template:
    metadata:
      labels:
        name: redis-master-lb-4  # must differ from v2
    spec:
     containers:
     - name: master
       image: kubeguide/redis-master:latest  # must differ from v2
       ports:
       - containerPort: 6379

----------- Configuration method 2 ------------
Config file v2
redis-master-rc-v2.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: redis-master-rc-v2  # must differ when upgrading
  labels:
    names: redis-master-lb
    ver: v1                 # this key must already exist when the rc is created, otherwise the later upgrade fails
spec:
  replicas: 2
  selector:
    name: redis-master-lb
    ver: v1                # this key must already exist when the rc is created, otherwise the later upgrade fails

  template:
    metadata:
      labels:
        name: redis-master-lb-2
        ver: v1            # this key must already exist when the rc is created, otherwise the later upgrade fails

    spec:
     containers:
     - name: master
       image: kubeguide/redis-master:1.0  # must differ when upgrading
       ports:
       - containerPort: 6379
      
Config file v4
redis-master-rc-v4.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: redis-master-rc-v4
  labels:
    names: redis-master-lb-4
    ver: v2                # this key must already exist in the old version; its value must be different this time, otherwise the upgrade fails
spec:
  replicas: 4
  selector:
    name: redis-master-lb
    ver: v2                # this key must already exist in the old version; its value must be different this time, otherwise the upgrade fails

  template:
    metadata:
      labels:
        name: redis-master-lb
        ver: v2                # this key must already exist in the old version; its value must be different this time, otherwise the upgrade fails
    spec:
     containers:
     - name: master
       image: kubeguide/redis-master:latest  # must differ from v2
       ports:
       - containerPort: 6379

 

------------- If the upgrade went wrong, roll back with: kubectl rolling-update <RC name> --image=<image> --rollback
[[email protected] redis_rolling]# kubectl rolling-update redis-master-rc-v2 --image=kubeguide/redis-master:1.0 --rollback
Command "rolling-update" is deprecated, use "rollout" instead
error: Don't specify --filename or --image on rollback
See 'kubectl rolling-update -h' for help and examples.
[[email protected] redis_rolling]#

If the upgrade completed normally, you cannot roll back this way.

----------- Errors
[[email protected] yaml]# kubectl rolling-update redis-master-rc -f redis-master-rc-v2.yaml
Command "rolling-update" is deprecated, use "rollout" instead
error: redis-master-rc-v2.yaml contains a /v1, Kind=ReplicationController not a ReplicationController
See Configuration method 2 above for the problem and the fix.

Error 2
[[email protected] redis_rolling]# kubectl rolling-update redis-master-rc-v2 -f redis-master-rc-v3.yaml
Command "rolling-update" is deprecated, use "rollout" instead
error: redis-master-rc-v3.yaml must specify a matching key with non-equal value in Selector for redis-master-rc-v2
[[email protected] redis_rolling]# !v
Explanation:
must specify a matching key with non-equal value in Selector for redis-master-rc-v2
This error means the problem is in the configuration file.

See Configuration method 2 above for the problem and the fix.

k8s pod template


pod template

apiVersion: v1            # required; API version, e.g. v1; must be one of the versions listed by kubectl api-versions
kind: Pod                 # required; Pod
metadata:                 # required; metadata
  name: string            # required; Pod name
  namespace: string       # Pod's namespace, defaults to "default"
  labels:                 # custom labels
    - name: string        # custom label name
  annotations:            # custom annotation list
    - name: string
spec:                     # required; detailed definition of the containers in the Pod
  containers:             # required; list of containers in the Pod
  - name: string          # required; container name, must conform to RFC 1035
    image: string         # required; container image name
    imagePullPolicy: [ Always|Never|IfNotPresent ]  # image pull policy: Always always pulls; IfNotPresent prefers the local image and pulls only if it is missing; Never only uses the local image
    command: [string]     # container start command list; if not specified, the command baked into the image is used
    args: [string]        # argument list for the start command
    workingDir: string    # container working directory
    volumeMounts:         # storage volumes mounted into the container
    - name: string        # name of a shared volume defined in the pod's volumes[] section
      mountPath: string   # absolute mount path inside the container, should be fewer than 512 characters
      readOnly: boolean   # whether the mount is read-only
    ports:                # list of ports to expose
    - name: string        # port name
      containerPort: int  # port the container listens on
      hostPort: int       # port the host listens on, defaults to the same as containerPort
      protocol: string    # port protocol, TCP or UDP, defaults to TCP
    env:                  # environment variables to set before the container runs
    - name: string        # environment variable name
      value: string       # environment variable value
    resources:            # resource requests and limits
      limits:             # resource limits
        cpu: string       # CPU limit in cores, used for docker run --cpu-shares
        memory: string    # memory limit, e.g. Mib/Gib, used for docker run --memory
      requests:           # resource requests
        cpu: string       # CPU request, the initial amount available when the container starts
        memory: string    # memory request, the initial amount available when the container starts
    livenessProbe:        # health check for each container; the container is restarted automatically after several failed probes. Methods are exec, httpGet and tcpSocket; configure only one per container
      exec:               # exec-style check
        command: [string] # command or script to run
      httpGet:            # httpGet-style check; requires path and port
        path: string
        port: number
        host: string
        scheme: string
        HttpHeaders:
        - name: string
          value: string
      tcpSocket:          # tcpSocket-style check
        port: number
      initialDelaySeconds: 0  # seconds to wait after the container starts before the first probe
      timeoutSeconds: 0       # probe timeout in seconds, default 1
      periodSeconds: 0        # probe interval in seconds, default 10
      successThreshold: 0
      failureThreshold: 0
    securityContext:
      privileged: false
  restartPolicy: [Always | Never | OnFailure]  # restart policy: Always restarts however the container terminated; OnFailure restarts only on a non-zero exit code; Never never restarts
  nodeSelector: object    # schedule the Pod onto nodes carrying these labels, given as key: value pairs
  imagePullSecrets:       # secret names used when pulling the image, given as name: secretkey
  - name: string
  hostNetwork: false      # whether to use host networking; default false; true means use the host's network
  volumes:                # list of shared storage volumes defined on this pod
  - name: string          # shared volume name (there are many volume types)
    emptyDir: {}          # emptyDir volume: a temporary directory with the same lifetime as the Pod; value is empty
    hostPath:             # hostPath volume: mounts a directory from the Pod's host
      path: string        # the host directory that will be mounted
    secret:               # secret volume: mounts a predefined secret object into the container
      secretName: string
      items:
      - key: string
        path: string
    configMap:            # configMap volume: mounts a predefined configMap object into the container
      name: string
      items:
      - key: string
        path: string
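A minimal concrete pod that exercises a few of the fields above (a hypothetical example: the name web-demo and image nginx:1.15 are placeholders, and mylabel reuses the label from the scheduling exercise):

apiVersion: v1
kind: Pod
metadata:
  name: web-demo
  labels:
    app: web-demo
spec:
  containers:
  - name: web
    image: nginx:1.15
    imagePullPolicy: IfNotPresent
    ports:
    - containerPort: 80
    resources:
      requests:
        cpu: 100m
        memory: 64Mi
      limits:
        cpu: 500m
        memory: 128Mi
    livenessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 10
  restartPolicy: Always
  nodeSelector:
    mylabel: "100"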
