 

Date: April 4, 2019

Must-read series: k8s exercise 10: log collection with ELK and Fluentd in practice


Download the corresponding YAML files from the official repo:
https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/fluentd-elasticsearch

-rw-r--r--  1 root root      382 Apr  3 23:28 es-service.yaml
-rw-r--r--  1 root root     2900 Apr  4 04:15 es-statefulset.yaml
-rw-r--r--  1 root root    16124 Apr  3 23:28 fluentd-es-configmap.yaml
-rw-r--r--  1 root root     2717 Apr  4 06:19 fluentd-es-ds.yaml
-rw-r--r--  1 root root     1166 Apr  4 05:46 kibana-deployment.yaml
-rw-r--r--  1 root root      272 Apr  4 05:27 kibana-ingress.yaml  # covered later
-rw-r--r--  1 root root      354 Apr  3 23:28 kibana-service.yaml

Important: pull the images exactly as referenced in the YAML files, otherwise you will hit all kinds of errors.

Run this first:
kubectl create -f fluentd-es-configmap.yaml
configmap/fluentd-es-config-v0.2.0 created

Then run:
[[email protected] elk]# kubectl create -f fluentd-es-ds.yaml
serviceaccount/fluentd-es created
clusterrole.rbac.authorization.k8s.io/fluentd-es created
clusterrolebinding.rbac.authorization.k8s.io/fluentd-es created
daemonset.apps/fluentd-es-v2.5.0 created

[[email protected] elk]# kubectl get pod -n kube-system |grep flu
fluentd-es-v2.5.0-hjzw8                 1/1     Running   0          19s
fluentd-es-v2.5.0-zmlm2                 1/1     Running   0          19s
[[email protected] elk]#

Next, start Elasticsearch:
[[email protected] elk]# kubectl create -f es-statefulset.yaml
serviceaccount/elasticsearch-logging created
clusterrole.rbac.authorization.k8s.io/elasticsearch-logging created
clusterrolebinding.rbac.authorization.k8s.io/elasticsearch-logging created
statefulset.apps/elasticsearch-logging created
[[email protected] elk]# kubectl create -f es-service.yaml
service/elasticsearch-logging created
[[email protected] elk]#

[[email protected] elk]# kubectl get pod -n kube-system |grep elas
elasticsearch-logging-0                 1/1     Running   0          11s
elasticsearch-logging-1                 1/1     Running   0          8s
[[email protected] elk]#

Then start Kibana (kibana/kibana):
kubectl create -f kibana-deployment.yaml
kubectl get pod -n kube-system
kubectl create -f kibana-service.yaml

Verify:
[[email protected] elk]# kubectl get pod,svc -n kube-system |grep kiba
pod/kibana-logging-65f5b98cf6-2p8cj         1/1     Running   0          46s

service/kibana-logging          ClusterIP   10.100.152.68   <none>        5601/TCP        21s
[[email protected] elk]#

Check the cluster info:
[[email protected] elk]# kubectl cluster-info
Elasticsearch is running at https://192.168.10.68:6443/api/v1/namespaces/kube-system/services/elasticsearch-logging/proxy
Kibana is running at https://192.168.10.68:6443/api/v1/namespaces/kube-system/services/kibana-logging/proxy

Because only the container ports are exposed, the services cannot be reached from outside machines. There are several ways to access them.

1. Start a kubectl proxy (on the master)
# This runs in the foreground and stops when you exit. --address is the master's IP; in fact any node will do.
kubectl proxy --address='192.168.10.68' --port=8085 --accept-hosts='^*$'

To run it in the background, use: nohup kubectl proxy --address='192.168.10.68' --port=8085 --accept-hosts='^*$' &
Check on the master that the port is listening:
netstat -ntlp |grep 80
tcp        0      0 192.168.10.68:2380      0.0.0.0:*               LISTEN      8897/etcd          
tcp        0      0 192.168.10.68:8085      0.0.0.0:*               LISTEN      16718/kubectl  

Verify directly in a browser:
http://192.168.10.68:8085/api/v1/namespaces/kube-system/services/kibana-logging/proxy/app/kibana#/home/tutorial_directory/sampleData?_g=()
If the page loads, everything is working.

Steps inside Kibana to get the logs showing:
1. Click Management in the left menu.
2. Create an index pattern (Create index pattern).
3. Enter * to see the actual index names.
4. For example logstash-2019.03.25; change the pattern to logstash-* and click Next through to the end.
4.1 Be sure to click the star to make logstash-* the default index.
5. Open Discover and you can see the logs.
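
Before creating the index pattern, you can confirm that Fluentd is actually writing logstash-* indices into Elasticsearch. A minimal check through the kubectl proxy started above (the _cat/indices endpoint is standard Elasticsearch; the proxy address is the one used in this example):

curl http://192.168.10.68:8085/api/v1/namespaces/kube-system/services/elasticsearch-logging/proxy/_cat/indices?v

If indices named logstash-YYYY.MM.DD show up, Kibana will be able to match them with logstash-*.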

Verification result; the following output is normal. Note that this is plain HTTP, not HTTPS.
curl http://192.168.10.68:8085/api/v1/namespaces/kube-system/services/elasticsearch-logging/proxy/
{
  "name" : "bc30CKf",
  "cluster_name" : "docker-cluster",
  "cluster_uuid" : "C3oV5BnMTByxYltuuYjTjg",
  "version" : {
    "number" : "6.7.0",
    "build_flavor" : "default",
    "build_type" : "docker",
    "build_hash" : "8453f77",
    "build_date" : "2019-03-21T15:32:29.844721Z",
    "build_snapshot" : false,
    "lucene_version" : "7.7.0",
    "minimum_wire_compatibility_version" : "5.6.0",
    "minimum_index_compatibility_version" : "5.0.0"
  },
  "tagline" : "You Know, for Search"
}

Method 2:

[[email protected] elk]# kubectl get ingress -n kube-system -o wide
NAME             HOSTS           ADDRESS   PORTS   AGE
kibana-logging   elk.ccie.wang             80      6m42s

It works in principle, but it returns a 404; the cause still needs to be investigated.


Create the Ingress
The configuration file kibana-ingress.yaml is as follows:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kibana-logging-ingress
  namespace: kube-system
spec:
  rules:
  - host: elk.ccie.wang
    http:
      paths:
      - path: /
        backend:
          serviceName: kibana-logging
          servicePort: 5601

kubectl create -f kibana-ingress.yaml
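
If the ingress controller from exercise 7 below is running with hostNetwork on the nodes, a quick way to test the new rule without touching DNS is to send the Host header directly to a node IP. A sketch, assuming node IP 192.168.10.69 from the node listing later in this post:

curl -I -H 'Host: elk.ccie.wang' http://192.168.10.69/

A 200 (or a redirect from Kibana) means the rule matched; a 404 means the request reached the controller but no rule matched.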

 

Verify the resources created from the file:
[[email protected] elk]# kubectl get -f fluentd-es-ds.yaml
NAME                        SECRETS   AGE
serviceaccount/fluentd-es   1         85s

NAME                                               AGE
clusterrole.rbac.authorization.k8s.io/fluentd-es   85s

NAME                                                      AGE
clusterrolebinding.rbac.authorization.k8s.io/fluentd-es   85s

NAME                               DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/fluentd-es-v2.5.0   2         2         2       2            2           <none>          85s
[[email protected] elk]#

---------- Errors

[[email protected] elk]# kubectl get pod -n kube-system |grep elas
elasticsearch-logging-0                 0/1     ErrImagePull   0          71s
[[email protected] elk]#
The image pull failed.

      containers:
      # change the image below
      #- image: gcr.io/fluentd-elasticsearch/elasticsearch:v6.6.1
      - image: reg.ccie.wang/library/elk/elasticsearch:6.7.0
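
To avoid editing each manifest by hand, the gcr.io image can be swapped for the private-registry copy with sed. A sketch, reusing the two image names shown above (adjust the tag to whatever you actually mirrored):

sed -i 's#gcr.io/fluentd-elasticsearch/elasticsearch:v6.6.1#reg.ccie.wang/library/elk/elasticsearch:6.7.0#' es-statefulset.yaml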

 

 

 

----------------Further reading
1. fluentd
How to use this image:

docker run -d -p 24224:24224 -p 24224:24224/udp -v /data:/fluentd/log fluent/fluentd:v1.3-debian-1

The default configuration is as follows:
Listens on port 24224
Stores records tagged docker.** to /fluentd/log/docker.*.log (and symlink docker.log)
Stores all other logs to /fluentd/log/data.*.log (and symlink data.log)

Of course you can also customize the parameters:

docker run -ti --rm -v /path/to/dir:/fluentd/etc fluentd -c /fluentd/etc/<config-file> -v

The first -v mounts /path/to/dir to /fluentd/etc inside the container.

What comes before -c is the image name; -c tells fluentd where to find the configuration file.
The trailing -v enables verbose output from fluentd.
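
For reference, a minimal fluent.conf that reproduces the default behaviour described above is sketched below (forward input on 24224, docker.** records to one file, everything else to another). This is an assumption of what such a config could look like, not the image's exact built-in file:

<source>
  @type forward
  port 24224
  bind 0.0.0.0
</source>

<match docker.**>
  @type file
  path /fluentd/log/docker
</match>

<match **>
  @type file
  path /fluentd/log/data
</match>

Mount the directory containing it to /fluentd/etc and pass -c /fluentd/etc/fluent.conf as shown above.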

Switch the user the container runs as to foo:

docker run -p 24224:24224 -u foo -v …

Must-read series: k8s exercise 9: ingress SSL/HTTPS with multiple certificates in practice


ingress-nginx HTTPS/SSL with multiple certificates
Create a self-signed certificate
# openssl req -x509 -nodes -days 365 \
-newkey rsa:2048 -keyout xxx.yyy.key \
-out xxx.yyy.crt \
-subj "/CN=*.xxx.yyy/O=xxx.yyy"
Option 1: one secret per certificate  # officially recommended
[[email protected] ssl]# kubectl create secret tls tls.ccie.wang --key ccie.wang.key --cert ccie.wang.crt
[[email protected] ssl]# kubectl create secret tls tls.xxx.yyy --key xxx.yyy.key --cert xxx.yyy.crt

View the secrets:
[[email protected] ssl]# kubectl get secret
NAME                  TYPE                                  DATA   AGE
default-token-tkfmx   kubernetes.io/service-account-token   3      30d
tls.ccie.wang         kubernetes.io/tls                     2      78m
tls.xxx.yyy           kubernetes.io/tls                     2      12s
[[email protected] ssl]#
Create the HTTPS Ingress:
[[email protected] ssl]# kubectl apply -f xxx.yyy.yaml
ingress.extensions/nginx-xxx-yyy-test created
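
The xxx.yyy.yaml manifest itself is not shown in this post; a sketch of what an option-1 manifest could look like, assuming it fronts the same frontend-svc guestbook backend as the ccie.wang example in exercise 8 below:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-xxx-yyy-test
spec:
  tls:
    - hosts:
      - in4ssl.xxx.yyy
      secretName: tls.xxx.yyy
  rules:
    - host: in4ssl.xxx.yyy
      http:
        paths:
        - path: /
          backend:
            serviceName: frontend-svc
            servicePort: 80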

Check the Ingress status:
[[email protected] ssl]# kubectl get ingress
NAME                   HOSTS              ADDRESS   PORTS     AGE
ingress-nginx-test     in2.ccie.wang                80        23h
nginx-ccie-wang-test   in4ssl.ccie.wang             80, 443   37m   # ports 80 and 443 are generated automatically
nginx-xxx-yyy-test     in4ssl.xxx.yyy               80, 443   9s
[[email protected] ssl]#
Verify:
[email protected]:/etc/nginx/conf.d# curl -s https://in4ssl.xxx.yyy -k |head -5
<html ng-app="redis">
<head>
<title>Guestbook</title>
<link rel="stylesheet" href="bootstrap.min.css">
<script src="angular.min.js"></script>
[email protected]:/etc/nginx/conf.d#
Option 2: all certificates in a single secret name (tested, does not work)
# Put the certificates for both domains into one secret
# kubectl create secret generic tow-cert \
--from-file=ccie.wang.key \
--from-file=ccie.wang.crt \
--from-file=xxx.yyy.key \
--from-file=xxx.yyy.crt -n default

View the secret:
[[email protected] ssl]# kubectl describe secret tow-cert
Name:         tow-cert
Namespace:    default
Labels:       <none>
Annotations:  <none>

Type:  Opaque

Data
# contains both certificates
ccie.wang.crt:  3622 bytes
ccie.wang.key:  1732 bytes
xxx.yyy.crt:    1143 bytes
xxx.yyy.key:    1704 bytes
Actual testing shows the served certificate is wrong; what gets loaded is default-fake-certificate.pem.
It might be possible to mount the certificates via a ConfigMap, but that is more trouble than configuring each certificate separately.
Normally these lines should reference tow-cert, but instead they show:
ssl_certificate         /etc/ingress-controller/ssl/default-fake-certificate.pem;
ssl_certificate_key     /etc/ingress-controller/ssl/default-fake-certificate.pem;

--------------Errors
[email protected]:/etc/nginx/conf.d# curl https://!$
curl https://in4ssl.xxx.yyy
curl: (60) SSL certificate problem: self signed certificate
# Workaround: skip certificate verification
curl https://172.16.0.168/api/v4/projects?search=xxxx -k

wget 'https://172.16.0.168/api/v4/projects?search=xxxx' --no-check-certificate

Must-read series: k8s exercise 8: ingress HTTPS/SSL


 

k8s ingress-nginx ssl

Generate a certificate
https://github.com/kubernetes/ingress-nginx/blob/master/docs/examples/PREREQUISITES.md

Generate the certificate and key:
$ openssl req -x509 -sha256 -nodes -days 365 \
         -newkey rsa:2048 \
         -keyout tls.key \
         -out tls.crt \
         -subj "/CN=nginxsvc/O=nginxsvc"
Generating a 2048 bit RSA private key
.......................+++
.......................+++
writing new private key to 'tls.key'
-----

Command format:
kubectl create secret tls ${CERT_NAME} --key ${KEY_FILE} --cert ${CERT_FILE}

Create the secret; later it is mounted in the container by this TLS name.
$ kubectl create secret tls tls-secret --key tls.key --cert tls.crt
secret "tls-secret" created  # the quoted string is the name

Use a real HTTPS certificate
[[email protected] ssl]# ll
total 16
-rw-r--r-- 1 root root 3622 Mar  4 07:17 ccie.wang.crt    - certificate
-rw-r--r-- 1 root root 1732 Jan 28 16:41 ccie.wang.key    - key
-rw-r--r-- 1 root root  308 Apr  2 04:57 ccie.wang.yaml
-rw-r--r-- 1 root root 2992 Mar 31 19:38 multi-tls.yaml
[[email protected] ssl]# kubectl create secret tls tls.ccie.wang --key ccie.wang.key --cert ccie.wang.crt
secret/tls.ccie.wang created  # secret created successfully

Configure the Ingress:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-ccie-wang-test
spec:
  tls:
    - hosts:
        # other hosts also work here; not sure why
      - in4ssl.ccie1.wang
      secretName: tls.ccie.wang
  rules:
      # this host must be correct; it corresponds to nginx's server_name
    - host: in4ssl.ccie.wang
      http:
        paths:
        - path: /
          backend:
             # the service must already exist, and so must the port
            serviceName: frontend-svc
             # port 443 is opened automatically after startup; no need to define it
            servicePort: 80

Verification:
Enter the ingress pod and dump its config:
kubectl exec nginx-ingress-controller-7966d94d6c-8prth \
-n ingress-nginx -- cat /etc/nginx/nginx.conf > nginx.conf

Request the HTTPS page directly; output like the following is normal:
[[email protected] ssl]# curl -s https://in4ssl.ccie.wang |head -5
<html ng-app="redis">
  <head>
    <title>Guestbook</title>
    <link rel="stylesheet" href="bootstrap.min.css">
    <script src="angular.min.js"></script>
[[email protected] ssl]#

Request the plain HTTP page; being redirected like this is expected:
[[email protected] ssl]# curl in4ssl.ccie.wang
<html>
<head><title>308 Permanent Redirect</title></head>
<body>
<center><h1>308 Permanent Redirect</h1></center>
<hr><center>nginx/1.15.9</center>
</body>
</html>

 

--------------Further reading

https://kubernetes.github.io/ingress-nginx/user-guide/tls/

TLS Secrets
Anytime we reference a TLS secret, we mean a PEM-encoded X.509, RSA (2048) secret.

You can generate a self-signed certificate and private key with:

Generate the certificate:
$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout ${KEY_FILE} -out ${CERT_FILE} -subj "/CN=${HOST}/O=${HOST}"
Then create the secret in the cluster via:

Create the secret:
kubectl create secret tls ${CERT_NAME} --key ${KEY_FILE} --cert ${CERT_FILE}
The resulting secret will be of type kubernetes.io/tls.

Default SSL Certificate
NGINX provides the option to configure a server as a catch-all with server_name for requests that do not match any of the configured server names. This configuration works out-of-the-box for HTTP traffic. For HTTPS, a certificate is naturally required.

For this reason the Ingress controller provides the flag --default-ssl-certificate. The secret referred to by this flag contains the default certificate to be used when accessing the catch-all server. If this flag is not provided NGINX will use a self-signed certificate.

For instance, if you have a TLS secret foo-tls in the default namespace, add --default-ssl-certificate=default/foo-tls in the nginx-controller deployment.
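
A sketch of where that flag goes in the controller Deployment from mandatory.yaml (only the relevant args are shown; foo-tls is the example secret name from the paragraph above):

      containers:
        - name: nginx-ingress-controller
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.23.0
          args:
            - /nginx-ingress-controller
            - --default-ssl-certificate=default/foo-tls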

SSL Passthrough
The --enable-ssl-passthrough flag enables the SSL Passthrough feature, which is disabled by default. This is required to enable passthrough backends in Ingress objects.

Warning

This feature is implemented by intercepting all traffic on the configured HTTPS port (default: 443) and handing it over to a local TCP proxy. This bypasses NGINX completely and introduces a non-negligible performance penalty.

SSL Passthrough leverages SNI and reads the virtual domain from the TLS negotiation, which requires compatible clients. After a connection has been accepted by the TLS listener, it is handled by the controller itself and piped back and forth between the backend and the client.

If there is no hostname matching the requested host name, the request is handed over to NGINX on the configured passthrough proxy port (default: 442), which proxies the request to the default backend.

Note

Unlike HTTP backends, traffic to Passthrough backends is sent to the clusterIP of the backing Service instead of individual Endpoints.

HTTP Strict Transport Security
HTTP Strict Transport Security (HSTS) is an opt-in security enhancement specified through the use of a special response header. Once a supported browser receives this header that browser will prevent any communications from being sent over HTTP to the specified domain and will instead send all communications over HTTPS.

HSTS is enabled by default.

To disable this behavior use hsts: "false" in the configuration ConfigMap.
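
For example, using the nginx-configuration ConfigMap that mandatory.yaml created earlier (a sketch; apply it into the ingress-nginx namespace):

kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
data:
  hsts: "false"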

Server-side HTTPS enforcement through redirect
By default the controller redirects HTTP clients to the HTTPS port 443 using a 308 Permanent Redirect response if TLS is enabled for that Ingress.

This can be disabled globally using ssl-redirect: "false" in the NGINX config map, or per-Ingress with the nginx.ingress.kubernetes.io/ssl-redirect: "false" annotation in the particular resource.

Tip

When using SSL offloading outside of cluster (e.g. AWS ELB) it may be useful to enforce a redirect to HTTPS even when there is no TLS certificate available. This can be achieved by using the nginx.ingress.kubernetes.io/force-ssl-redirect: "true" annotation in the particular resource.
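
A sketch of the per-Ingress form of disabling the redirect mentioned above, added to the metadata of an Ingress such as nginx-ccie-wang-test:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-ccie-wang-test
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "false"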

Automated Certificate Management with Kube-Lego
Tip

Kube-Lego has reached end-of-life and is being replaced by cert-manager.

Kube-Lego automatically requests missing or expired certificates from Let’s Encrypt by monitoring ingress resources and their referenced secrets.

To enable this for an ingress resource you have to add an annotation:

kubectl annotate ing ingress-demo kubernetes.io/tls-acme="true"
To setup Kube-Lego you can take a look at this full example. The first version to fully support Kube-Lego is Nginx Ingress controller 0.8.

Default TLS Version and Ciphers
To provide the most secure baseline configuration possible, nginx-ingress defaults to using TLS 1.2 only and a secure set of TLS ciphers.

Legacy TLS
The default configuration, though secure, does not support some older browsers and operating systems.

For instance, TLS 1.1+ is only enabled by default from Android 5.0 on. At the time of writing, May 2018, approximately 15% of Android devices are not compatible with nginx-ingress’s default configuration.

To change this default behavior, use a ConfigMap.

A sample ConfigMap fragment to allow these older clients to connect could look something like the following:

kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-config
data:
  ssl-ciphers: "ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA"
  ssl-protocols: "TLSv1 TLSv1.1 TLSv1.2"

Must-read series: k8s exercise 7: deploying ingress-nginx


Install the ingress service
Official docs:
https://github.com/kubernetes/ingress-nginx/blob/master/docs/deploy/index.md

Run directly:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/mandatory.yaml

You need to modify mandatory.yaml; editing with-rbac.yaml has no effect. Two changes in total:
vim mandatory.yaml

188 apiVersion: apps/v1
189 kind: Deployment
190 metadata:
191   name: nginx-ingress-controller
192   namespace: ingress-nginx
193   labels:
194     app.kubernetes.io/name: ingress-nginx
195     app.kubernetes.io/part-of: ingress-nginx
196 spec:
       #change to 2 so that two replicas run at the same time
197   replicas: 2

210     spec:
       #add hostNetwork: true to expose the corresponding ports on the host machine;
       #the exact ports are defined later when configuring the service
211       hostNetwork: true
212       serviceAccountName: nginx-ingress-serviceaccount
213       containers:
214         - name: nginx-ingress-controller
215           image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.23.0

 

Run it:
[[email protected] ingress]# kubectl apply -f mandatory.yaml
namespace/ingress-nginx created
configmap/nginx-configuration created
configmap/tcp-services created
configmap/udp-services created
serviceaccount/nginx-ingress-serviceaccount created
clusterrole.rbac.authorization.k8s.io/nginx-ingress-clusterrole created
role.rbac.authorization.k8s.io/nginx-ingress-role created
rolebinding.rbac.authorization.k8s.io/nginx-ingress-role-nisa-binding created
clusterrolebinding.rbac.authorization.k8s.io/nginx-ingress-clusterrole-nisa-binding created
deployment.apps/nginx-ingress-controller created

#check the status
[[email protected] ingress]# kubectl get pods -n ingress-nginx -o wide
NAME                                        READY   STATUS    RESTARTS   AGE   IP              NODE        NOMINATED NODE   READINESS GATES
nginx-ingress-controller-7966d94d6c-8prth   1/1     Running   0          19m   192.168.10.71   k8s-node2   <none>           <none>
nginx-ingress-controller-7966d94d6c-w5btd   1/1     Running   0          19m   192.168.10.69   k8s-node1   <none>           <none>
[[email protected] ingress]#

 

The service we need to expose:
[[email protected] ingress]# kubectl get svc |grep fr
NAME             TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
frontend-svc     NodePort    10.100.151.156   <none>        80:30011/TCP   6d1h
[[email protected] ingress]#

 

vim frontend-svc.yaml

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-nginx-test
spec:
  rules:
  - host: in1.ccie.wang
    http:
      paths:
      - path: /
        backend:
          serviceName: frontend-svc
          #this is the port of the service above; check it with kubectl get svc
          #it forwards requests to port 80 of frontend-svc, just like an nginx upstream
          servicePort: 80

#check that the resulting Ingress is correct
[[email protected] ingress]# kubectl get ingress
NAME                 HOSTS           ADDRESS   PORTS   AGE
ingress-nginx-test   in1.ccie.wang             80      5m55s

 

Check that port 80 is now listening on the nodes:
[[email protected] ~]# netstat -ntlp |grep :80
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      10319/nginx: master
tcp6       0      0 :::80                   :::*                    LISTEN      10319/nginx: master
[[email protected] ~]#
[[email protected] ~]# netstat -ntlp |grep 80
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      12085/nginx: master
tcp6       0      0 :::80                   :::*                    LISTEN      12085/nginx: master
[[email protected] ~]#

Then test from the master; it works:
[[email protected] ingress]# curl -s in2.ccie.wang |head -3
<html ng-app="redis">
  <head>
    <title>Guestbook</title>

 

 

 

 

   
-------------------Errors
   
   
[[email protected] ingress]# kubectl create -f frontend-svc.yaml
The Ingress "ingress-myServiceA" is invalid: metadata.name: Invalid value: "ingress-myServiceA":
a DNS-1123 subdomain must consist of lower case alphanumeric characters, '-' or '.',
and must start and end with an alphanumeric character
(e.g. 'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*')

Fix:
metadata.name must not contain uppercase letters. Change it to:
vim frontend-svc.yaml

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
#the name must not contain uppercase letters; changed to
  name: ingress-nginx-test
spec:
  rules:
  - host: in1.ccie.wang
    http:
      paths:
      - path: /web
        backend:
          serviceName: frontend-svc
          servicePort: 80
---------Error 2
The test fails; the host cannot be reached:
[[email protected] ~]# curl in1.ccie.wang/wen
curl: (7) Failed connect to in1.ccie.wang:80; Connection refused
[[email protected] ~]# curl in1.ccie.wang/web
curl: (7) Failed connect to in1.ccie.wang:80; Connection refused
[[email protected] ~]#

Enter the pod and check:
[[email protected] ingress]# kubectl exec -it nginx-ingress-controller-7966d94d6c-8prth -n ingress-nginx \
/bin/bash
The configuration looks fine:
cat /etc/nginx/nginx.conf

Ping test: name resolution is wrong. It resolves to k8s-master, but it should resolve to a node (see the hosts-file sketch below).
[[email protected] ingress]# ping in1.ccie.wang
PING in1.ccie.wang (192.168.10.68) 56(84) bytes of data.
64 bytes from k8s-master (192.168.10.68): icmp_seq=1 ttl=64 time=0.028 ms
64 bytes from k8s-master (192.168.10.68): icmp_seq=2 ttl=64 time=0.033 ms
^C
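
A quick way to fix this in a lab is a hosts-file entry pointing the test domains at one of the ingress nodes. A sketch, using a node IP from the controller pod listing above:

192.168.10.69   in1.ccie.wang in2.ccie.wang

Put this line in /etc/hosts on the machine running curl (or fix the real DNS record).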

After fixing the resolution, test again from the master; it works:
[[email protected] ingress]# curl -s in2.ccie.wang |head -3
<html ng-app="redis">
  <head>
    <title>Guestbook</title>

------------Further details

https://github.com/kubernetes/ingress-nginx/blob/master/README.md
https://kubernetes.github.io/ingress-nginx/user-guide/basic-usage/

The ingress-nginx manifests live in the deploy directory; the purpose of each file:

configmap.yaml: provides a ConfigMap for updating the nginx configuration online
default-backend.yaml: provides a default backend 404 error page
namespace.yaml: creates a dedicated namespace, ingress-nginx
rbac.yaml: creates the corresponding Role and RoleBinding for RBAC
tcp-services-configmap.yaml: ConfigMap for the L4 (TCP) load-balancing configuration
udp-services-configmap.yaml: ConfigMap for the L4 (UDP) load-balancing configuration
with-rbac.yaml: the nginx-ingress-controller component with RBAC applied

Official installation method:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/mandatory.yaml

 

https://kubernetes.github.io/ingress-nginx/deploy/baremetal/

 

Via the host network
In a setup where there is no external load balancer available but using NodePorts is not an option,
one can configure ingress-nginx Pods to use the network of the host they run on instead of
a dedicated network namespace. The benefit of this approach is that the NGINX Ingress controller
can bind ports 80 and 443 directly to Kubernetes nodes’ network interfaces,
without the extra network translation imposed by NodePort Services.

This can be achieved by enabling the hostNetwork option in the Pods' spec.

template:
  spec:
    hostNetwork: true
   
   
   
Where:

The host in rules must be a domain name, not an IP. It is the domain name of the host where the Ingress controller Pod runs, i.e. the domain that the Ingress controller's IP resolves to.
The path in paths is the mapped path. For example, mapping / means that a request to myk8s.com is forwarded to the Kibana service on port 5601, as in the sketch below.
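
A sketch of such an Ingress, assuming the kibana-logging service from the ELK exercise above and the hypothetical host myk8s.com used in the explanation (the name kibana-myk8s is only illustrative):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kibana-myk8s
  namespace: kube-system
spec:
  rules:
  - host: myk8s.com
    http:
      paths:
      - path: /
        backend:
          serviceName: kibana-logging
          servicePort: 5601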
