A Public Network Access Plan for k8s

I. Application Preparation

First, deploy a simple Flask application to the k8s cluster.

1. Building the Image

First, pull the base image:

docker pull python:3.9

Below is the application code. For simplicity it is a single file, app.py:

from flask import Flask

app = Flask(__name__)

@app.route('/')
def index():
    return "Hello, World!"

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)
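Before containerizing, the app can be smoke-tested without starting a server by using Flask's built-in test client (this assumes Flask is installed locally, e.g. pip install Flask==2.0.3):

```python
from flask import Flask

app = Flask(__name__)

@app.route('/')
def index():
    return "Hello, World!"

# Flask's test client exercises routes in-process; no server or port needed.
client = app.test_client()
resp = client.get('/')
print(resp.status_code, resp.get_data(as_text=True))  # prints: 200 Hello, World!
```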

The third-party dependencies go in requirements.txt:

Flask==2.0.3

Place the two files above in the same directory as the Dockerfile.

Dockerfile:

FROM python:3.9
WORKDIR /usr/src/app
COPY requirements.txt /usr/src/app
COPY app.py /usr/src/app
RUN pip install -r requirements.txt
EXPOSE 5000
CMD ["python", "app.py"]
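One optional tweak to the Dockerfile above: copying requirements.txt and running pip install before copying app.py lets Docker cache the dependency layer, so editing the application code no longer re-installs Flask on every build. A sketch:

```dockerfile
FROM python:3.9
WORKDIR /usr/src/app
# Dependency list first: this layer is cached until requirements.txt changes.
COPY requirements.txt .
RUN pip install -r requirements.txt
# Application code last: edits here only invalidate the layers below this line.
COPY app.py .
EXPOSE 5000
CMD ["python", "app.py"]
```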

Build the image:

docker build -t testk8s/flask:2.0.3 .

Note: make sure the testk8s/flask:2.0.3 image is present in Docker on every worker node (not the master nodes), for example by exporting it with docker save and importing it with docker load, so the image pull step in k8s does not fail later.

2. Deploying the Pod on k8s

Generate a pod template with a dry-run command such as:

kubectl run flask-app --image=testk8s/flask:2.0.3 --port=5000 --dry-run=client -o yaml > flask-app.yaml

(imagePullPolicy: Never was then added by hand so that k8s uses the locally loaded image instead of pulling.)

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: flask-app
  name: flask-app
spec:
  containers:
  - image: testk8s/flask:2.0.3
    name: flask-app
    imagePullPolicy: Never
    ports:
    - containerPort: 5000
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always

Create the pod with kubectl apply -f flask-app.yaml and check its status with kubectl get po.

At this point the application is up, but it can only be reached from inside the cluster.

We can generate a Service for the pod:

kubectl expose pod flask-app --port=5000 --target-port=5000 --dry-run=client -o yaml

Create the Service:

kubectl apply -f flask-rc-nodePort.yaml

The final Service manifest uses NodePort to expose the port directly. The default NodePort range is 30000-32767, and the pod becomes reachable on any node's IP at that port, e.g. 192.168.32.120:30009.

cat > flask-rc-nodePort.yaml <<EOF
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    run: flask-app
  name: flask-app
spec:
  ports:
  - nodePort: 30009
    port: 5000
    protocol: TCP
    targetPort: 5000
  selector:
    run: flask-app
  type: NodePort
status:
  loadBalancer: {}
EOF

The pod can now be accessed from outside the cluster, e.g. with curl http://192.168.32.120:30009.

II. ingress-nginx Introduction
1. ingress-nginx Overview

In a k8s cluster, pods are exposed to external requests through Services, and a Service can be exposed in several ways; the common options are ClusterIP, NodePort, and (built on top of them) Ingress.

(1) The first option, ClusterIP, serves cluster-internal access only: pods are associated with the Service via labels, and when the Service is accessed it distributes requests across its pods according to its policy. It is not reachable from outside the cluster.

(2) The second option, NodePort, builds on ClusterIP to allow access from outside the cluster. Ports are allocated starting at 30000 by default, and external clients reach pods inside the cluster via a node's IP plus the port. The request first arrives at the exposed ip + port, and the NodePort then maps to the cluster-internal ClusterIP, giving external callers a path to the pod.

(3) The third option is Ingress, which also builds on ClusterIP. Where NodePort can only expose ip + port (layer-4 proxying), ingress-nginx can expose services by domain name (layer-7 proxying). ingress-nginx is essentially a reverse-proxying nginx service: deploying it creates an nginx pod whose nginx.conf is populated automatically from your Ingress resources with the exposed domain names and listening ports, and whose location blocks use proxy_pass to reach the ClusterIPs exposed inside the cluster. When an external request hits a domain exposed by the Ingress, it is reverse-proxied to the matching Service, and the service becomes reachable.

Part I, "Application Preparation", used the second option.

Why use ingress-nginx at all?

Pure NodePort eats heavily into the host's port space: every application occupies a port of its own. Behind an ingress-nginx proxy, any number of applications share just 2 ports, and the cluster gains nginx-style routing by domain name.
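As a sketch of that domain-based routing (the hostnames and service names here are hypothetical), two applications can share the same two ingress ports and be told apart purely by Host header:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: host-routing-example
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: app1.example.com       # hypothetical domain
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: app1-svc       # hypothetical service
            port:
              number: 5000
  - host: app2.example.com       # hypothetical domain
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: app2-svc       # hypothetical service
            port:
              number: 5000
```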

2. ingress-nginx Deployment Options

Deploying ingress raises two questions:

ingress-controller runs as a pod; how should that pod be deployed?
ingress solves routing requests into the cluster, but how should ingress itself be exposed to the outside?
Below are the common deployment and exposure options; which one to use depends on the actual requirements.

(1) Deployment + LoadBalancer Service
If ingress runs on a public cloud, this is the natural fit. Deploy ingress-controller with a Deployment and create a Service of type LoadBalancer selecting those pods. Most public clouds automatically provision a load balancer for a LoadBalancer Service, usually with a public address bound; pointing DNS at that address exposes the cluster's services to the outside.

(2) Deployment + NodePort Service
Same Deployment-based controller, with a matching Service, but of type NodePort. Ingress is then exposed on a specific port of the cluster nodes' IPs. Since the NodePort is allocated from a range (random unless pinned), a load balancer is usually placed in front to forward requests. This generally suits environments where the hosts' IP addresses are relatively fixed.
NodePort exposure is simple and convenient, but it adds a NAT hop, which can cost some performance under very heavy request loads.

(3) DaemonSet + HostNetwork + nodeSelector
Use a DaemonSet combined with a nodeSelector to deploy ingress-controller onto specific nodes, and use HostNetwork to connect the pod directly to the host node's network, so the service is reachable straight on the host's ports 80/443. The nodes running ingress-controller then closely resemble the edge nodes of a traditional architecture, like an nginx server at the machine-room entrance. This option has the shortest request path and better performance than NodePort. The downside: because it takes over the host node's network and ports directly, each node can run only one ingress-controller pod. It is well suited to high-concurrency production environments.

We will use DaemonSet + HostNetwork + nodeSelector + NodePort.

Testing showed that method (2)'s NodePort exposure is likewise only reachable via a host's ip and port, while method (3) is necessary for a highly available ingress: if the single host running ingress goes down, every application in the cluster becomes unreachable, whereas method (3) combined with keepalived across several edge nodes solves the availability problem.

Method (3) still uses NodePort here: the ingress deploy.yaml defines a NodePort Service, which maps ingress's 2 ports to fixed external ports on the hosts. The steps below therefore follow DaemonSet + HostNetwork + nodeSelector + NodePort.

III. Deploying ingress-nginx
1. Choosing an ingress-nginx Version

The compatibility table is at https://github.com/kubernetes/ingress-nginx:

Ingress-NGINX version | k8s supported versions        | Alpine version | Nginx version
v1.7.0                | 1.26, 1.25, 1.24              | 3.17.2         | 1.21.6
v1.6.4                | 1.26, 1.25, 1.24, 1.23        | 3.17.0         | 1.21.6
v1.5.1                | 1.25, 1.24, 1.23              | 3.16.2         | 1.21.6
v1.4.0                | 1.25, 1.24, 1.23, 1.22        | 3.16.2         | 1.19.10†
v1.3.1                | 1.24, 1.23, 1.22, 1.21, 1.20  | 3.16.2         | 1.19.10†
v1.3.0                | 1.24, 1.23, 1.22, 1.21, 1.20  | 3.16.0         | 1.19.10†
v1.2.1                | 1.23, 1.22, 1.21, 1.20, 1.19  | 3.14.6         | 1.19.10†
v1.1.3                | 1.23, 1.22, 1.21, 1.20, 1.19  | 3.14.4         | 1.19.10†
v1.1.2                | 1.23, 1.22, 1.21, 1.20, 1.19  | 3.14.2         | 1.19.9†
v1.1.1                | 1.23, 1.22, 1.21, 1.20, 1.19  | 3.14.2         | 1.19.9†
v1.1.0                | 1.22, 1.21, 1.20, 1.19        | 3.14.2         | 1.19.9†
v1.0.5                | 1.22, 1.21, 1.20, 1.19        | 3.14.2         | 1.19.9†
v1.0.4                | 1.22, 1.21, 1.20, 1.19        | 3.14.2         | 1.19.9†
v1.0.3                | 1.22, 1.21, 1.20, 1.19        | 3.14.2         | 1.19.9†
v1.0.2                | 1.22, 1.21, 1.20, 1.19        | 3.14.2         | 1.19.9†
v1.0.1                | 1.22, 1.21, 1.20, 1.19        | 3.14.2         | 1.19.9†
v1.0.0                | 1.22, 1.21, 1.20, 1.19        | 3.13.5         | 1.20.1

We pick v1.6.4, which is compatible with our k8s 1.23.

The quick-start guide at https://kubernetes.github.io/ingress-nginx/deploy/#quick-start shows:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.7.0/deploy/static/provider/cloud/deploy.yaml

We change it to:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.6.4/deploy/static/provider/cloud/deploy.yaml

Our container runtime is Docker, and Docker cannot reach the official k8s image registry from our network, so we must download deploy.yaml and edit it by hand. If the URL itself is unreachable, use a proxy.

2. Getting deploy.yaml to Install

First, make deploy.yaml installable at all.

Start by replacing the image references.

Searching deploy.yaml for image: turns up three image references; replace the official addresses with the mirrors a kind soul has copied to Docker Hub:

anjia0532/google-containers.ingress-nginx.controller:v1.6.4
anjia0532/google-containers.ingress-nginx.kube-webhook-certgen:v20220916-gd32f8c343

Mind the image versions: for any version other than v1.6.4, search Docker Hub under anjia0532 for the matching image names.

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.6.4
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  minReadySeconds: 0
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app.kubernetes.io/component: controller
      app.kubernetes.io/instance: ingress-nginx
      app.kubernetes.io/name: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/component: controller
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/name: ingress-nginx
    spec:
      containers:
      - args:
        - /nginx-ingress-controller
        - --publish-service=$(POD_NAMESPACE)/ingress-nginx-controller
        - --election-id=ingress-nginx-leader
        - --controller-class=k8s.io/ingress-nginx
        - --ingress-class=nginx
        - --configmap=$(POD_NAMESPACE)/ingress-nginx-controller
        - --validating-webhook=:8443
        - --validating-webhook-certificate=/usr/local/certificates/cert
        - --validating-webhook-key=/usr/local/certificates/key
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: LD_PRELOAD
          value: /usr/local/lib/libmimalloc.so
        image: anjia0532/google-containers.ingress-nginx.controller:v1.6.4 # replaced image
        imagePullPolicy: IfNotPresent
        lifecycle:
          preStop:
            exec:
              command:
              - /wait-shutdown
        livenessProbe:
          failureThreshold: 5
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 10
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        name: controller
        ports:
        - containerPort: 80
          name: http
          protocol: TCP
        - containerPort: 443
          name: https
          protocol: TCP
        - containerPort: 8443
          name: webhook
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 10
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        resources:
          requests:
            cpu: 100m
            memory: 90Mi
        securityContext:
          allowPrivilegeEscalation: true
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - ALL
          runAsUser: 101
        volumeMounts:
        - mountPath: /usr/local/certificates/
          name: webhook-cert
          readOnly: true
      dnsPolicy: ClusterFirst
      nodeSelector:
        kubernetes.io/os: linux
      serviceAccountName: ingress-nginx
      terminationGracePeriodSeconds: 300
      volumes:
      - name: webhook-cert
        secret:
          secretName: ingress-nginx-admission
apiVersion: batch/v1
kind: Job
metadata:
  labels:
    app.kubernetes.io/component: admission-webhook
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.6.4
  name: ingress-nginx-admission-create
  namespace: ingress-nginx
spec:
  template:
    metadata:
      labels:
        app.kubernetes.io/component: admission-webhook
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
        app.kubernetes.io/version: 1.6.4
      name: ingress-nginx-admission-create
    spec:
      containers:
      - args:
        - create
        - --host=ingress-nginx-controller-admission,ingress-nginx-controller-admission.$(POD_NAMESPACE).svc
        - --namespace=$(POD_NAMESPACE)
        - --secret-name=ingress-nginx-admission
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        image: anjia0532/google-containers.ingress-nginx.kube-webhook-certgen:v20220916-gd32f8c343 # replaced image
        imagePullPolicy: IfNotPresent
        name: create
        securityContext:
          allowPrivilegeEscalation: false
      nodeSelector:
        kubernetes.io/os: linux
      restartPolicy: OnFailure
      securityContext:
        fsGroup: 2000
        runAsNonRoot: true
        runAsUser: 2000
      serviceAccountName: ingress-nginx-admission
apiVersion: batch/v1
kind: Job
metadata:
  labels:
    app.kubernetes.io/component: admission-webhook
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.6.4
  name: ingress-nginx-admission-patch
  namespace: ingress-nginx
spec:
  template:
    metadata:
      labels:
        app.kubernetes.io/component: admission-webhook
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
        app.kubernetes.io/version: 1.6.4
      name: ingress-nginx-admission-patch
    spec:
      containers:
      - args:
        - patch
        - --webhook-name=ingress-nginx-admission
        - --namespace=$(POD_NAMESPACE)
        - --patch-mutating=false
        - --secret-name=ingress-nginx-admission
        - --patch-failure-policy=Fail
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        image: anjia0532/google-containers.ingress-nginx.kube-webhook-certgen:v20220916-gd32f8c343 # replaced image
        imagePullPolicy: IfNotPresent
        name: patch
        securityContext:
          allowPrivilegeEscalation: false
      nodeSelector:
        kubernetes.io/os: linux
      restartPolicy: OnFailure
      securityContext:
        fsGroup: 2000
        runAsNonRoot: true
        runAsUser: 2000
      serviceAccountName: ingress-nginx-admission

With those changes, deploy.yaml installs cleanly; feel free to try it. But further edits are needed for high availability. After changing the config, uninstall with kubectl delete -f deploy.yaml before reinstalling.

3. NodePort Changes to deploy.yaml

deploy.yaml defaults to a LoadBalancer Service, which needs a cloud provider behind it; we use NodePort instead.

apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.6.4
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  externalTrafficPolicy: Local
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports: # pin the ports for NodePort mode
  - name: http
    port: 80
    targetPort: 80
    protocol: TCP
    nodePort: 32080 # http
  - name: https
    port: 443
    targetPort: 443
    protocol: TCP
    nodePort: 32443 # https
  selector:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
  type: NodePort # changed to NodePort

After this is deployed, single-node ingress-nginx is complete; pods can then be accessed via the host's ip:32080.

Inspect the deployment, pod, and service resources in the ingress-nginx namespace:

kubectl get deployment,pods,service -n ingress-nginx -o wide

Below is the state with ingress-nginx installed on all 3 master nodes; without DaemonSet mode, there would be just one pod/ingress-nginx-controller, on a single non-master node.

NAME                                       READY   STATUS      RESTARTS      AGE     IP               NODE           NOMINATED NODE   READINESS GATES
pod/ingress-nginx-admission-create-rjhqp   0/1     Completed   0             2d12h   10.244.3.14      k8s-node-1     <none>           <none>
pod/ingress-nginx-admission-patch-8lp8m    0/1     Completed   1             2d12h   10.244.3.13      k8s-node-1     <none>           <none>
pod/ingress-nginx-controller-8484n         1/1     Running     2 (15h ago)   2d12h   192.168.32.129   k8s-master-1   <none>           <none>
pod/ingress-nginx-controller-qrnxs         1/1     Running     0             2d12h   192.168.32.132   k8s-master-3   <none>           <none>
pod/ingress-nginx-controller-t5qxw         1/1     Running     2 (15h ago)   2d12h   192.168.32.131   k8s-master-2   <none>           <none>

NAME                                         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE     SELECTOR
service/ingress-nginx-controller             NodePort    172.0.162.234   <none>        80:32080/TCP,443:32443/TCP   2d12h   app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
service/ingress-nginx-controller-admission   ClusterIP   172.0.76.43     <none>        443/TCP                      2d12h   app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx

4. Configuring Rules

First, change the Service manifest from Part I, "Application Preparation", out of NodePort mode:

cat > flask-rc-ingress.yaml <<EOF
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    run: flask-app
  name: flask-app
spec:
  ports:
  - port: 5000
    protocol: TCP
    targetPort: 5000
  selector:
    run: flask-app
status:
  loadBalancer: {}
EOF

Apply it with kubectl apply -f flask-rc-ingress.yaml.

Next, configure the ingress rule:

cat > ingress-rule.yaml <<EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    # use the shared ingress-nginx
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: flask-app
            port:
              number: 5000
EOF
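A side note: the kubernetes.io/ingress.class annotation used above is deprecated in the networking.k8s.io/v1 API; on ingress-nginx v1.x the same rule can select its controller with the spec.ingressClassName field instead, roughly:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test-ingress
spec:
  ingressClassName: nginx   # replaces the deprecated annotation
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: flask-app
            port:
              number: 5000
```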

An example with multiple paths:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    # use the shared ingress-nginx
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: flask-app
            port:
              number: 5000
      - path: /web1 # different paths route to different services' pods
        pathType: Prefix
        backend:
          service:
            name: flask-app2 # the other service's name
            port:
              number: 5000

Apply it with kubectl apply -f ingress-rule.yaml.

Then run kubectl describe ingress: flask-app's port 5000 has been mapped out, and flask-app is now reachable at ip:32080 of the host running ingress.

Name:             test-ingress
Labels:           <none>
Namespace:        default
Address:          172.0.162.234
Default backend:  default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
  Host        Path  Backends
  ----        ----  --------
  *
              /     flask-app:5000 (10.244.3.21:5000)
Annotations:  kubernetes.io/ingress.class: nginx
Events:
  Type    Reason  Age   From                      Message
  ----    ------  ----  ----                      -------
  Normal  Sync    17m   nginx-ingress-controller  Scheduled for sync
  Normal  Sync    17m   nginx-ingress-controller  Scheduled for sync

A major gotcha:

The rule above defines path as "/". If you want one domain or IP with different paths routed to different pods, the application or API inside the pod must itself serve those paths. For example, a request to 192.168.32.130:32080/web1 arrives inside the pod behind flask-app:5000 as a request for localhost:5000/web1: with this configuration, ingress does not interpret the /web1 path for you, it forwards the request to the pod with the path unchanged. Many articles never mention this and only show how to route different prefixes to different pods.
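If the backend cannot be changed, ingress-nginx itself can strip the prefix with its rewrite-target annotation (this uses the capture-group rewrite documented by ingress-nginx; the flask-app2 service name is borrowed from the multi-path example above):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test-ingress-rewrite
  annotations:
    kubernetes.io/ingress.class: "nginx"
    # /web1/foo is proxied to the pod as /foo
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
  - http:
      paths:
      - path: /web1(/|$)(.*)
        pathType: ImplementationSpecific
        backend:
          service:
            name: flask-app2
            port:
              number: 5000
```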

5. High-Availability Deployment

With ingress deployed via a Deployment, we can already reach pod applications through a specific node IP, e.g. 192.168.32.130:32080. But if the node at internal IP 192.168.32.130 goes down, ingress becomes unusable and applications are only reachable through cluster-internal IPs. Moreover, with multiple nodes and no node pinned, which node ingress lands on is random, and the access IP changes with it.

So we deploy ingress across multiple edge nodes with keepalived and a DaemonSet, solving both high availability and a unified access IP.

Edge node: a labelled k8s node dedicated to running ingress-nginx-controller.

DaemonSet mode: deploys ingress-nginx-controller on every node carrying the label, with exactly one ingress-nginx-controller pod per node.

The VIP (virtual IP) + keepalived high-availability setup was covered in detail in the earlier "k8s deployment walkthrough". So we deploy ingress-nginx-controller on the 3 master nodes that already have HA configured, completing ingress's highly available deployment (you could equally set up 3 separate nodes with their own HA configuration, kept apart from the masters).
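For reference, a minimal keepalived sketch for one edge node (the interface name, VIP, and password are placeholders; the actual setup follows the earlier "k8s deployment walkthrough"):

```
vrrp_instance VI_1 {
    state MASTER             # BACKUP on the other edge nodes
    interface ens33          # placeholder interface name
    virtual_router_id 51
    priority 100             # lower the priority on BACKUP nodes
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111       # placeholder password
    }
    virtual_ipaddress {
        192.168.32.100       # placeholder VIP
    }
}
```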

Deploying ingress:

Label the nodes that should run ingress with isIngress=true; here that is k8s-master-1, k8s-master-2, and k8s-master-3. Check node labels with kubectl get nodes --show-labels.

kubectl label nodes k8s-master-1 isIngress=true

kubectl label nodes k8s-master-2 isIngress=true

kubectl label nodes k8s-master-3 isIngress=true

Edit deploy.yaml again (on top of the changes from "3. NodePort Changes to deploy.yaml"):

  1. Change the kind

     kind: Deployment

     becomes

     kind: DaemonSet

  2. Comment out replicas: 1 under spec

     # replicas: 1

     (recent deploy.yaml files no longer contain this field; skip this step if it is absent)

  3. Use the host network

     hostNetwork: true

     With hostNetwork: true, the application running in the pod sees the host's network interfaces directly, and every network interface on the host's LAN can reach the application.

  4. Make the pod use the cluster's DNS

     dnsPolicy: ClusterFirstWithHostNet

     This makes the pod use the k8s cluster's DNS. Without dnsPolicy: ClusterFirstWithHostNet, a host-network pod defaults to the host's DNS, which means containers cannot reach other pods in the cluster by service name (add as needed; the config in this article leaves it out).

  5. Node label

     nodeSelector:
       isIngress: "true" # node label

  6. Add a toleration for master nodes

     tolerations: # tolerate the master taint
     - key: node-role.kubernetes.io/master
       effect: NoSchedule
The DaemonSet configuration (the rest of deploy.yaml stays unchanged):

apiVersion: apps/v1
kind: DaemonSet # kind: Deployment changed to DaemonSet
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.6.4
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  minReadySeconds: 0
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app.kubernetes.io/component: controller
      app.kubernetes.io/instance: ingress-nginx
      app.kubernetes.io/name: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/component: controller
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/name: ingress-nginx
    spec:
      containers:
      - args:
        - /nginx-ingress-controller
        - --publish-service=$(POD_NAMESPACE)/ingress-nginx-controller
        - --election-id=ingress-nginx-leader
        - --controller-class=k8s.io/ingress-nginx
        - --ingress-class=nginx
        - --configmap=$(POD_NAMESPACE)/ingress-nginx-controller
        - --validating-webhook=:8443
        - --validating-webhook-certificate=/usr/local/certificates/cert
        - --validating-webhook-key=/usr/local/certificates/key
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: LD_PRELOAD
          value: /usr/local/lib/libmimalloc.so
        image: anjia0532/google-containers.ingress-nginx.controller:v1.6.4
        imagePullPolicy: IfNotPresent
        lifecycle:
          preStop:
            exec:
              command:
              - /wait-shutdown
        livenessProbe:
          failureThreshold: 5
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 10
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        name: controller
        ports:
        - containerPort: 80
          name: http
          protocol: TCP
        - containerPort: 443
          name: https
          protocol: TCP
        - containerPort: 8443
          name: webhook
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 10
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        resources:
          requests:
            cpu: 100m
            memory: 90Mi
        securityContext:
          allowPrivilegeEscalation: true
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - ALL
          runAsUser: 101
        volumeMounts:
        - mountPath: /usr/local/certificates/
          name: webhook-cert
          readOnly: true
      dnsPolicy: ClusterFirst
      nodeSelector:
        isIngress: "true"
      hostNetwork: true
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      serviceAccountName: ingress-nginx
      terminationGracePeriodSeconds: 300
      volumes:
      - name: webhook-cert
        secret:
          secretName: ingress-nginx-admission

Now running kubectl apply -f completes ingress's highly available deployment (to redeploy, run kubectl delete -f deploy.yaml to uninstall first, then install again).

Pods are now reachable via VIP:32080.

Inspect the deployment, pod, and service resources in the ingress-nginx namespace:

kubectl get deployment,pods,service -n ingress-nginx -o wide

All 3 master nodes now run ingress-nginx:

NAME                                       READY   STATUS      RESTARTS      AGE     IP               NODE           NOMINATED NODE   READINESS GATES
pod/ingress-nginx-admission-create-rjhqp   0/1     Completed   0             2d12h   10.244.3.14      k8s-node-1     <none>           <none>
pod/ingress-nginx-admission-patch-8lp8m    0/1     Completed   1             2d12h   10.244.3.13      k8s-node-1     <none>           <none>
pod/ingress-nginx-controller-8484n         1/1     Running     2 (15h ago)   2d12h   192.168.32.129   k8s-master-1   <none>           <none>
pod/ingress-nginx-controller-qrnxs         1/1     Running     0             2d12h   192.168.32.132   k8s-master-3   <none>           <none>
pod/ingress-nginx-controller-t5qxw         1/1     Running     2 (15h ago)   2d12h   192.168.32.131   k8s-master-2   <none>           <none>

NAME                                         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE     SELECTOR
service/ingress-nginx-controller             NodePort    172.0.162.234   <none>        80:32080/TCP,443:32443/TCP   2d12h   app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
service/ingress-nginx-controller-admission   ClusterIP   172.0.76.43     <none>        443/TCP                      2d12h   app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx

6. Bridging to the Public Network

A public-facing server on the same internal network just needs nginx proxying to VIP:port.
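As a sketch, the nginx server block on that public machine might look like this (the server_name and the VIP are placeholders; 32080 is the ingress http nodePort configured earlier):

```nginx
server {
    listen 80;
    server_name example.com;   # placeholder public domain

    location / {
        # Forward everything to the keepalived VIP on the ingress nodePort.
        proxy_pass http://192.168.32.100:32080;   # placeholder VIP
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```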

Author: 凯~
Link: https://blog.diyultra.top/2023/04/01/k8sgongwangfanan/
License: this article is licensed under CC BY-NC-SA 3.0 CN