**The master node consists of 3 components:**
**kube-apiserver**: exposes the REST API for cluster management, including authentication and authorization, data validation, and cluster state changes. Only the API Server operates on etcd directly; every other component queries or modifies data through it, so it is the hub for data exchange and communication between modules.
**kube-scheduler**: assigns Pods to nodes in the cluster. It watches kube-apiserver for Pods that have not yet been bound to a Node and assigns nodes to them according to the scheduling policy.
**kube-controller-manager**: a collection of controllers that watch the state of the whole cluster through the apiserver and keep it in the desired working state.
kube-apiserver is stateless, so multiple instances can run at the same time, made highly available through keepalived or a load balancer; k8s high availability mainly means high availability of the API service.
Only one kube-scheduler and one kube-controller-manager are active in the cluster at a time; when multiple instances run, an election picks a leader, and the leader is recorded in etcd.
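In these releases the elected leader is recorded in an annotation on an Endpoints object in the kube-system namespace, so once the cluster is up you can check which master currently holds each lock (a quick check, not a required deployment step):
~~~
kubectl -n kube-system get endpoints kube-scheduler -o yaml | grep control-plane.alpha.kubernetes.io/leader
kubectl -n kube-system get endpoints kube-controller-manager -o yaml | grep control-plane.alpha.kubernetes.io/leader
~~~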
**Deploy 2 master nodes**
**Create the kube-apiserver systemd unit**
~~~
vim /lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
After=etcd.service
[Service]
EnvironmentFile=-/etc/kubernetes/apiserver
ExecStart=/usr/local/bin/kube-apiserver \
$KUBE_API_ADDRESS \
$KUBE_API_PORT \
$KUBELET_PORT \
$KUBE_ETCD_SERVERS \
$KUBE_SERVICE_ADDRESSES \
$KUBE_ADMISSION_CONTROL \
$KUBE_API_ARGS
Restart=on-failure
Type=notify
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
~~~
**kube-apiserver configuration file**
~~~
vim /etc/kubernetes/apiserver
###
# kubernetes system config
# The following values are used to configure the kube-apiserver
# The address on the local server to listen to.
KUBE_API_ADDRESS="--advertise-address=192.168.50.101 --bind-address=192.168.50.101"
# The port on the local server to listen on.
KUBE_API_PORT="--secure-port=6443"
# Port minions listen on
# KUBELET_PORT="--kubelet-port=10250"
# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=https://192.168.50.101:2379,https://192.168.50.102:2379,https://192.168.50.103:2379"
# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
# default admission control policies
KUBE_ADMISSION_CONTROL="--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
# Add your own!
KUBE_API_ARGS="--anonymous-auth=false \
--allow-privileged=true \
--apiserver-count=3 \
--audit-log-maxage=30 \
--audit-log-maxbackup=3 \
--audit-log-maxsize=100 \
--audit-log-path=/var/lib/audit.log \
--authorization-mode=Node,RBAC \
--client-ca-file=/etc/kubernetes/ssl/ca.pem \
--enable-bootstrap-token-auth \
--etcd-cafile=/etc/kubernetes/ssl/ca.pem \
--etcd-certfile=/etc/kubernetes/ssl/etcd.pem \
--etcd-keyfile=/etc/kubernetes/ssl/etcd-key.pem \
--event-ttl=1h \
--enable-swagger-ui=true \
--kubelet-https=true \
--service-account-key-file=/etc/kubernetes/ssl/ca-key.pem \
--service-node-port-range=30000-50000 \
--tls-private-key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
--tls-cert-file=/etc/kubernetes/ssl/kubernetes.pem \
--token-auth-file=/etc/kubernetes/token.csv \
--logtostderr=true \
--v=2"
~~~
For detailed parameter documentation, see the [official docs](https://kubernetes.io/docs/reference/command-line-tools-reference/kube-apiserver/).
**The following parameters must be changed:**
--advertise-address= set to the local IP; this is the address advertised to the other members of the cluster (see the example after this list);
--bind-address= set to the local IP; the address the https endpoint listens on;
--secure-port= defaults to port 6443;
--etcd-servers= set to the addresses of the 3 etcd nodes;
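For example, on the second master (assuming it is 192.168.50.102), only the address line differs from the file above:
~~~
# /etc/kubernetes/apiserver on the second master (address is an assumption)
KUBE_API_ADDRESS="--advertise-address=192.168.50.102 --bind-address=192.168.50.102"
~~~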
**The following parameters do not need to be changed:**
--enable-admission-plugins: every request to the apiserver passes through authentication, authorization, and admission control in turn. Authentication is the TLS-encrypted transport, authorization grants users their permissions, and admission control governs resource management. The recommended set differs between K8S versions: [see here](https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#is-there-a-recommended-set-of-admission-controllers-to-use);
--authorization-mode=Node,RBAC enables the Node and RBAC authorization modes on the secure port;
--anonymous-auth= whether anonymous access to the https port is allowed;
--apiserver-count= the number of apiservers running in the cluster;
--service-cluster-ip-range specifies the Service cluster IP range;
--kubelet-https=true use https for connections to the kubelet, which listens on port 10250 by default;
--enable-swagger-ui=true enables swagger-ui, reachable at the API server's address under /swagger-ui;
Note: after starting, the API service also listens on 127.0.0.1:8080 by default, so local callers can reach the API over the insecure port (older versions configured this with --insecure-bind-address); kube-controller-manager and kube-scheduler talk to the API service through this local 127.0.0.1:8080;
k8s data is stored in etcd under the /registry path by default;
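To confirm the last point, you can list the keys kube-apiserver writes under /registry (a sketch, assuming the etcd v3 API and the certificate paths used in the etcd section):
~~~
ETCDCTL_API=3 etcdctl \
  --endpoints=https://192.168.50.101:2379 \
  --cacert=/etc/kubernetes/ssl/ca.pem \
  --cert=/etc/kubernetes/ssl/etcd.pem \
  --key=/etc/kubernetes/ssl/etcd-key.pem \
  get /registry --prefix --keys-only | head
~~~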
**Start and check the kube-apiserver service**
~~~
systemctl daemon-reload
systemctl enable kube-apiserver && systemctl start kube-apiserver
systemctl status kube-apiserver    # check that the service started successfully
journalctl -u kube-apiserver       # check the logs for errors
~~~
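As a quick sanity check, probe both ports; the request to 6443 only confirms TLS is being served, and with --anonymous-auth=false it is expected to be rejected without credentials:
~~~
curl http://127.0.0.1:8080/healthz    # insecure local port, should return "ok"
curl -k https://192.168.50.101:6443/  # secure port; an unauthorized response is expected without credentials
~~~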
**Create the kube-controller-manager systemd unit**
~~~
vim /lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
[Service]
EnvironmentFile=-/etc/kubernetes/controller-manager
ExecStart=/usr/local/bin/kube-controller-manager \
$KUBE_CONTROLLER_MANAGER_ARGS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
~~~
**kube-controller-manager configuration file**
~~~
vim /etc/kubernetes/controller-manager
###
# The following values are used to configure the kubernetes controller-manager
# defaults from config and apiserver should be adequate
# Add your own!
KUBE_CONTROLLER_MANAGER_ARGS="--bind-address=127.0.0.1 \
--cluster-name=kubernetes \
--cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \
--leader-elect=true \
--master=http://127.0.0.1:8080 \
--root-ca-file=/etc/kubernetes/ssl/ca.pem \
--service-account-private-key-file=/etc/kubernetes/ssl/ca-key.pem \
--service-cluster-ip-range=10.254.0.0/16 \
--logtostderr=true \
--v=2"
~~~
--bind-address: the IP the service listens on (default port 10252); no other component connects to it, so 127.0.0.1 is sufficient;
--service-cluster-ip-range specifies the CIDR range for Services in the cluster; this network must not be routable between the nodes, and it must match the same parameter on kube-apiserver;
--cluster-signing-* the certificate and private key used to sign the certificates and keys created for TLS bootstrap;
--root-ca-file is used to verify the kube-apiserver certificate; only when this parameter is set is the CA certificate placed into Pod containers' ServiceAccount;
--leader-elect=true allows this node to take part in leader election;
**Start and check the kube-controller-manager service**
~~~
systemctl daemon-reload
systemctl enable kube-controller-manager && systemctl start kube-controller-manager
systemctl status kube-controller-manager    # check that the service started successfully
journalctl -u kube-controller-manager       # check the logs for errors
~~~
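kube-controller-manager also serves a plain-HTTP health endpoint on its default port 10252 (bound to 127.0.0.1 above), which you can probe directly:
~~~
curl http://127.0.0.1:10252/healthz   # should return "ok"
~~~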
**Create the kube-scheduler systemd unit**
~~~
vim /usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler Plugin
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
[Service]
EnvironmentFile=-/etc/kubernetes/scheduler
ExecStart=/usr/local/bin/kube-scheduler \
$KUBE_SCHEDULER_ARGS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
~~~
**kube-scheduler configuration file**
~~~
vim /etc/kubernetes/scheduler
###
# kubernetes scheduler config
# default config should be adequate
# Add your own!
KUBE_SCHEDULER_ARGS="--leader-elect=true \
--address=127.0.0.1 \
--master=http://127.0.0.1:8080 \
--logtostderr=true \
--v=2"
~~~
--address: the scheduler itself serves on port 10251; no other component connects to it, so 127.0.0.1 is sufficient;
--leader-elect=true allows the node to take part in leader election;
**Start and check the kube-scheduler service**
~~~
systemctl daemon-reload
systemctl enable kube-scheduler && systemctl start kube-scheduler
systemctl status kube-scheduler    # check that the service started successfully
journalctl -u kube-scheduler       # check the logs for errors
~~~
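Likewise, kube-scheduler exposes a health endpoint on its default port 10251:
~~~
curl http://127.0.0.1:10251/healthz   # should return "ok"
~~~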
**Check the cluster**
~~~
kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-2               Healthy   {"health": "true"}
etcd-0               Healthy   {"health": "true"}
etcd-1               Healthy   {"health": "true"}
~~~
**Service high availability**
1. controller-manager and scheduler connect to their own node's 127.0.0.1:8080. If the API service on one master goes down while the active controller-manager and scheduler are on that same master, the two components lose access to the local API; a new leader election takes place, and once both services have moved to the other master they connect to that node's API.
2. Root cause
First: Alibaba Cloud's layer-4 SLB does not allow a backend node to access its own SLB, so a controller-manager or scheduler running on a master cannot reach the API through the SLB; if the SLB balanced the request back to the originating node, the connection would fail (the console warns about this when creating an SLB);

Second: a layer-7 SLB avoids the self-access limitation, but layer-7 load balancing does not support https backends; the API serves over https on port 6443, while the insecure port 8080 is bound only to 127.0.0.1, so it cannot be load-balanced.
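This is why the keepalived approach mentioned at the top is the simpler option here: each master runs its own kube-apiserver and a VIP floats between them. A minimal sketch (the VIP 192.168.50.100 and interface eth0 are assumptions, not values from this deployment):
~~~
# /etc/keepalived/keepalived.conf -- minimal VRRP instance for the apiserver VIP
vrrp_instance VI_1 {
    state MASTER              # set to BACKUP on the second master
    interface eth0            # assumption: adjust to the actual NIC
    virtual_router_id 51
    priority 100              # use a lower priority on the second master
    advert_int 1
    virtual_ipaddress {
        192.168.50.100        # assumption: the floating apiserver VIP
    }
}
~~~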
