[TOC]

# Load balancer servers

Install the following services on nodes that are not part of the cluster.

## Download docker-compose

```shell
curl -L https://get.daocloud.io/docker/compose/releases/download/1.29.2/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose
```

## Install nginx

**Create the directories**

```shell
mkdir -p /etc/nginx/{conf.d,stream}
```

**Main nginx configuration**

```shell
cat <<-"EOF" | sudo tee /etc/nginx/nginx.conf > /dev/null
user  nginx;
worker_processes  auto;

error_log  /var/log/nginx/error.log notice;
pid        /var/run/nginx.pid;

events {
    worker_connections  1024;
}

http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile        on;
    #tcp_nopush     on;

    keepalive_timeout  65;

    #gzip  on;

    include /etc/nginx/conf.d/*.conf;
}

stream {
    log_format  proxy  '$remote_addr $remote_port - [$time_local] $status $protocol '
                       '"$upstream_addr" "$upstream_bytes_sent" "$upstream_connect_time"';

    include /etc/nginx/stream/*.conf;
}
EOF
```

**Layer-4 proxy for the apiserver**

```shell
cat <<-"EOF" | sudo tee /etc/nginx/stream/apiserver.conf > /dev/null
upstream apiserver {
    server 192.168.31.103:6443 max_fails=3 fail_timeout=5s;
    server 192.168.31.79:6443  max_fails=3 fail_timeout=5s;
}

server {
    listen 6443;
    # proxy_protocol on;
    proxy_pass apiserver;
    access_log /var/log/nginx/apiserver_tcp_access.log proxy;
    error_log  /var/log/nginx/apiserver_tcp_error.log;
}
EOF
```

> Note: replace the `server` entries with the actual master node IP addresses.

**docker-compose configuration**

```shell
cat <<-EOF | sudo tee /etc/nginx/docker-compose.yaml > /dev/null
version: "3"
services:
  nginx:
    container_name: nginx
    image: nginx:1.21-alpine
    volumes:
      - "./stream:/etc/nginx/stream:ro"
      - "./conf.d:/etc/nginx/conf.d:ro"
      - "./nginx.conf:/etc/nginx/nginx.conf:ro"
      - "./logs:/var/log/nginx"
      - "/usr/share/zoneinfo/Asia/Shanghai:/etc/localtime:ro"
    restart: always
    network_mode: "host"
EOF
```

**Start nginx**

```shell
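# Optional pre-flight check (a sketch, not part of the original guide): nginx
# refuses to start when braces in a stream config are unbalanced, so count
# them before bringing the container up. Skipped when the file is absent.
conf=/etc/nginx/stream/apiserver.conf
if [ -f "$conf" ]; then
  opens=$(tr -cd '{' < "$conf" | wc -c)
  closes=$(tr -cd '}' < "$conf" | wc -c)
  [ "$opens" -eq "$closes" ] || echo "WARN: unbalanced braces in $conf"
fi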
docker-compose -f /etc/nginx/docker-compose.yaml up -d
```

## Install keepalived

**Configure keepalived**

```shell
mkdir /etc/keepalived
cat <<-EOF | sudo tee /etc/keepalived/keepalived.conf > /dev/null
! Configuration File for keepalived

global_defs {
    max_auto_priority -1
    enable_script_security
    vrrp_skip_check_adv_addr
}

include /etc/keepalived/keepalived_apiserver.conf
EOF

cat <<-EOF | sudo tee /etc/keepalived/keepalived_apiserver.conf > /dev/null
vrrp_script apiserver {
    # path of the health-check script
    script "/etc/keepalived/chk_apiserver.sh"
    # built-in user that runs the check script
    user keepalived
    # seconds between script invocations
    interval 1
    # failures required to count the check as down
    fall 5
    # successes required to count the check as up
    rise 3
    # adjust the priority by this weight
    weight -50
}

# If there are multiple vrrp_instance blocks (including any pulled in through
# the include above), their names must be unique.
vrrp_instance apiserver {
    # whether this node starts as master or backup
    state MASTER
    # interface on the inside network that VRRP binds to
    interface eth0
    # virtual router id; nodes sharing this id form one master/backup group
    virtual_router_id 100
    # initial priority
    # effective priority after the check script runs:
    #   (1) positive weight: weight is added to priority while the script succeeds
    #   (2) negative weight: weight is added to priority (lowering it) when the script fails
    priority 200
    # authentication for joining the group
    authentication {
        auth_type PASS
        auth_pass pwd100
    }
    # keepalived in unicast mode
    ## unicast source address
    unicast_src_ip 192.168.31.103
    ## unicast peer addresses
    unicast_peer {
        192.168.31.79
    }
    # VIP address
    virtual_ipaddress {
        192.168.31.100
    }
    # health-check script
    track_script {
        apiserver
    }
}
EOF
```

**keepalived health-check script**

```shell
cat <<-EOF | sudo tee /etc/keepalived/chk_apiserver.sh > /dev/null
#!/bin/sh
count=\$(netstat -lntup | egrep ':6443' | wc -l)
if [ "\$count" -ge 1 ]; then
    # exit status 0: check passed
    exit 0
else
    # exit status 1: check failed
    exit 1
fi
EOF
chmod +x /etc/keepalived/chk_apiserver.sh
```

**docker-compose file**

```shell
cat <<-EOF | sudo tee /etc/keepalived/docker-compose.yaml > /dev/null
version: "3"
services:
  keepalived:
    container_name: keepalived
    image: jiaxzeng/keepalived:2.2.7-alpine3.16
    volumes:
      - "/usr/share/zoneinfo/Asia/Shanghai:/etc/localtime:ro"
      - ".:/etc/keepalived"
    cap_add:
      - NET_ADMIN
    network_mode: "host"
    restart: always
EOF
```

**Start keepalived**

```shell
docker-compose -f /etc/keepalived/docker-compose.yaml up -d
```

# Install the master services

## Install kube-apiserver

**Create the directories**

```shell
mkdir -p \
  /etc/kubernetes/conf
mkdir -p /var/log/kubernetes/kube-apiserver
```

**Fetch the certificates**

```shell
scp -r k8s-master01:/etc/kubernetes/pki /etc/kubernetes/
```

**Verify that the apiserver certificate is usable**

```shell
MASTER_VIP=192.168.31.100
netcar=`ip r | awk '/default via/ {print $5}'`
[ -n "$netcar" ] && MASTER02_IP=`ip r | awk -v netcar=$netcar '{if($3==netcar) print $9}'` || echo '$netcar is null'
openssl x509 -noout -in /etc/kubernetes/pki/apiserver.crt -checkip $MASTER02_IP | grep NOT
openssl x509 -noout -in /etc/kubernetes/pki/apiserver.crt -checkip $MASTER_VIP | grep NOT
```

> **Note**: if there is no output at all, the apiserver certificate can be used as-is. If the output contains `does NOT match certificate`, the certificate has to be regenerated; follow the "generate service certificates (certificates used by the apiserver)" step of the kube-apiserver section in [Installing the base components from binaries](./install_binaries_kubernetes.md).

**Copy the binaries**

```shell
scp k8s-master01:/usr/local/bin/{kube-apiserver,kubectl} /usr/local/bin
```

**Fetch the audit configuration**

```shell
scp k8s-master01:/etc/kubernetes/conf/kube-apiserver-audit.yml /etc/kubernetes/conf/
```

**Create the kube-apiserver systemd unit**

```shell
scp k8s-master01:/usr/lib/systemd/system/kube-apiserver.service /usr/lib/systemd/system/
```

**Start kube-apiserver**

```shell
systemctl daemon-reload
systemctl enable kube-apiserver.service --now
```

**Verify**

```shell
curl -sk --cacert /etc/kubernetes/pki/ca.crt --cert /etc/kubernetes/pki/apiserver.crt --key /etc/kubernetes/pki/apiserver.key https://localhost:6443/healthz && echo
```

## Install kube-controller-manager

**Create the log directory**

```shell
mkdir /var/log/kubernetes/kube-controller-manager
```

**Copy the binary**

```shell
scp k8s-master01:/usr/local/bin/kube-controller-manager /usr/local/bin
```

**Generate the kubeconfig for connecting to the cluster**

```shell
scp k8s-master01:/etc/kubernetes/controller-manager.conf /etc/kubernetes/
MASTER_VIP=192.168.31.100
PORT=6443
sed -ri 's@(server).*@\1: https://'"${MASTER_VIP}:${PORT}"'@g' /etc/kubernetes/controller-manager.conf
```

**kube-controller-manager systemd unit**

```shell
scp k8s-master01:/usr/lib/systemd/system/kube-controller-manager.service /usr/lib/systemd/system
```

**Start kube-controller-manager**
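Before starting the service, it can be worth confirming what the `sed` expression above actually does to the `server:` line. Here is a self-contained sketch against a throwaway file (the file contents and addresses are illustrative, not the live kubeconfig):

```shell
# Reproduce the kubeconfig rewrite on a temp file.
tmp=$(mktemp)
printf 'cluster:\n  server: https://192.168.31.103:6443\n' > "$tmp"

MASTER_VIP=192.168.31.100
PORT=6443
# Same expression as above: replace everything from "server" to end of line
# with the VIP URL.
sed -ri 's@(server).*@\1: https://'"${MASTER_VIP}:${PORT}"'@g' "$tmp"

new_server=$(grep 'server:' "$tmp")
echo "$new_server"   # the server line now points at the VIP
rm -f "$tmp"
```

The same pattern applies verbatim to `scheduler.conf`, `admin.conf`, `kubelet.conf`, and `proxy.conf` later in this guide.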
```shell
systemctl daemon-reload
systemctl enable kube-controller-manager.service --now
```

**Verify**

```shell
curl -sk --cacert /etc/kubernetes/pki/ca.crt --cert /etc/kubernetes/pki/controller-manager.crt --key /etc/kubernetes/pki/controller-manager.key https://localhost:10257/healthz && echo
```

## Install kube-scheduler

**Create the log directory**

```shell
mkdir /var/log/kubernetes/kube-scheduler
```

**Copy the binary**

```shell
scp k8s-master01:/usr/local/bin/kube-scheduler /usr/local/bin
```

**Generate the kubeconfig for connecting to the cluster**

```shell
scp k8s-master01:/etc/kubernetes/scheduler.conf /etc/kubernetes/
MASTER_VIP=192.168.31.100
PORT=6443
sed -ri 's@(server).*@\1: https://'"${MASTER_VIP}:${PORT}"'@g' /etc/kubernetes/scheduler.conf
```

**Create the kube-scheduler systemd unit**

```shell
scp k8s-master01:/usr/lib/systemd/system/kube-scheduler.service /usr/lib/systemd/system
```

**Start kube-scheduler**

```shell
systemctl daemon-reload
systemctl enable kube-scheduler.service --now
```

**Verify**

```shell
curl -sk --cacert /etc/kubernetes/pki/ca.crt --cert /etc/kubernetes/pki/scheduler.crt --key /etc/kubernetes/pki/scheduler.key https://localhost:10259/healthz && echo
```

## Set up client access

```shell
scp k8s-master01:/etc/kubernetes/admin.conf /etc/kubernetes/
MASTER_VIP=192.168.31.100
PORT=6443
sed -ri 's@(server).*@\1: https://'"${MASTER_VIP}:${PORT}"'@g' /etc/kubernetes/admin.conf
mkdir ~/.kube
cp /etc/kubernetes/admin.conf ~/.kube/config
```

# Install the node services

See [Adding a worker node](./k8s_add_work_node.md).

# Mark master nodes as unschedulable

```shell
# label the node as a master node
kubectl label node 192.168.31.79 node-role.kubernetes.io/master=""
# taint every node carrying the master role as unschedulable
kubectl taint node -l node-role.kubernetes.io/master node-role.kubernetes.io/master="":NoSchedule --overwrite
```

# Update the configuration on the existing nodes

## Master services

**Update the config files**

```shell
MASTER_VIP=192.168.31.100
PORT=6443
sed -ri 's@(server).*@\1: https://'"${MASTER_VIP}:${PORT}"'@g' /etc/kubernetes/controller-manager.conf
sed -ri 's@(server).*@\1: https://'"${MASTER_VIP}:${PORT}"'@g' /etc/kubernetes/scheduler.conf
sed -ri \
  's@(server).*@\1: https://'"${MASTER_VIP}:${PORT}"'@g' /etc/kubernetes/admin.conf
sed -ri 's@(server).*@\1: https://'"${MASTER_VIP}:${PORT}"'@g' ~/.kube/config
```

**Restart the master services**

```shell
systemctl restart kube-apiserver kube-controller-manager kube-scheduler
```

**Verify**

```shell
curl -sk --cacert /etc/kubernetes/pki/ca.crt --cert /etc/kubernetes/pki/apiserver.crt --key /etc/kubernetes/pki/apiserver.key https://localhost:6443/healthz && echo
curl -sk --cacert /etc/kubernetes/pki/ca.crt --cert /etc/kubernetes/pki/controller-manager.crt --key /etc/kubernetes/pki/controller-manager.key https://localhost:10257/healthz && echo
curl -sk --cacert /etc/kubernetes/pki/ca.crt --cert /etc/kubernetes/pki/scheduler.crt --key /etc/kubernetes/pki/scheduler.key https://localhost:10259/healthz && echo
```

## Node services

**Update the config files**

```shell
MASTER_VIP=192.168.31.100
PORT=6443
sed -ri 's@(server).*@\1: https://'"${MASTER_VIP}:${PORT}"'@g' /etc/kubernetes/kubelet.conf
sed -ri 's@(server).*@\1: https://'"${MASTER_VIP}:${PORT}"'@g' /etc/kubernetes/proxy.conf
```

**Restart the node services**

```shell
systemctl restart kubelet kube-proxy
```

**Verify**

```shell
# kubelet
curl http://localhost:10248/healthz && echo
# kube-proxy
curl http://localhost:10249/healthz && echo
```

# Additional iptables rules

```shell
# nginx layer-4 proxy for the apiserver
iptables -t filter -I INPUT -p tcp --dport 6443 -m comment --comment "k8s vip ports" -j ACCEPT
# keepalived heartbeat; not needed when keepalived runs in unicast mode
iptables -t filter -I INPUT -p vrrp -s 192.168.31.0/24 -d 224.0.0.18 -m comment --comment "keepalived Heartbeat" -j ACCEPT
```
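Once the rules are in place and the services restarted, the individual health checks above can be re-run in one pass. A sketch (run it on a master node; `unreachable` simply means nothing is listening on that port from where the script runs):

```shell
# Poll each health endpoint used in this guide and print one line per
# component. The apiserver/controller-manager/scheduler ports serve HTTPS,
# kubelet and kube-proxy serve plain HTTP, so both schemes are tried.
status=$(for comp in apiserver:6443 controller-manager:10257 scheduler:10259 kubelet:10248 kube-proxy:10249; do
  name=${comp%%:*}; port=${comp##*:}
  if curl -sk --max-time 2 "https://localhost:${port}/healthz" >/dev/null 2>&1 ||
     curl -s  --max-time 2 "http://localhost:${port}/healthz"  >/dev/null 2>&1; then
    echo "${name} ok"
  else
    echo "${name} unreachable"
  fi
done)
echo "$status"
```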