[TOC]
### **Prerequisites**
1. Three hosts (kernel 4.19, CentOS 7.6)
* master1: 192.168.92.102 (fd92::102)
* master2: 192.168.92.104 (fd92::104)
* node1: 192.168.92.106 (fd92::106)
* vip: 192.168.92.200 (fd92::200)
2. Set the kernel parameters
```
# Required by kubeadm
# https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
# Required for dual stack and for Calico
# https://projectcalico.docs.tigera.io/networking/ipv6
# https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/dual-stack-support/#create-a-dual-stack-cluster
net.ipv6.conf.all.forwarding = 1
net.ipv4.conf.all.forwarding = 1
# Needed in practice
net.ipv4.ip_forward = 1
net.ipv4.conf.all.rp_filter = 0
```
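To survive a reboot, these parameters are conventionally placed in a drop-in file under `/etc/sysctl.d/` and applied with `sysctl --system`. A minimal sketch follows; the `/etc/sysctl.d/k8s.conf` path and the `modprobe br_netfilter` step are the usual convention rather than something this document specifies, and the file is written to the current directory here so the sketch runs unprivileged:

```
# The net.bridge.bridge-nf-call-* keys only exist once the br_netfilter
# module is loaded, so load it first (as root): modprobe br_netfilter
# Write the parameters to a drop-in file (demo path; install it as /etc/sysctl.d/k8s.conf)
cat > ./k8s-sysctl.conf <<'EOF'
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv6.conf.all.forwarding = 1
net.ipv4.conf.all.forwarding = 1
net.ipv4.ip_forward = 1
net.ipv4.conf.all.rp_filter = 0
EOF
# As root, install and apply:
#   cp ./k8s-sysctl.conf /etc/sysctl.d/k8s.conf && sysctl --system
cat ./k8s-sysctl.conf
```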
3. Install docker, kubeadm and kubelet (v1.23)
4. Install keepalived and bind the VIP to master1
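For step 4, a minimal keepalived sketch for master1 is shown below. Only the VIP comes from this document; the interface name, router id and priority are illustrative assumptions you must adapt. The IPv4 VIP is sufficient here, since `controlPlaneEndpoint` below uses the IPv4 address:

```
vrrp_instance VI_1 {
    state MASTER          # master1 holds the VIP; use BACKUP with a lower priority on master2
    interface eth0        # assumption: replace with the actual NIC name
    virtual_router_id 51  # assumption: any id, but identical on all keepalived peers
    priority 100
    advert_int 1
    virtual_ipaddress {
        192.168.92.200/24
    }
}
```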
### **Install Master1**
According to the [Kubernetes documentation](https://kubernetes.io/docs/concepts/services-networking/dual-stack/), to support dual stack the following flags of the Kubernetes components must be configured with both address families:
* kube-apiserver
    * `--service-cluster-ip-range=<IPv4 CIDR>,<IPv6 CIDR>`
* kube-controller-manager
    * `--cluster-cidr=<IPv4 CIDR>,<IPv6 CIDR>`
    * `--service-cluster-ip-range=<IPv4 CIDR>,<IPv6 CIDR>`
    * `--node-cidr-mask-size-ipv4|--node-cidr-mask-size-ipv6` (defaults to /24 for IPv4 and /64 for IPv6)
* kube-proxy
    * `--cluster-cidr=<IPv4 CIDR>,<IPv6 CIDR>`
* kubelet
    * `--node-ip=<IP>,<IP6>`
We configure these flags through a kubeadm config file. Create kubeadm-init-master.yaml; the file format is documented in [kubeadm-config.v1beta3](https://kubernetes.io/docs/reference/config-api/kubeadm-config.v1beta3/).
```
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration  # private settings of the initial master node
bootstrapTokens:  # a bootstrapToken can be pinned here; by default it expires and is deleted after 24 hours
- token: "9a08jv.c0izixklcxtmnze7"
  description: "kubeadm bootstrap token"
  ttl: "24h"
certificateKey: "e6a2eb8581237ab72a4f494f30285ec12a9694d750b9785706a83bfcbbbd2204"  # a certificateKey can be pinned here; by default it expires and is deleted after two hours
localAPIEndpoint:
  advertiseAddress: "192.168.92.102"  # the control plane talks over IPv4
nodeRegistration:
  name: master1
  kubeletExtraArgs:
    node-ip: 192.168.92.102,fd92::102  # the control plane talks over IPv4, so put the IPv4 address first
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration  # settings shared by all master nodes
imageRepository: registry.aliyuncs.com/google_containers
kubernetesVersion: v1.23.1
controlPlaneEndpoint: 192.168.92.200  # we use IPv4 for the control plane
networking:
  podSubnet: 172.26.0.0/16,172:26::/64  # with IPv4 first, kubectl get node shows the IPv4 address
  serviceSubnet: 10.96.0.0/16,10:96::/112  # with IPv4 first, kubectl get service shows the IPv4 address
etcd:
  local:
    extraArgs:
      listen-metrics-urls: http://[::]:2381  # listen on both IPv4 and IPv6
apiServer:
  certSANs: ["192.168.92.200", "fd92::200"]
  extraArgs:
    service-cluster-ip-range: 10.96.0.0/16,10:96::/112
    bind-address: "::"
    secure-port: "6443"
scheduler:
  extraArgs:
    bind-address: "::"
controllerManager:
  extraArgs:
    bind-address: "::"
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
evictionHard:
  imagefs.available: 5%
  memory.available: 5%
  nodefs.available: 5%
  nodefs.inodesFree: 5%
healthzBindAddress: "::"
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
bindAddress: "::"
clusterCIDR: "172.26.0.0/16,172:26::/64"  # Pod address ranges
mode: "iptables"
```
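Before running kubeadm init, the dual-stack CIDR pairs above can be sanity-checked locally; the family listed first becomes the cluster's primary family. A quick sketch, assuming python3 is available on the host:

```
python3 - <<'EOF'
import ipaddress
# The CIDR pairs from kubeadm-init-master.yaml above
pairs = {
    "podSubnet":     ("172.26.0.0/16", "172:26::/64"),
    "serviceSubnet": ("10.96.0.0/16", "10:96::/112"),
}
for name, (v4, v6) in pairs.items():
    n4 = ipaddress.ip_network(v4)  # raises ValueError on a malformed CIDR
    n6 = ipaddress.ip_network(v6)
    assert n4.version == 4 and n6.version == 6, name  # IPv4 listed first -> primary family
    print(f"{name}: {n4} (primary) + {n6} ok")
EOF
```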
Note that `advertiseAddress` above maps to kube-apiserver's `--advertise-address`, which can only be configured as a single-stack address. See [this page](https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/dual-stack-support/#create-a-dual-stack-cluster).
Run:
```
$ kubeadm init --config kubeadm-init-master.yaml --upload-certs
```
On success it prints output like the following, telling you how to add master or worker nodes:
```
...
You can now join any number of the control-plane node running the following command on each as root:
kubeadm join 192.168.92.200:6443 --token 9a08jv.c0izixklcxtmnze7 \
--discovery-token-ca-cert-hash sha256:3814ea70b6e664b88bb0f6f5f40b83f0aaf2d24fa8db3176b7c62b4880f9eb35 \
--control-plane --certificate-key e6a2eb8581237ab72a4f494f30285ec12a9694d750b9785706a83bfcbbbd2204
Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.92.200:6443 --token 9a08jv.c0izixklcxtmnze7 \
--discovery-token-ca-cert-hash sha256:3814ea70b6e664b88bb0f6f5f40b83f0aaf2d24fa8db3176b7c62b4880f9eb35
```
### **Install Master2**
The kubeadm init output above tells you how to add masters and nodes. If running the following command shows that the certificate-key has expired (i.e. the token for the kubeadm-certs secret is gone):
```
$ kubeadm token list
TOKEN TTL EXPIRES USAGES DESCRIPTION EXTRA GROUPS
...
r1gz60.ozrpihdqv6g6mmyd 1h 2022-02-21T13:46:43Z <none> Proxy for managing TTL for the kubeadm-certs secret <none>
```
then you can re-upload the certificates with the following command, encrypting them with the original key:
```
$ kubeadm init phase upload-certs --certificate-key e6a2eb8581237ab72a4f494f30285ec12a9694d750b9785706a83bfcbbbd2204
```
If the bootstrapToken has expired, recreate it with the command below; here we reuse the original token:
```
$ kubeadm token create 9a08jv.c0izixklcxtmnze7
```
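If the `--discovery-token-ca-cert-hash` value has also been lost, it can be recomputed: the hash kubeadm prints is the SHA-256 of the DER-encoded public key of `/etc/kubernetes/pki/ca.crt`. The sketch below demonstrates the pipeline against a throwaway self-signed certificate so it runs anywhere; on a real master, point it at `/etc/kubernetes/pki/ca.crt` instead (or simply run `kubeadm token create --print-join-command`):

```
# Generate a throwaway cert as a stand-in for /etc/kubernetes/pki/ca.crt
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=demo-ca" \
  -keyout ./demo-ca.key -out ./demo-ca.crt 2>/dev/null
# SHA-256 of the DER-encoded public key -- the same value kubeadm prints
hash=$(openssl x509 -pubkey -in ./demo-ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //')
echo "sha256:${hash}"
```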
Based on the output above, first create the following kubeadm-join-master.yaml file on Master2:
```
apiVersion: kubeadm.k8s.io/v1beta3
kind: JoinConfiguration
controlPlane:
  certificateKey: e6a2eb8581237ab72a4f494f30285ec12a9694d750b9785706a83bfcbbbd2204
  localAPIEndpoint:
    advertiseAddress: 192.168.92.104
discovery:
  bootstrapToken:
    apiServerEndpoint: 192.168.92.200:6443
    token: "9a08jv.c0izixklcxtmnze7"
    caCertHashes:
    - "sha256:3814ea70b6e664b88bb0f6f5f40b83f0aaf2d24fa8db3176b7c62b4880f9eb35"
nodeRegistration:
  name: master2
  kubeletExtraArgs:
    node-ip: 192.168.92.104,fd92::104
```
Then run the join command:
```
$ kubeadm join --config kubeadm-join-master.yaml
```
### **Install Node1**
Create the following kubeadm-join-node.yaml file on Node1 (compared with kubeadm-join-master.yaml, it simply omits the controlPlane section):
```
apiVersion: kubeadm.k8s.io/v1beta3
kind: JoinConfiguration
discovery:
  bootstrapToken:
    apiServerEndpoint: "192.168.92.200:6443"
    token: 9a08jv.c0izixklcxtmnze7
    caCertHashes:
    - sha256:5b48c790dcf116d0e5da83ec972b223b3eff8789b82d2762b353e1319128a7cd
nodeRegistration:
  name: node1
  kubeletExtraArgs:
    node-ip: 192.168.92.106,fd92::106
```
Then run:
```
$ kubeadm join --config kubeadm-join-node.yaml
```
### **Install the Network Plugin**
Note: [Calico's overlay networks (IPIP and VXLAN) only support IPv4, not IPv6](https://projectcalico.docs.tigera.io/networking/vxlan-ipip#ipv46-address-support).
First download Calico's YAML manifest. According to Calico's documentation, calico-typha.yaml should be used once the cluster exceeds 50 nodes. calico-typha's main purpose is to [reduce the load on the apiserver](https://projectcalico.docs.tigera.io/getting-started/kubernetes/hardway/install-typha): Felix on every node watches the apiserver, so with many nodes the watch load on the apiserver becomes heavy. With calico-typha installed, Felix no longer watches the apiserver directly; instead calico-typha watches the apiserver and then communicates with each Felix.
Here we download the latest [v3.21 calico-typha.yaml](https://docs.projectcalico.org/v3.21/manifests/calico-typha.yaml) and modify it following the [Enable Dual Stack](https://projectcalico.docs.tigera.io/archive/v3.21/networking/ipv6#enable-dual-stack) guide:
1. Configure IPAM: set both `assign_ipv4` and `assign_ipv6` to true
```
"ipam": {
    "type": "calico-ipam",
    "assign_ipv4": "true",
    "assign_ipv6": "true"
},
```
2. In the calico-node container of the calico-node DaemonSet, set the following two environment variables:
| Variable name | Value |
| --- | --- |
| `IP6` | `autodetect` |
| `FELIX_IPV6SUPPORT` | `true` |
Note that the `IP6` environment variable does not exist yet and must be added, while `FELIX_IPV6SUPPORT` already exists and only needs its value changed from false to true:
```
- name: FELIX_IPV6SUPPORT
  value: "true"
- name: IP6
  value: "autodetect"
```
3. Change the IPIP mode from the default "Always" to "CrossSubnet". Note: IPv4 supports IPIP or VXLAN and defaults to IPIP in Always mode, while IPv6 supports neither IPIP nor VXLAN (so pods on nodes in different subnets cannot communicate over IPv6).
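The change in step 3 is made on the same calico-node container; the relevant environment variable in the standard manifest is `CALICO_IPV4POOL_IPIP`, which ships with the value "Always" and is changed like this:

```
- name: CALICO_IPV4POOL_IPIP
  value: "CrossSubnet"
```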
4. The file also contains two PodDisruptionBudget objects; change their apiVersion from `policy/v1beta1` to `policy/v1`.
5. Install:
```
$ kubectl apply -f calico-typha.yaml
```
### **Appendix 1: Joining a Master or Node Without a --config File**
Above we used a config file when joining masters and nodes. Here is how to pass the parameters to kubeadm join directly instead.
1. Adding a master
Since the kubeadm join command does not support a `--node-ip` flag, first create the file `/etc/sysconfig/kubelet` on Master2 with the following content:
```
KUBELET_EXTRA_ARGS="--node-ip 192.168.92.104,fd92::104"
```
Then run the following command (it carries one extra `--node-name` flag compared with the kubeadm init output; this flag could also go into KUBELET_EXTRA_ARGS above):
```
$ kubeadm join 192.168.92.200:6443 --token 9a08jv.c0izixklcxtmnze7 \
--discovery-token-ca-cert-hash sha256:3814ea70b6e664b88bb0f6f5f40b83f0aaf2d24fa8db3176b7c62b4880f9eb35 \
--control-plane --certificate-key e6a2eb8581237ab72a4f494f30285ec12a9694d750b9785706a83bfcbbbd2204 \
--node-name master2
```
2. Adding a node
As when adding a master, first create `/etc/sysconfig/kubelet` with the `--node-ip` argument, then run the join command:
```
$ kubeadm join 192.168.92.200:6443 --token 9a08jv.c0izixklcxtmnze7 \
--discovery-token-ca-cert-hash sha256:3814ea70b6e664b88bb0f6f5f40b83f0aaf2d24fa8db3176b7c62b4880f9eb35 \
--node-name node1
```
### **FAQ**
**Q: The apiserver's `--advertise-address` flag can only be configured single-stack. What is the impact?**
A: Presumably it affects the `kubernetes` service: when a pod reaches the apiserver via `https://kubernetes:443`, the request is ultimately forwarded to this address.
### **References**
* https://kubernetes.io/docs/concepts/services-networking/dual-stack/
* https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/dual-stack-support/
* https://kubernetes.io/docs/tasks/network/validate-dual-stack/
* https://projectcalico.docs.tigera.io/networking/ipv6#enable-dual-stack