[TOC]
### **Linux**
---
##### **0. Preparation**
First, install git and Go on Linux, and set environment variables such as GOPATH and GOBIN.
To pull Go dependency packages smoothly, it is best to configure a module proxy:
```
$ go env -w GOPROXY="https://goproxy.cn,direct"
```
In addition, install gcc, otherwise you will hit this [issue](https://github.com/kubernetes-sigs/kubebuilder/issues/2828).
##### **1. Install kubebuilder**
Download the latest binary from the [github releases page](https://github.com/kubernetes-sigs/kubebuilder/releases); here we download the latest [kubebuilder_linux_amd64](https://github.com/kubernetes-sigs/kubebuilder/releases/download/v3.9.1/kubebuilder_linux_amd64). Put it in a directory on your PATH, rename it to kubebuilder, add execute permission, and then check the version:
```
$ kubebuilder version
Version: main.version{KubeBuilderVersion:"3.9.1", KubernetesVendor:"1.26.0", GitCommit:"cbccafa75d58bf6ac84c2f5d34ad045980f551be", BuildDate:"2023-03-08T21:23:07Z", GoOs:"linux", GoArch:"amd64"}
```
##### **2. Create the cubevk-operator project**
Here we clone the freshly initialized repository (note that at this point the repository contains only a README.md file; there must be no other directories):
```
$ git clone -b develop https://gitlab.ctyun.cn/ctg-dcos/cubevk-operator.git
```
Next, initialize the Go module for the project:
```
$ go mod init gitlab.ctyun.cn/ctg-dcos/cubevk-operator
```
Run the following command to scaffold the project (note: only the first command below is executed by us; the rest are invoked automatically):
```
$ kubebuilder init --domain ctyun.cn
Writing kustomize manifests for you to edit...
Writing scaffold for you to edit...
Get controller runtime:
$ go get sigs.k8s.io/controller-runtime@v0.14.1
go: downloading sigs.k8s.io/controller-runtime v0.14.1
...
go: downloading github.com/josharian/intern v1.0.0
Update dependencies:
$ go mod tidy
go: downloading github.com/stretchr/testify v1.8.0
...
go: downloading github.com/benbjohnson/clock v1.1.0
Next: define a resource with:
$ kubebuilder create api
```
Let's look at the files that were generated:
```
$ tree
.
├── config
│   ├── default
│   │   ├── kustomization.yaml
│   │   ├── manager_auth_proxy_patch.yaml
│   │   └── manager_config_patch.yaml
│   ├── manager
│   │   ├── kustomization.yaml
│   │   └── manager.yaml
│   ├── prometheus
│   │   ├── kustomization.yaml
│   │   └── monitor.yaml
│   └── rbac
│       ├── auth_proxy_client_clusterrole.yaml
│       ├── auth_proxy_role_binding.yaml
│       ├── auth_proxy_role.yaml
│       ├── auth_proxy_service.yaml
│       ├── kustomization.yaml
│       ├── leader_election_role_binding.yaml
│       ├── leader_election_role.yaml
│       ├── role_binding.yaml
│       └── service_account.yaml
├── Dockerfile
├── go.mod
├── go.sum
├── hack
│   └── boilerplate.go.txt
├── main.go
├── Makefile
├── PROJECT
└── README.md

6 directories, 24 files
```
##### **3. Create the API and generate the initial CRD**
```
$ kubebuilder create api --group ccse --version v1 --kind CubeVK
Create Resource [y/n]
y
Create Controller [y/n]
y
Writing kustomize manifests for you to edit...
Writing scaffold for you to edit...
api/v1/cubevk_types.go
controllers/cubevk_controller.go
Update dependencies:
$ go mod tidy
Running make:
$ make generate
mkdir -p /root/cubevk-operator/bin
test -s /root/cubevk-operator/bin/controller-gen && /root/cubevk-operator/bin/controller-gen --version | grep -q v0.11.1 || \
GOBIN=/root/cubevk-operator/bin go install sigs.k8s.io/controller-tools/cmd/controller-gen@v0.11.1
/root/cubevk-operator/bin/controller-gen object:headerFile="hack/boilerplate.go.txt" paths="./..."
Next: implement your new API and generate the manifests (e.g. CRDs,CRs) with:
$ make manifests
```
At this point, the directory structure is as follows:
```
$ tree
.
├── api
│   └── v1
│       ├── cubevk_types.go
│       ├── groupversion_info.go
│       └── zz_generated.deepcopy.go
├── bin
│   └── controller-gen
├── config
│   ├── crd
│   │   ├── kustomization.yaml
│   │   ├── kustomizeconfig.yaml
│   │   └── patches
│   │       ├── cainjection_in_cubevks.yaml
│   │       └── webhook_in_cubevks.yaml
│   ├── default
│   │   ├── kustomization.yaml
│   │   ├── manager_auth_proxy_patch.yaml
│   │   └── manager_config_patch.yaml
│   ├── manager
│   │   ├── kustomization.yaml
│   │   └── manager.yaml
│   ├── prometheus
│   │   ├── kustomization.yaml
│   │   └── monitor.yaml
│   ├── rbac
│   │   ├── auth_proxy_client_clusterrole.yaml
│   │   ├── auth_proxy_role_binding.yaml
│   │   ├── auth_proxy_role.yaml
│   │   ├── auth_proxy_service.yaml
│   │   ├── cubevk_editor_role.yaml
│   │   ├── cubevk_viewer_role.yaml
│   │   ├── kustomization.yaml
│   │   ├── leader_election_role_binding.yaml
│   │   ├── leader_election_role.yaml
│   │   ├── role_binding.yaml
│   │   └── service_account.yaml
│   └── samples
│       └── ccse_v1_cubevk.yaml
├── controllers
│   ├── cubevk_controller.go
│   └── suite_test.go
├── Dockerfile
├── go.mod
├── go.sum
├── hack
│   └── boilerplate.go.txt
├── main.go
├── Makefile
├── PROJECT
└── README.md
```
There are only two files we need to modify: `api/v1/cubevk_types.go` and `controllers/cubevk_controller.go`.
The types file defines the data structures of the CRD. Its initial content is as follows:
```
package v1

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// EDIT THIS FILE! THIS IS SCAFFOLDING FOR YOU TO OWN!
// NOTE: json tags are required. Any new fields you add must have json tags for the fields to be serialized.

// CubeVKSpec defines the desired state of CubeVK
type CubeVKSpec struct {
	// INSERT ADDITIONAL SPEC FIELDS - desired state of cluster
	// Important: Run "make" to regenerate code after modifying this file

	// Foo is an example field of CubeVK. Edit cubevk_types.go to remove/update
	Foo string `json:"foo,omitempty"`
}

// CubeVKStatus defines the observed state of CubeVK
type CubeVKStatus struct {
	// INSERT ADDITIONAL STATUS FIELD - define observed state of cluster
	// Important: Run "make" to regenerate code after modifying this file
}

//+kubebuilder:object:root=true
//+kubebuilder:subresource:status

// CubeVK is the Schema for the cubevks API
type CubeVK struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec   CubeVKSpec   `json:"spec,omitempty"`
	Status CubeVKStatus `json:"status,omitempty"`
}

//+kubebuilder:object:root=true

// CubeVKList contains a list of CubeVK
type CubeVKList struct {
	metav1.TypeMeta `json:",inline"`
	metav1.ListMeta `json:"metadata,omitempty"`
	Items           []CubeVK `json:"items"`
}

func init() {
	SchemeBuilder.Register(&CubeVK{}, &CubeVKList{})
}
```
The controller file is where we implement the reconcile logic for the CR. Its initial content is as follows:
```
package controllers

import (
	"context"

	"k8s.io/apimachinery/pkg/runtime"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/log"

	ccsev1 "gitlab.ctyun.cn/ctg-dcos/cubevk-operator/api/v1"
)

// CubeVKReconciler reconciles a CubeVK object
type CubeVKReconciler struct {
	client.Client
	Scheme *runtime.Scheme
}

//+kubebuilder:rbac:groups=ccse.ctyun.cn,resources=cubevks,verbs=get;list;watch;create;update;patch;delete
//+kubebuilder:rbac:groups=ccse.ctyun.cn,resources=cubevks/status,verbs=get;update;patch
//+kubebuilder:rbac:groups=ccse.ctyun.cn,resources=cubevks/finalizers,verbs=update

// Reconcile is part of the main kubernetes reconciliation loop which aims to
// move the current state of the cluster closer to the desired state.
// TODO(user): Modify the Reconcile function to compare the state specified by
// the CubeVK object against the actual cluster state, and then
// perform operations to make the cluster state reflect the state specified by
// the user.
//
// For more details, check Reconcile and its Result here:
// - https://pkg.go.dev/sigs.k8s.io/controller-runtime@v0.14.1/pkg/reconcile
func (r *CubeVKReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	_ = log.FromContext(ctx)

	// TODO(user): your logic here

	return ctrl.Result{}, nil
}

// SetupWithManager sets up the controller with the Manager.
func (r *CubeVKReconciler) SetupWithManager(mgr ctrl.Manager) error {
	return ctrl.NewControllerManagedBy(mgr).
		For(&ccsev1.CubeVK{}).
		Complete(r)
}
```
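To show where the reconcile logic would go, here is a minimal sketch (an assumption for illustration, not the actual cubevk-operator implementation) of a Reconcile body that fetches the CubeVK object and tolerates its deletion. It reuses the imports already present in `controllers/cubevk_controller.go`:
```
// Sketch only: a possible starting point for the Reconcile body in
// controllers/cubevk_controller.go (uses the imports already in that file).
func (r *CubeVKReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	logger := log.FromContext(ctx)

	// Fetch the CubeVK instance that triggered this reconcile.
	var cubevk ccsev1.CubeVK
	if err := r.Get(ctx, req.NamespacedName, &cubevk); err != nil {
		// The CR may already have been deleted; ignore not-found errors.
		return ctrl.Result{}, client.IgnoreNotFound(err)
	}

	logger.Info("reconciling CubeVK", "name", cubevk.Name)

	// TODO: compare the desired state in cubevk.Spec with the actual cluster
	// state, create or update the owned resources, and update cubevk.Status.

	return ctrl.Result{}, nil
}
```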
We can also take a look at the auto-generated sample CR file `config/samples/ccse_v1_cubevk.yaml`:
```
apiVersion: ccse.ctyun.cn/v1
kind: CubeVK
metadata:
  labels:
    app.kubernetes.io/name: cubevk
    app.kubernetes.io/instance: cubevk-sample
    app.kubernetes.io/part-of: cubevk-operator
    app.kubernetes.io/managed-by: kustomize
    app.kubernetes.io/created-by: cubevk-operator
  name: cubevk-sample
spec:
  # TODO(user): Add fields here
```
##### **4. Fill in the types**
We modify `api/v1/cubevk_types.go`, replacing the scaffolded `Foo` field with the fields our CRD actually needs, and then run `make manifests` to regenerate the CRD manifests.
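As an illustration only (the field names below are hypothetical, not the real CubeVK schema), the edit might look like the following sketch; after editing, `make manifests` regenerates the CRD under `config/crd/bases`:
```
// api/v1/cubevk_types.go -- illustrative fields only; replace with the real CubeVK schema.

// CubeVKSpec defines the desired state of CubeVK
type CubeVKSpec struct {
	// Replicas is the desired number of virtual-kubelet instances (hypothetical field).
	// +kubebuilder:validation:Minimum=1
	Replicas int32 `json:"replicas,omitempty"`

	// Image is the virtual-kubelet container image to run (hypothetical field).
	Image string `json:"image,omitempty"`
}

// CubeVKStatus defines the observed state of CubeVK
type CubeVKStatus struct {
	// ReadyReplicas is the number of instances that are ready (hypothetical field).
	ReadyReplicas int32 `json:"readyReplicas,omitempty"`
}
```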
##### **x. Summary**
- git clone
- go mod init gitlab.ctyun.cn/ctg-dcos/cubevk-operator
- kubebuilder init --domain ctyun.cn
- kubebuilder create api --group ccse --version v1 --kind CubeVK
- Edit `api/v1/cubevk_types.go`, then run `make manifests` to generate the CRD and sample CR files
- `make install`: install the CRD into Kubernetes. This step also installs kustomize into the bin directory; to avoid errors you can install it manually beforehand (`make uninstall`: remove the CRD)
- Edit `config/rbac/role.yaml` to grant full permissions on Deployments (see the RBAC marker sketch after this list)
- Edit the controller
- `make build` # build the controller binary
- `make docker-build` # build the image
- `make docker-push` # push the image
- `make deploy`: deploy the controller to Kubernetes (`make undeploy`: remove the controller)
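A note on the RBAC step above: instead of editing `config/rbac/role.yaml` by hand, the usual kubebuilder workflow is to add RBAC markers next to the existing ones above the Reconcile method and let `make manifests` regenerate `role.yaml`. A sketch (the `apps`/`deployments` group and verb list here are assumptions chosen to match the Deployment permissions mentioned in the list):
```
// In controllers/cubevk_controller.go, alongside the existing markers above Reconcile.
// Running `make manifests` regenerates config/rbac/role.yaml from these markers.
//+kubebuilder:rbac:groups=apps,resources=deployments,verbs=get;list;watch;create;update;patch;delete
//+kubebuilder:rbac:groups=apps,resources=deployments/status,verbs=get
```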
### **Windows**
---
##### **1. Install kubebuilder**
In the go.mod of the `github.com/kubernetes-sigs/kubebuilder` project, the module is declared as `sigs.k8s.io/kubebuilder/v3`, so the package path is the one below. Also note that `go install` can only install a main package:
```
$ go install -v sigs.k8s.io/kubebuilder/v3/cmd@v3.9.1
```
The command above installs kubebuilder into `$GOBIN/` (usually `$GOPATH/bin`) under the file name cmd.exe; we need to rename it to kubebuilder.exe.
Also, make sure `$GOPATH/bin` is on your PATH.
##### **2. Install controller-gen**
Based on the logs from the Linux run, execute the following command to install controller-gen.exe into the GOBIN directory:
```
$ go install sigs.k8s.io/controller-tools/cmd/controller-gen@v0.11.1
```
##### **3. Install Cygwin**
Install Cygwin (make sure the make and test commands are installed).
##### **4. Follow the Linux walkthrough**
Then, inside Cygwin, run the commands from the Linux walkthrough. After `kubebuilder init` completes, the Makefile needs a few changes. Find the following content and modify it as described:
1. Remove the controller-gen dependency from the manifests and generate targets.
2. Replace `$(CONTROLLER_GEN)` with `controller-gen`.
3. Change `hack\\boilerplate.go.txt` to `hack/boilerplate.go.txt`.
```
.PHONY: manifests
manifests: controller-gen ## Generate WebhookConfiguration, ClusterRole and CustomResourceDefinition objects.
	$(CONTROLLER_GEN) rbac:roleName=manager-role crd webhook paths="./..." output:crd:artifacts:config=config/crd/bases

.PHONY: generate
generate: controller-gen ## Generate code containing DeepCopy, DeepCopyInto, and DeepCopyObject method implementations.
	$(CONTROLLER_GEN) object:headerFile="hack\\boilerplate.go.txt" paths="./..."
```
Change it to the following:
```
.PHONY: manifests
manifests: ## Generate WebhookConfiguration, ClusterRole and CustomResourceDefinition objects.
	controller-gen rbac:roleName=manager-role crd webhook paths="./..." output:crd:artifacts:config=config/crd/bases

.PHONY: generate
generate: ## Generate code containing DeepCopy, DeepCopyInto, and DeepCopyObject method implementations.
	controller-gen object:headerFile="hack/boilerplate.go.txt" paths="./..."
```
In the end this approach still does not work: `api/types/zz_generated.go` cannot be generated.
### **FAQ**
1. On UOS, `kubebuilder init` reports an error like the following:
```
go get https://goproxy.cn/xxxx/xxx/xxx dial tcp 150.138.110.41:443 socket operation not permitted
```
A: This is related to the file-handle (open files) limit; after increasing it, the error no longer appeared.
### **References**
---
* https://tonybai.com/2022/08/15/developing-kubernetes-operators-in-go-part1/
* https://book.kubebuilder.io/quick-start.html#installation