
Developing an operator with kubebuilder (a getting-started guide)

2022-06-26 16:15:00 chenxy02

Original article: Using kubebuilder to understand k8s CRDs (originally published on Zhihu)

To understand k8s CRDs, you first need to understand the k8s controller pattern.

 

  • For example, the deployment controller in kube-controller-manager is handed three informer objects at initialization, watching Deployments, ReplicaSets and Pods respectively
  • It first lists these objects and caches them locally, then watches for changes, which amounts to incremental updates (see the informer sketch after the code below)
func startDeploymentController(ctx ControllerContext) (controller.Interface, bool, error) {
    dc, err := deployment.NewDeploymentController(
        ctx.InformerFactory.Apps().V1().Deployments(),
        ctx.InformerFactory.Apps().V1().ReplicaSets(),
        ctx.InformerFactory.Core().V1().Pods(),
        ctx.ClientBuilder.ClientOrDie("deployment-controller"),
    )
    if err != nil {
        return nil, true, fmt.Errorf("error creating Deployment controller: %v", err)
    }
    go dc.Run(int(ctx.ComponentConfig.DeploymentController.ConcurrentDeploymentSyncs), ctx.Stop)
    return nil, true, nil
}
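The list-then-watch flow described in the bullets above can be reproduced with a small standalone client-go program. The sketch below is not from the original article: it assumes a kubeconfig at the default location and simply prints Deployment events once the initial list has filled the local cache.

package main

import (
    "fmt"
    "time"

    appsv1 "k8s.io/api/apps/v1"
    "k8s.io/client-go/informers"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/cache"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    // Assumes a kubeconfig at the default location; adjust as needed.
    config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    if err != nil {
        panic(err)
    }
    clientset := kubernetes.NewForConfigOrDie(config)

    // The shared informer factory lists objects once, caches them locally,
    // and keeps the cache up to date through watch events (incremental updates).
    factory := informers.NewSharedInformerFactory(clientset, 30*time.Second)
    deployInformer := factory.Apps().V1().Deployments().Informer()

    deployInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
        AddFunc: func(obj interface{}) {
            d := obj.(*appsv1.Deployment)
            fmt.Println("added:", d.Namespace+"/"+d.Name)
        },
        UpdateFunc: func(oldObj, newObj interface{}) {
            d := newObj.(*appsv1.Deployment)
            fmt.Println("updated:", d.Namespace+"/"+d.Name)
        },
        DeleteFunc: func(obj interface{}) {
            fmt.Println("deleted a deployment")
        },
    })

    stopCh := make(chan struct{})
    defer close(stopCh)
    factory.Start(stopCh)
    factory.WaitForCacheSync(stopCh)
    <-stopCh
}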
  • Internally the controller then runs a sync, which is in fact the Reconcile loop (the reconciliation cycle): to put it plainly, it compares an object's actual state with its expected state and performs the corresponding create/update/delete operations, for example scaling a Deployment up or down

  • To calculate the difference, take the number of Pods the Deployment is allowed to have and subtract the total replica count of all active ReplicaSets
    • If the difference is positive, a scale-up is needed, and the ReplicaSets are sorted newest-first with ReplicaSetsBySizeNewer
    • If the difference is negative, a scale-down is needed, and the ReplicaSets are sorted oldest-first with ReplicaSetsBySizeOlder
  • The difference is calculated as follows (a toy walk-through follows the code)

deploymentReplicasToAdd := allowedSize - allRSsReplicas

var scalingOperation string
switch {
case deploymentReplicasToAdd > 0:
    sort.Sort(controller.ReplicaSetsBySizeNewer(allRSs))
    scalingOperation = "up"
case deploymentReplicasToAdd < 0:
    sort.Sort(controller.ReplicaSetsBySizeOlder(allRSs))
    scalingOperation = "down"
}
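To make the arithmetic concrete, here is a toy, self-contained walk-through with made-up numbers (not code from the Deployment controller itself): an allowedSize of 5 against 3 existing replicas gives +2, so the Deployment scales up.

package main

import "fmt"

func main() {
    // Toy numbers for illustration only.
    allowedSize := int32(5)    // replicas the Deployment is allowed to have
    allRSsReplicas := int32(3) // replicas currently held by all active ReplicaSets

    deploymentReplicasToAdd := allowedSize - allRSsReplicas
    switch {
    case deploymentReplicasToAdd > 0:
        fmt.Println("scale up by", deploymentReplicasToAdd, "(newest ReplicaSets first)")
    case deploymentReplicasToAdd < 0:
        fmt.Println("scale down by", -deploymentReplicasToAdd, "(oldest ReplicaSets first)")
    default:
        fmt.Println("nothing to do")
    }
}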

Once you understand the k8s controller model, a CRD is simply a resource definition paired with a controller that you write yourself.

  • It watches and processes the custom resources you define yourself
  • The diagram in the original article illustrates this vividly; a minimal code sketch follows below
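As a rough sketch of what "a controller you write yourself" looks like with controller-runtime (the library kubebuilder scaffolds code for), the skeleton below wires a reconciler to the custom resource it watches. It is illustrative only; webappv1.Guestbook refers to the API type scaffolded later in this article, and the real generated code contains more boilerplate.

package controllers

import (
    "context"

    ctrl "sigs.k8s.io/controller-runtime"
    "sigs.k8s.io/controller-runtime/pkg/client"
    "sigs.k8s.io/controller-runtime/pkg/log"

    webappv1 "my.domain/guestbook/api/v1"
)

// GuestbookReconciler reconciles Guestbook objects.
type GuestbookReconciler struct {
    client.Client
}

// Reconcile compares the actual state of a Guestbook with its desired state.
func (r *GuestbookReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
    log.FromContext(ctx).Info("reconciling", "object", req.NamespacedName)
    return ctrl.Result{}, nil
}

// SetupWithManager registers the controller so the manager watches Guestbook objects
// and calls Reconcile whenever one of them changes.
func (r *GuestbookReconciler) SetupWithManager(mgr ctrl.Manager) error {
    return ctrl.NewControllerManagedBy(mgr).
        For(&webappv1.Guestbook{}).
        Complete(r)
}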

Install Kubebuilder

wget https://github.com/kubernetes-sigs/kubebuilder/releases/download/v3.1.0/kubebuilder_linux_amd64
chmod +x kubebuilder_linux_amd64
mv kubebuilder_linux_amd64 /usr/local/bin/kubebuilder

Create the scaffolding project

  • Create a directory and initialize it as the project's code repository
mkdir -p ~/projects/guestbook
cd ~/projects/guestbook
kubebuilder init --domain my.domain --repo my.domain/guestbook
  • View the generated files in this directory
# tree
.
├── config
│   ├── default
│   │   ├── kustomization.yaml
│   │   ├── manager_auth_proxy_patch.yaml
│   │   └── manager_config_patch.yaml
│   ├── manager
│   │   ├── controller_manager_config.yaml
│   │   ├── kustomization.yaml
│   │   └── manager.yaml
│   ├── prometheus
│   │   ├── kustomization.yaml
│   │   └── monitor.yaml
│   └── rbac
│       ├── auth_proxy_client_clusterrole.yaml
│       ├── auth_proxy_role_binding.yaml
│       ├── auth_proxy_role.yaml
│       ├── auth_proxy_service.yaml
│       ├── kustomization.yaml
│       ├── leader_election_role_binding.yaml
│       ├── leader_election_role.yaml
│       ├── role_binding.yaml
│       └── service_account.yaml
├── Dockerfile
├── go.mod
├── go.sum
├── hack
│   └── boilerplate.go.txt
├── main.go
├── Makefile
└── PROJECT

6 directories, 24 files
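Among these files, main.go is the entry point: it builds a controller-runtime manager, registers controllers, and starts them. A heavily trimmed sketch of what the scaffolded main.go does (details vary between kubebuilder versions) looks roughly like this:

package main

import (
    "os"

    "k8s.io/apimachinery/pkg/runtime"
    clientgoscheme "k8s.io/client-go/kubernetes/scheme"
    ctrl "sigs.k8s.io/controller-runtime"
)

var scheme = runtime.NewScheme()

func init() {
    // Register the built-in Kubernetes types; custom API types are added here later.
    _ = clientgoscheme.AddToScheme(scheme)
}

func main() {
    // The manager owns the shared caches and clients and runs all registered controllers.
    mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{Scheme: scheme})
    if err != nil {
        os.Exit(1)
    }

    // Controllers created by `kubebuilder create api` get wired up here.

    // Blocks until a stop signal (SIGTERM/SIGINT) is received.
    if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil {
        os.Exit(1)
    }
}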

Create the API

  • Create a new API with group/version webapp/v1
  • Create a new Kind called Guestbook
kubebuilder create api --group webapp --version v1 --kind Guestbook
  • Looking at the result, many new files have been added; api/v1/guestbook_types.go is where the API is defined, and controllers/guestbook_controller.go holds the reconciliation logic (a sketch of the generated types follows the tree below)
# tree
.
├── api
│   └── v1
│       ├── groupversion_info.go
│       ├── guestbook_types.go
│       └── zz_generated.deepcopy.go
├── bin
│   └── controller-gen
├── config
│   ├── crd
│   │   ├── kustomization.yaml
│   │   ├── kustomizeconfig.yaml
│   │   └── patches
│   │       ├── cainjection_in_guestbooks.yaml
│   │       └── webhook_in_guestbooks.yaml
│   ├── default
│   │   ├── kustomization.yaml
│   │   ├── manager_auth_proxy_patch.yaml
│   │   └── manager_config_patch.yaml
│   ├── manager
│   │   ├── controller_manager_config.yaml
│   │   ├── kustomization.yaml
│   │   └── manager.yaml
│   ├── prometheus
│   │   ├── kustomization.yaml
│   │   └── monitor.yaml
│   ├── rbac
│   │   ├── auth_proxy_client_clusterrole.yaml
│   │   ├── auth_proxy_role_binding.yaml
│   │   ├── auth_proxy_role.yaml
│   │   ├── auth_proxy_service.yaml
│   │   ├── guestbook_editor_role.yaml
│   │   ├── guestbook_viewer_role.yaml
│   │   ├── kustomization.yaml
│   │   ├── leader_election_role_binding.yaml
│   │   ├── leader_election_role.yaml
│   │   ├── role_binding.yaml
│   │   └── service_account.yaml
│   └── samples
│       └── webapp_v1_guestbook.yaml
├── controllers
│   ├── guestbook_controller.go
│   └── suite_test.go
├── Dockerfile
├── go.mod
├── go.sum
├── hack
│   └── boilerplate.go.txt
├── main.go
├── Makefile
└── PROJECT

13 directories, 37 files
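For reference, the core of the scaffolded api/v1/guestbook_types.go looks roughly like the sketch below (trimmed; the generated file carries extra comments, an init() that registers the types with the scheme, and a placeholder Foo field you are expected to replace with your own spec fields):

// Trimmed sketch of api/v1/guestbook_types.go: the spec describes the desired
// state, the status the observed state, and the kubebuilder markers drive CRD
// generation via controller-gen.
package v1

import (
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// GuestbookSpec defines the desired state of Guestbook.
type GuestbookSpec struct {
    // Foo is the placeholder field generated by the scaffold; replace it with your own fields.
    Foo string `json:"foo,omitempty"`
}

// GuestbookStatus defines the observed state of Guestbook.
type GuestbookStatus struct {
}

//+kubebuilder:object:root=true
//+kubebuilder:subresource:status

// Guestbook is the Schema for the guestbooks API.
type Guestbook struct {
    metav1.TypeMeta   `json:",inline"`
    metav1.ObjectMeta `json:"metadata,omitempty"`

    Spec   GuestbookSpec   `json:"spec,omitempty"`
    Status GuestbookStatus `json:"status,omitempty"`
}

//+kubebuilder:object:root=true

// GuestbookList contains a list of Guestbook.
type GuestbookList struct {
    metav1.TypeMeta `json:",inline"`
    metav1.ListMeta `json:"metadata,omitempty"`
    Items           []Guestbook `json:"items"`
}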
  • Add a log line in the reconcile function, located in controllers/guestbook_controller.go
func (r *GuestbookReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
    logger := log.FromContext(ctx)

    // your logic here
    logger.Info("print_req", "req", req.String())
    return ctrl.Result{}, nil
}
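If the reconciler should also read the object that triggered the request, a small hedged extension of the generated Reconcile could look like the sketch below; it assumes webappv1 is the import alias for the project's api/v1 package and uses the controller-runtime client package for IgnoreNotFound.

func (r *GuestbookReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
    logger := log.FromContext(ctx)

    // Fetch the Guestbook instance named in the request; ignore not-found
    // errors, which occur after the object has been deleted.
    var guestbook webappv1.Guestbook
    if err := r.Get(ctx, req.NamespacedName, &guestbook); err != nil {
        return ctrl.Result{}, client.IgnoreNotFound(err)
    }

    logger.Info("observed guestbook", "name", req.NamespacedName.String(), "spec", guestbook.Spec)
    return ctrl.Result{}, nil
}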

Deploy to the k8s cluster

  • Build the image
make docker-build IMG=guestbook:v1.0     # actually runs: docker build -t guestbook:v1.0 .
  • You will usually need to modify the Dockerfile to set a Go module proxy (for example, ENV GOPROXY=https://goproxy.cn,direct), so that go mod download does not time out

  • Then the image has to be pushed to the nodes manually; if the underlying runtime is containerd (using ctr), you need to docker save the image and import it
docker save guestbook:v1.0 > a.tar
ctr --namespace k8s.io images import a.tar
  • Because the project uses kube-rbac-proxy, that image may also fail to pull and needs to be handled manually in the same way

  • Now deploy the CRD and the controller
make deploy IMG=guestbook:v1.0 # actually runs: kustomize build config/default | kubectl apply -f -
  • Check the objects deployed under the guestbook-system namespace

  • Check kubectl api-resources to confirm the new resource type is registered

  • Deploy guestbook
kubectl apply -f config/samples/webapp_v1_guestbook.yaml
# guestbook.webapp.my.domain/guestbook-cxy created
  • After creating the CR, the log added in the reconcile function prints the incoming request, confirming that the controller observed it

 


Copyright notice
This article was written by [chenxy02]; please include a link to the original when reposting.
https://yzsam.com/2022/177/202206261607015522.html