
Create a Kubernetes cluster on OpenStack with Kubeadm

Manual cluster creation on OpenStack with kubeadm, including the deployment of additional components such as the Kubernetes Dashboard UI and an Ingress Controller.

https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/

OpenStack - create servers

export OS_REGION_NAME=xxxx

openstack keypair create certif_k8s --public-key cert_id_rsa.pub
+-------------+-------------------------------------------------+
| Field       | Value                                           |
+-------------+-------------------------------------------------+
| created_at  | None                                            |
| fingerprint | 42:d7:9f:c9:bc:c1:00:3b:45:26:92:17:42:fb:fd:7e |
| id          | certif_k8s                                      |
| is_deleted  | None                                            |
| name        | certif_k8s                                      |
| type        | ssh                                             |
| user_id     | c4b0870f499e47fcb10f4684003fc3a2                |
+-------------+-------------------------------------------------+


openstack server group create certification --policy anti-affinity
+----------+--------------------------------------+
| Field    | Value                                |
+----------+--------------------------------------+
| id       | a786a8bd-d316-4b68-b1a6-1ee062a2d508 |
| members  |                                      |
| name     | certification                        |
| policies | anti-affinity                        |
+----------+--------------------------------------+

Create security group

security
    openstack security group create  certification
    id: 510079d6-ba0b-4141-8376-67d672acdd21

Add security rules to security group

According to https://kubernetes.io/docs/reference/ports-and-protocols/

rules
openstack security group rule create --dst-port 443 --protocol tcp \
--description "allow https access for all" --remote-ip 0.0.0.0/0 certification

openstack security group rule create --dst-port 80 --protocol tcp \
--description "allow http access for all" --remote-ip 0.0.0.0/0 certification
rules
openstack security group rule create --dst-port 6443 --protocol tcp \
--description "Master API server" --remote-ip 10.0.0.0/8 certification

openstack security group rule create --dst-port 30000:32767  --protocol tcp \
--description "Worker NodePort" --remote-ip 10.0.0.0/8 certification

openstack security group rule create --dst-port 22 --protocol tcp \
--description "SSH access" --remote-ip 10.0.0.0/8 certification
rules
openstack subnet list # to get the subnet CIDR
+--------------------------------------+--------------+--------------------------------------+----------------+
| ID                                   | Name         | Network                              | Subnet         |
+--------------------------------------+--------------+--------------------------------------+----------------+
| 16a5626e-8645-4f16-98ba-7a896e04bc10 | private-flat | 810b8025-8203-45da-adeb-2c83968bfb83 | 10.172.88.0/21 |
+--------------------------------------+--------------+--------------------------------------+----------------+

openstack security group rule create --description "Full access between nodes" \
--remote-ip 10.172.88.0/21 certification
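
A quick way to double-check what ended up in the group is to list its rules:

openstack security group rule list certification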

Create servers using network private-flat

Creating a dedicated network would also require creating a subnet and a router, so the existing default network private-flat is used instead.

Create servers
  openstack server create  --min 3  --max 3 --image ubuntu-20.04 --flavor ug4.medium --network private-flat \
  --security-group certification --key-name certif_k8s --hint group=a786a8bd-d316-4b68-b1a6-1ee062a2d508 --wait "certif"

  +--------------------------------------+----------+--------+----------------------------+--------------+------------+
  | ID                                   | Name     | Status | Networks                   | Image        | Flavor     |
  +--------------------------------------+----------+--------+----------------------------+--------------+------------+
  | f41c1c0b-636c-4a27-b295-2aa8508e8973 | certif-1 | ACTIVE | private-flat=10.172.89.94  | ubuntu-20.04 | ug4.medium |
  | 39d97161-895d-40d6-93af-c37d9bac2ac5 | certif-2 | ACTIVE | private-flat=10.172.91.49  | ubuntu-20.04 | ug4.medium |
  | be60f3f3-e595-4885-aac9-632833076d4e | certif-3 | ACTIVE | private-flat=10.172.90.186 | ubuntu-20.04 | ug4.medium |
  +--------------------------------------+----------+--------+----------------------------+--------------+------------+

kubeadm - Setup Kubernetes Cluster

Simple cluster: 1 control plane node + 2 worker nodes

  • Create the cluster with kubeadm (kubeadm init / kubeadm join)

  • Network plugin chosen: Calico

    • So --pod-network-cidr=192.168.0.0/16 must be passed to kubeadm init

Prerequisites
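
Below is a minimal sketch of the node preparation described in the kubeadm installation guide linked at the top, to be run on each of the three servers. Package sources are an assumption (the Kubernetes apt repository and a containerd package must be available); adapt to your distribution and target version.

    # disable swap (required by the kubelet)
    sudo swapoff -a
    sudo sed -i '/ swap / s/^/#/' /etc/fstab

    # kernel modules and sysctl settings needed for pod networking
    cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
    overlay
    br_netfilter
    EOF
    sudo modprobe overlay && sudo modprobe br_netfilter

    cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
    net.bridge.bridge-nf-call-iptables  = 1
    net.bridge.bridge-nf-call-ip6tables = 1
    net.ipv4.ip_forward                 = 1
    EOF
    sudo sysctl --system

    # container runtime + kubeadm, kubelet, kubectl
    # (assumes the Kubernetes apt repository is already configured)
    sudo apt-get update
    sudo apt-get install -y containerd kubelet kubeadm kubectl
    sudo apt-mark hold kubelet kubeadm kubectl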

Setup the cluster

On certif-1, initialize the cluster:

kubeadm init
  kubeadm init --control-plane-endpoint=10.172.89.94 --pod-network-cidr=192.168.0.0/16
  # --control-plane-endpoint allows extending the control plane to an HA setup later

  # If you hit the error "[ERROR CRI]: container runtime is not running":
  rm -rf /etc/containerd/ && systemctl restart containerd.service
  # then run the kubeadm init command again
Output
    Your Kubernetes control-plane has initialized successfully!

    To start using your cluster, you need to run the following as a regular user:

        mkdir -p $HOME/.kube
        sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
        sudo chown $(id -u):$(id -g) $HOME/.kube/config

    Alternatively, if you are the root user, you can run:

        export KUBECONFIG=/etc/kubernetes/admin.conf

    You should now deploy a pod network to the cluster.
    Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
        https://kubernetes.io/docs/concepts/cluster-administration/addons/

    You can now join any number of control-plane nodes by copying certificate authorities
    and service account keys on each node and then running the following as root:

        kubeadm join 10.172.89.94:6443 --token q1zxoz.17z6x5lxwo9kklbw \
        --discovery-token-ca-cert-hash sha256:2f10e19e0d019eee3a2083e62b9831f44dcc22097999de2728955374afb17d26 \
        --control-plane

    Then you can join any number of worker nodes by running the following on each as root:

    kubeadm join 10.172.89.94:6443 --token q1zxoz.17z6x5lxwo9kklbw \
    --discovery-token-ca-cert-hash sha256:2f10e19e0d019eee3a2083e62b9831f44dcc22097999de2728955374afb17d26

Deploy Calico Network On certif-1

Setup calico network
    # directly on certif-1

    # kubectl --kubeconfig /etc/kubernetes/admin.conf apply -f https://projectcalico.docs.tigera.io/manifests/tigera-operator.yaml
    # kubectl --kubeconfig /etc/kubernetes/admin.conf apply  -f https://projectcalico.docs.tigera.io/manifests/custom-resources.yaml

    # or apply the manifests downloaded locally
    cd components/calico
    kubectl --kubeconfig ~/Certification/.config/kube_certif create -f tigera-operator.yaml
    kubectl --kubeconfig ~/Certification/.config/kube_certif create -f custom-resources.yaml

    kubectl get pods -n calico-system

calico error

If the **calico-node** pods are not Ready (`0/1`) with the following error:

```
kubelet    Readiness probe failed: calico/node is not ready: BIRD is not ready:
Error querying BIRD: unable to connect to BIRDv4 socket: dial unix /var/run/calico/bird.ctl: connect: connection refused
```

Check whether the nodes are able to communicate via the port in the calico ["Network Configuration"](https://projectcalico.docs.tigera.io/getting-started/kubernetes/requirements){:target="_blank"} section.

If the issue persists, it could be a **nodeAddressAutodetectionV4** problem (especially if the server has several network interfaces),
as the calico-node pods use the `first-found` strategy by default.

You can edit `custom-resources.yaml` to add the `nodeAddressAutodetectionV4` key with the desired value in the [Installation resource](https://projectcalico.docs.tigera.io/networking/ip-autodetection){:target="_blank"}, as shown in the sketch below.
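
If the operator and Installation resource are already deployed, the same setting can also be applied afterwards by patching the Installation object. A sketch, assuming ens3 is the interface that carries the node's 10.172.88.0/21 address:

    kubectl --kubeconfig ~/Certification/.config/kube_certif patch installation default --type merge \
      -p '{"spec":{"calicoNetwork":{"nodeAddressAutodetectionV4":{"interface":"ens3"}}}}'
    # the operator rolls the calico-node pods, which then re-detect their node IP on ens3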

On worker node - Join cluster

kubeadm join
    kubeadm join 10.172.89.94:6443 --token q1zxoz.17z6x5lxwo9kklbw \
    --discovery-token-ca-cert-hash sha256:2f10e19e0d019eee3a2083e62b9831f44dcc22097999de2728955374afb17d26
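
If the bootstrap token from the init output has expired (tokens are valid for 24 hours by default), print a fresh join command from the control plane node:

    kubeadm token create --print-join-command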

Assign the worker role to each worker node: kubectl label node certif-3 node-role.kubernetes.io/worker=""
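
Once both workers have joined and are labelled, the cluster should look roughly like this (output illustrative):

    kubectl get nodes
    # NAME       STATUS   ROLES           AGE   VERSION
    # certif-1   Ready    control-plane   ...   v1.24.x
    # certif-2   Ready    worker          ...   v1.24.x
    # certif-3   Ready    worker          ...   v1.24.x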

Ingress Controller Nginx

Deploy Nginx Ingress Controller: https://kubernetes.github.io/ingress-nginx/deploy/

Deploy With Helm Chart
    # https://github.com/kubernetes/ingress-nginx/tree/main/charts/ingress-nginx

    helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
    helm repo update

    # Download Chart locally and then deploy
    helm pull  ingress-nginx/ingress-nginx --version 4.1.4 --untar

    # Update values.yaml to enable hostPort in order to expose the controller on host ports 80 & 443
        ## Use host ports 80 and 443
        ## Disabled by default
        hostPort:
            # -- Enable 'hostPort' or not
            enabled: true
            ports:
            # -- 'hostPort' http port
            http: 80
            # -- 'hostPort' https port
            https: 443

    # Deploy
    helm  --kubeconfig ../.config/kube_certif install ingress-ctrl-nginx -n ingress-ctrl --create-namespace   ingress-nginx/
When hostPort is enabled
    # in the containers section of the deployment, you can see
        name: controller
        ports:
        - containerPort: 80
          hostPort: 80
          name: http
          protocol: TCP
        - containerPort: 443
          hostPort: 443
          name: https
          protocol: TCP
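
A quick check that the controller actually answers on the node host ports (WORKER_IP being the private IP of a node running a controller pod):

    curl -k https://$WORKER_IP/
    # returns the ingress-nginx 404 page as long as no Ingress rule matches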

UI Kubernetes Dashboard

Deploy with Helm Chart

  • https://github.com/kubernetes/dashboard

  • https://artifacthub.io/packages/helm/k8s-dashboard/kubernetes-dashboard

Deploy Kubernetes dashboard
    helm repo add kubernetes-dashboard https://kubernetes.github.io/dashboard/
    helm repo update

    # Download Chart locally and then deploy
    helm pull kubernetes-dashboard/kubernetes-dashboard --version 5.7.0 --untar

    helm  --kubeconfig ../.config/kube_certif install kubernetes-dashboard  -n kubernetes-dashboard --create-namespace  kubernetes-dashboard/

    # or install directly from the repo with a custom values file:
    helm install kubernetes-dashboard kubernetes-dashboard/kubernetes-dashboard --version 5.7.0  --create-namespace \
    -n kubernetes-dashboard  --description "K8S Dashboard" \
    -f dashboard-values.yaml

Test it with

  • kubectl proxy

  • kubectl port-forward -n kubernetes-dashboard service/kubernetes-dashboard 8080:443
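
For example, with the port-forward option and the default chart values (the service exposes HTTPS on port 443 with a self-signed certificate):

    kubectl --kubeconfig ../.config/kube_certif port-forward -n kubernetes-dashboard service/kubernetes-dashboard 8080:443
    # then open https://localhost:8080 and sign in with a token (see below)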

Create Ingress for UI

Certificate
openssl req -x509 -new -newkey rsa:4096 -sha256 -nodes -out tls.crt -keyout tls.key \
-subj "/C=FR/ST=France/L=Nantes/O=Enoks/CN=certification.enoks.org/emailAddress=email" -addext "subjectAltName = DNS:certification.enoks.org"

kubectl  --kubeconfig ../.config/kube_certif create secret tls kubernetes-dashboard-tls --cert=tls.crt  --key=tls.key -n kubernetes-dashboard
Nginx Ingress Manifest
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/configuration-snippet: |
      proxy_set_header Accept-Encoding "";
      sub_filter '<base href="/">' '<base href="/dashboard/">';
      sub_filter_once on;

      # To allow user to connect without any authentication, generate default long-lived token and add it like
      # proxy_set_header Authorization "Bearer xxx";
      # proxy_pass_header Authorization;
    nginx.ingress.kubernetes.io/rewrite-target: /$2
  labels:
    app.kubernetes.io/name: kubernetes-dashboard
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  rules:
  - host: certification.enoks.org
    http:
      paths:
      - backend:
          service:
            name: kubernetes-dashboard
            port:
              number: 80
        path: /dashboard(/|$)(.*)
        pathType: Prefix
  tls:
  - hosts:
    - certification.enoks.org
  #  secretName: kubernetes-dashboard-certs
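
Save the manifest and apply it (dashboard-ingress.yaml is just an example file name):

kubectl --kubeconfig ../.config/kube_certif apply -f dashboard-ingress.yaml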

Access to UI

Access
echo "$WORKER_IP certification.enoks.org" >> /etc/hosts

https://certification.enoks.org/dashboard/#

Create admin ServiceAccount and ClusterRoleBinding

https://github.com/kubernetes/dashboard/blob/master/docs/user/access-control/creating-sample-user.md

Create SA - Role - RoleBinding
    # create admin serviceaccount
    kubectl  apply -f components/admin-ui-svc.yaml
    serviceaccount/admin-user created

    # create the admin ClusterRoleBinding
    kubectl  apply -f components/admin-ui-role.yml
    clusterrolebinding.rbac.authorization.k8s.io/admin-user created

    # Generate short-lived  token
    kubectl -n kubernetes-dashboard create token admin-user
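
The two manifests referenced above are not reproduced on this page; based on the sample-user documentation linked above, they would contain something along these lines (a sketch, the split into these two files is an assumption):

    # components/admin-ui-svc.yaml (sketch)
    cat > components/admin-ui-svc.yaml <<EOF
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: admin-user
      namespace: kubernetes-dashboard
    EOF

    # components/admin-ui-role.yml (sketch): bind the built-in cluster-admin ClusterRole
    cat > components/admin-ui-role.yml <<EOF
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: admin-user
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: cluster-admin
    subjects:
    - kind: ServiceAccount
      name: admin-user
      namespace: kubernetes-dashboard
    EOF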

Create long-lived Token

Create a Secret of type kubernetes.io/service-account-token bound to the service account; the token will be generated and stored in that Secret.

After the Secret is created, run this command to get the token: kubectl get secret admin-user -n kubernetes-dashboard -o jsonpath="{.data.token}" | base64 -d

Create secret with annotation
    apiVersion: v1
    kind: Secret
    metadata:
        name: admin-user
        namespace: kubernetes-dashboard
        annotations:
            kubernetes.io/service-account.name: "admin-user"   
    type: kubernetes.io/service-account-token
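
Apply it like any other manifest (the file name below is just an example), then read the token back with the kubectl get secret command above:

    kubectl apply -f components/admin-user-secret.yaml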

Deploy Cinder-CSI - Persistent Volume

CSI Cinder Doc

Deploy
    helm repo add cpo https://kubernetes.github.io/cloud-provider-openstack
    helm repo update

    # values file
    cat > cinder-csi_values.yaml <<EOF
    storageClass:
      enabled: true
      delete:
        isDefault: true
        allowVolumeExpansion: true
      retain:
        isDefault: false
        allowVolumeExpansion: true
    secret:
      enabled: true
      create: true
      name: csi-cloud-config
      data:
        cloud-config: |-
          [Global]
          auth-url="{{ openstack cloud API endpoint }}"
          username="{{ openstack cloud username }}"
          password="{{ openstack cloud password }}"
          domain-name=Default
          region="{{ openstack cloud region name }}"
          tenant-name="{{ openstack cloud project name }}"
    EOF

    # check chart version
    helm search repo cpo/openstack-cinder-csi --versions
    NAME                        CHART VERSION   APP VERSION DESCRIPTION                   
    cpo/openstack-cinder-csi    2.2.0           v1.24.0     Cinder CSI Chart for OpenStack
    cpo/openstack-cinder-csi    2.1.0           v1.23.0     Cinder CSI Chart for OpenStack
    cpo/openstack-cinder-csi    1.4.9           v1.22.0     Cinder CSI Chart for OpenStack
    cpo/openstack-cinder-csi    1.4.8           latest      Cinder CSI Chart for OpenStack
    cpo/openstack-cinder-csi    1.3.9           v1.21.0     Cinder CSI Chart for OpenStack

    # version 2.2.0 doesn't work for me, error: still connecting to unix:///csi/csi.sock
    # K8s: Server Version: version.Info{Major:"1", Minor:"24"

    # Deploy
    helm install cinder-csi cpo/openstack-cinder-csi --values components/cinder-csi_values.yaml -n kube-system --atomic --version 1.4.9
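
To check that dynamic provisioning works end to end, create a throwaway PVC and watch it bind; this relies on the default storage class created by the chart (isDefault: true in the values above):

    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: csi-test-pvc
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi
    EOF

    kubectl get pvc csi-test-pvc   # should become Bound once a Cinder volume has been provisioned
    kubectl delete pvc csi-test-pvc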

Test Permission Manager

RBAC Manager
    # https://github.com/sighupio/permission-manager/blob/master/docs/installation.md

    kubectl create ns permission-manager
    kubectl apply -f https://github.com/sighupio/permission-manager/releases/download/v1.7.1-rc1/deploy.yml -n  permission-manager
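
Then check that the deployment came up before using the UI:

    kubectl -n permission-manager get pods,svc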

Delete terminating namespace


Example: a namespace stuck in Terminating with the condition NamespaceDeletionDiscoveryFailure.

kubectl proxy

Then open new terminal

  1. Get the namespace manifest and remove "kubernetes" from the finalizers in ns.json
    kubectl get ns kubernetes-dashboard -o json > ns.json

Edit the finalizers field so that it reads: finalizers: []

  2. Send the updated ns.json
curl -k -H "Content-Type: application/json" -X PUT \
 http://127.0.0.1:8001/api/v1/namespaces/kubernetes-dashboard/finalize -d @ns.json

Alternatively, you can perform steps 1 and 2 in a single command if you have jq installed:

NS=kubernetes-dashboard
curl -X PUT -H "Content-Type: application/json" \
http://127.0.0.1:8001/api/v1/namespaces/${NS}/finalize \
-d "$(kubectl get ns $NS -o json | jq '.spec.finalizers =  []')"