Kubernetes Archives - TEKSpace Blog

How to create docker registry credentials using kubectl
Updating a Docker registry secret (often named regcred in Kubernetes environments) with new credentials can be essential for workflows that need access to private registries for pulling images. This process involves creating a new secret with the updated credentials and then patching or updating the deployments or pods that use this secret.

Here’s a step-by-step guide to do it:

Step 1: Create a New Secret with Updated Credentials

  1. Log in to Docker Registry: Before updating the secret, ensure you’re logged into the Docker registry from your command line interface so that Kubernetes can access it.
  2. Create or Update the Secret: Use the kubectl create secret command to create a new secret or update an existing one with your Docker credentials. If you're updating an existing secret, you might need to delete the old secret first. To create a new secret (or replace an existing one):
# --docker-server: the URL of your Docker registry
# --docker-username / --docker-password / --docker-email: your registry credentials
# --namespace: the Kubernetes namespace where the secret will be used
kubectl create secret docker-registry regcred \
  --docker-server=<YOUR_REGISTRY_SERVER> \
  --docker-username=<YOUR_USERNAME> \
  --docker-password=<YOUR_PASSWORD> \
  --docker-email=<YOUR_EMAIL> \
  --namespace=<NAMESPACE> \
  --dry-run=client -o yaml | kubectl apply -f -

Replace <YOUR_REGISTRY_SERVER>, <YOUR_USERNAME>, <YOUR_PASSWORD>, <YOUR_EMAIL>, and <NAMESPACE> with your Docker registry details and the appropriate namespace. The --dry-run=client -o yaml | kubectl apply -f - part generates the secret definition and applies it to your cluster, effectively updating the secret if it already exists.
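If you want to confirm what the secret actually stores, the embedded Docker config can be decoded with a quick check (assuming the regcred name and namespace used above):

kubectl get secret regcred --namespace=<NAMESPACE> \
  --output=jsonpath='{.data.\.dockerconfigjson}' | base64 --decode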

Step 2: Update Deployments or Pods to Use the New Secret

If you’ve created a new secret with a different name, you’ll need to update your deployment or pod specifications to reference the new secret name. This step is unnecessary if you’ve updated an existing secret.

  1. Edit Deployment or Pod Specification: Locate your deployment or pod definition files (YAML files) and update the imagePullSecrets section to reference the new secret name if it has changed (a patch-based alternative is sketched after this list).
  2. Apply the Changes: Use kubectl apply -f <deployment-or-pod-file>.yaml to apply the changes to your cluster.
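If you would rather not edit the YAML files by hand, the same change can be applied in place with kubectl patch. This is only a sketch; <DEPLOYMENT_NAME> and <NAMESPACE> are placeholders for your own deployment and namespace:

kubectl patch deployment <DEPLOYMENT_NAME> --namespace=<NAMESPACE> \
  -p '{"spec":{"template":{"spec":{"imagePullSecrets":[{"name":"regcred"}]}}}}'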

Step 3: Verify the Update

Ensure that your deployments or pods can successfully pull images using the updated credentials.

  1. Check Pod Status: Use kubectl get pods to check the status of your pods. Ensure they are running and not stuck in an ImagePullBackOff or similar error status due to authentication issues (see the describe example after this list).
  2. Check Logs: For further verification, check the logs of your pods or deployments to ensure there are no errors related to pulling images from the Docker registry. You can use kubectl logs <pod-name> to view logs.
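Image pull failures show up as pod events rather than container logs, so kubectl describe is often the quickest way to see the underlying authentication error (<pod-name> is a placeholder):

kubectl describe pod <pod-name> --namespace=<NAMESPACE>
# check the Events section at the bottom for "Failed to pull image" or unauthorized errors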

This method ensures that your Kubernetes deployments can continue to pull images from private registries without interruption, using the updated credentials.

Deploying Kubernetes Dashboard in K3S Cluster
Get the latest Kubernetes Dashboard and deploy
GITHUB_URL=https://github.com/kubernetes/dashboard/releases
VERSION_KUBE_DASHBOARD=$(curl -w '%{url_effective}' -I -L -s -S ${GITHUB_URL}/latest -o /dev/null | sed -e 's|.*/||')
sudo k3s kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/${VERSION_KUBE_DASHBOARD}/aio/deploy/recommended.yaml

Create service account and role

In dashboard.admin-user.yml, enter the following values:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard

In dashboard.admin-user-role.yml, enter the following values:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard

Now apply changes to deploy it to K3S cluster:

sudo k3s kubectl create -f dashboard.admin-user.yml -f dashboard.admin-user-role.yml

Expose service as NodePort to access from browser

sudo k3s kubectl edit svc kubernetes-dashboard -n kubernetes-dashboard

In edit mode, change type: ClusterIP to type: NodePort and save. Your file should look like the one below:

apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2020-10-27T14:32:58Z"
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
  resourceVersion: "72638"
  selfLink: /api/v1/namespaces/kubernetes-dashboard/services/kubernetes-dashboard
  uid: 8282464c-607f-4e40-ad5c-ee781e83d5f0
spec:
  clusterIP: 10.43.210.41
  externalTrafficPolicy: Cluster
  ports:
  - nodePort: 30353
    port: 443
    protocol: TCP
    targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
  sessionAffinity: None
  type: NodePort
status:
  loadBalancer: {}
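If you prefer a one-liner over the interactive edit, the same type change can be applied with a patch (equivalent to the edit above):

sudo k3s kubectl -n kubernetes-dashboard patch svc kubernetes-dashboard -p '{"spec": {"type": "NodePort"}}'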

You can get the port number by executing the below command:

sudo k3s kubectl get svc kubernetes-dashboard -n kubernetes-dashboard
NAME                   TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)         AGE
kubernetes-dashboard   NodePort   10.43.210.41   <none>        443:30353/TCP   3h39m

In my Kubernetes cluster, I received port number 30353, as shown in the above output. In your case, it might be different. This port is exposed on every master and worker node, so you can browse to any node IP address with the port appended and you will see a login page.

Get token of a service account

sudo k3s kubectl -n kubernetes-dashboard describe secret admin-user-token

It will output a token in your console. Grab that token and paste it into the token input box on the login page.
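On newer Kubernetes releases (v1.24 and later) ServiceAccount token secrets are no longer created automatically, so the describe command above may return nothing. In that case a short-lived token can be requested instead (assuming a recent k3s/kubectl):

sudo k3s kubectl -n kubernetes-dashboard create token admin-user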

My Dashboard link: https://192.168.1.21:30353

All done!

Setup Kubernetes Cluster using K3S, MetalLB, LetsEncrypt on Bare Metal
Setup K3S Cluster

By default, Rancher K3S comes with Traefik 1.7. We will set up K3S without the Traefik ingress in this tutorial.

  1. Execute below command on master node 1.
curl -sfL https://get.k3s.io | sh -s - server   --datastore-endpoint="mysql://user:pass@tcp(ip_address:3306)/databasename" --disable traefik --node-taint CriticalAddonsOnly=true:NoExecute --tls-san 192.168.1.2 --tls-san k3s.home.lab

Execute the above command on master node 2 as well to set up HA.
Validate the cluster setup:

$ sudo kubectl get node
NAME           STATUS   ROLES    AGE    VERSION
k3s-master-1   Ready    master   3m9s   v1.18.9+k3s1

Make sure you have HAProxy set up:

##########################################################
#               Kubernetes API LB
##########################################################
frontend kubernetes-frontend
    bind 192.168.1.2:6443
    mode tcp
    option tcplog
    default_backend kubernetes-backend

backend kubernetes-backend
    mode tcp
    option tcp-check
    balance roundrobin
    server k3s-master-1 192.168.1.10:6443 check fall 3 rise 2
    server k3s-master-2 192.168.1.20:6443 check fall 3 rise 2
  2. Join worker nodes to the K3S cluster.
    Get the node token from one of the master nodes by executing the below command:
sudo cat /var/lib/rancher/k3s/server/node-token
K105c8c5de8deac516ebgd454r45547481d70625ee3e5200acdbe8ea071191debd4::server:gd5de354807077fde4259fd9632ea045454

We will use above command output value to join worker nodes:

curl -sfL https://get.k3s.io | K3S_URL=https://192.168.1.2:6443 K3S_TOKEN={{USE_TOKEN_FROM_ABOVE}} sh -
  3. Validate the K3S cluster state:
NAME                STATUS   ROLES    AGE     VERSION
k3s-master-1        Ready    master   15m     v1.18.9+k3s1
k3s-worker-node-1   Ready    <none>   3m44s   v1.18.9+k3s1
k3s-worker-node-2   Ready    <none>   2m52s   v1.18.9+k3s1
k3s-master-2        Ready    master   11m     v1.18.9+k3s1

MetalLB Setup

kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.4/manifests/namespace.yaml
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.4/manifests/metallb.yaml
kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"

Create a file called metallb-config.yaml and enter the following values:

apiVersion: v1
kind: ConfigMap
metadata:
    namespace: metallb-system
    name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250

Apply changes:

sudo kubectl apply -f metallb-config.yaml

Deploy sample application with service

kubectl create deploy nginx --image nginx
kubectl expose deploy nginx --port 80

Check status:

$ kubectl get svc,pods
NAME                                              TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)                      AGE
service/kubernetes                                ClusterIP      10.43.0.1       <none>          443/TCP                      44m
service/nginx                                     ClusterIP      10.43.14.116    <none>          80/TCP                       31s

NAME                                                 READY   STATUS    RESTARTS   AGE
pod/nginx-f89759699-25lpb                            1/1     Running   0          59s
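The nginx service above is a plain ClusterIP service. Since MetalLB is now running, you could instead expose the deployment as a LoadBalancer and have an external IP assigned straight from the address pool (nginx-lb is just an illustrative name):

kubectl expose deploy nginx --port 80 --type LoadBalancer --name nginx-lb
kubectl get svc nginx-lb
# the EXTERNAL-IP column should show an address from the 192.168.1.240-192.168.1.250 pool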

Nginx Ingress setup

In this tutorial, I will be using helm to setup Nginx ingress controller.

  1. Execute the following commands from a client machine that has helm and kubectl configured:
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install home ingress-nginx/ingress-nginx

Check Ingress controller status:

kubectl --namespace default get services -o wide -w home-ingress-nginx-controller
  2. Set up the Ingress by creating home-ingress.yaml and adding the values below. Replace example.io with your own domain:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
  name: home-ingress
  namespace: default
spec:
  rules:
    - host: example.io
      http:
        paths:
          - backend:
              serviceName: nginx
              servicePort: 80
            path: /

Execute command to apply:

 kubectl apply -f home-ingress.yaml

Check Status on Ingress with `kubectl get ing` command:

$ kubectl get ing
NAME           CLASS    HOSTS           ADDRESS         PORTS   AGE
home-ingress   <none>   example.io   192.168.1.240   80      8m26s

Letsencrypt setup

  1. Execute below command to create namespaces, pods, and other related configurations:
kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.0.3/cert-manager.yaml

Once the above completes, let's validate the pod status.
2. Validate setup:

$ kubectl get pods --namespace cert-manager
NAME                                       READY   STATUS    RESTARTS   AGE
cert-manager-cainjector-76c9d55b6f-cp2jf   1/1     Running   0          39s
cert-manager-79c5f9946-qkfzv               1/1     Running   0          38s
cert-manager-webhook-6d4c5c44bb-4mdgc      1/1     Running   0          38s
  3. Set up the staging environment by applying the changes below. Update the email address:
vi staging_issuer.yaml

and paste the below values and save the file:

apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
 name: letsencrypt-staging
spec:
 acme:
   # The ACME server URL
   server: https://acme-staging-v02.api.letsencrypt.org/directory
   # Email address used for ACME registration
   email: john@example.com
   # Name of a secret used to store the ACME account private key
   privateKeySecretRef:
     name: letsencrypt-staging
   # Enable the HTTP-01 challenge provider
   solvers:
   - http01:
       ingress:
         class:  nginx

Apply changes:

kubectl apply -f staging_issuer.yaml
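Before requesting a certificate, it is worth confirming that the staging issuer registered successfully with the ACME server (a quick check; the READY column comes from cert-manager's printer columns):

kubectl get issuer letsencrypt-staging
kubectl describe issuer letsencrypt-staging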

We will apply the production issuer later in this tutorial. We should first test the SSL settings against staging before switching to production certificates.

SSL setup with LetsEncrypt and Nginx Ingress

Before proceeding here, please make sure your DNS is set up correctly with your cloud provider or in your home lab to allow traffic from the internet. Let's Encrypt uses HTTP validation to issue certificates, and it needs to be able to reach the DNS name for which the certificate request was initiated.

Create new ingress file as shown below:

vi home-ingress-ssl.yaml

Copy and paste in above file:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/issuer: letsencrypt-staging
  name: home-ingress
  namespace: default
spec:
  tls:
  - hosts:
    - example.io
    secretName: home-example-io-tls
  rules:
  - host: example.io
    http:
      paths:
      - backend:
          serviceName: nginx
          servicePort: 80
        path: /

Apply changes:

kubectl apply -f home-ingress-ssl.yaml

Validate certificate creation:

kubectl describe certificate
Spec:
  Dns Names:
    example.io
  Issuer Ref:
    Group:      cert-manager.io
    Kind:       Issuer
    Name:       letsencrypt-staging
  Secret Name:  home-example-io-tls
Status:
  Conditions:
    Last Transition Time:        2020-10-26T20:19:15Z
    Message:                     Issuing certificate as Secret does not exist
    Reason:                      DoesNotExist
    Status:                      False
    Type:                        Ready
    Last Transition Time:        2020-10-26T20:19:18Z
    Message:                     Issuing certificate as Secret does not exist
    Reason:                      DoesNotExist
    Status:                      True
    Type:                        Issuing
  Next Private Key Secret Name:  home-example-io-tls-76dqg
Events:
  Type    Reason     Age   From          Message
  ----    ------     ----  ----          -------
  Normal  Issuing    10s   cert-manager  Issuing certificate as Secret does not exist
  Normal  Generated  8s    cert-manager  Stored new private key in temporary Secret resource "home-example-io-tls-76dqg"
  Normal  Requested  4s    cert-manager  Created new CertificateRequest resource "home-example-io-tls-h98zf"

Now you can browse your DNS URL and inspect the certificate. If the browser shows a certificate issued by the Let's Encrypt staging CA (which it will still flag as untrusted), your certificate management has been set up successfully.
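The certificate object itself can also be checked from the command line (assuming the Certificate created for the ingress above is named home-example-io-tls; the READY column should report True once the staging certificate has been issued):

kubectl get certificate home-example-io-tls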

Set up the production issuer to get a valid certificate

Create the production issuer:

vi production-issuer.yaml

Copy and paste the below values into the above file. Update email:

apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
 name: letsencrypt-prod
spec:
 acme:
   # The ACME server URL
   server: https://acme-v02.api.letsencrypt.org/directory
   # Email address used for ACME registration
   email: user@example.com
   # Name of a secret used to store the ACME account private key
   privateKeySecretRef:
     name: letsencrypt-prod
   # Enable the HTTP-01 challenge provider
   solvers:
   - http01:
       ingress:
         class:  nginx

Apply changes:

kubectl apply -f production-issuer.yaml

Update home-ingress-ssl.yaml file you created earlier with below values:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/issuer: letsencrypt-prod
  name: home-ingress
  namespace: default
spec:
  tls:
  - hosts:
    - example.io
    secretName: home-example-io-tls
  rules:
  - host: example.io
    http:
      paths:
      - backend:
          serviceName: nginx
          servicePort: 80
        path: /

Apply changes:

kubectl apply -f home-ingress-ssl.yaml

Validate changes:

NOTE: Give it some time, as it may take 2-5 minutes for the cert request to complete.

kubectl describe certificate

Your output should look something like the below once a valid certificate has been issued.

Spec:
  Dns Names:
    example.io
  Issuer Ref:
    Group:      cert-manager.io
    Kind:       Issuer
    Name:       letsencrypt-prod
  Secret Name:  home-example-io-tls
Status:
  Conditions:
    Last Transition Time:  2020-10-26T20:43:35Z
    Message:               Certificate is up to date and has not expired
    Reason:                Ready
    Status:                True
    Type:                  Ready
  Not After:               2021-01-24T19:43:25Z
  Not Before:              2020-10-26T19:43:25Z
  Renewal Time:            2020-12-25T19:43:25Z
  Revision:                2
Events:
  Type    Reason     Age                From          Message
  ----    ------     ----               ----          -------
  Normal  Issuing    24m                cert-manager  Issuing certificate as Secret does not exist
  Normal  Generated  24m                cert-manager  Stored new private key in temporary Secret resource "home-example-io-tls-76dqg"
  Normal  Requested  24m                cert-manager  Created new CertificateRequest resource "home-example-io-tls-h98zf"
  Normal  Issuing    105s               cert-manager  Issuing certificate as Secret was previously issued by Issuer.cert-manager.io/letsencrypt-staging
  Normal  Reused     103s               cert-manager  Reusing private key stored in existing Secret resource "home-example-io-tls"
  Normal  Requested  100s               cert-manager  Created new CertificateRequest resource "home-example-io-tls-ccxgf"
  Normal  Issuing    30s (x2 over 23m)  cert-manager  The certificate has been successfully issued

Browse your application and check for a valid certificate. If the browser now shows a trusted certificate, you have successfully requested a valid certificate from the Let's Encrypt certificate authority.

Kubernetes On-Prem Load Balancing implementation using MetalLB
In this tutorial, I will go over how to install and set up MetalLB for an on-premises implementation.

Pre-req

  1. A Kubernetes cluster, either K8S or K3S.

Installing MetalLB

kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.4/manifests/namespace.yaml
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.4/manifests/metallb.yaml
kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"

NOTE: The above commands are taken from the official MetalLB installation guide.

Validate deployments by executing the below command:

kubectl get pods -n metallb-system
NAME                          READY   STATUS    RESTARTS   AGE
controller-5854d49f77-p6q5w   1/1     Running   0          106s
speaker-tt29h                 1/1     Running   0          106s
speaker-fggsk                 1/1     Running   0          106s
speaker-wb4nl                 1/1     Running   0          106s
speaker-8gjtj                 1/1     Running   0          106s

Apply network configurations

Create a file called metallb-config.yaml and enter the following values:

apiVersion: v1
kind: ConfigMap
metadata:
    namespace: metallb-system
    name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250

Apply changes:

sudo kubectl apply -f metallb-config.yaml
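To confirm MetalLB actually hands out addresses from the pool, a quick test deployment can be exposed as a LoadBalancer service (a sketch; any image will do):

kubectl create deploy nginx --image nginx
kubectl expose deploy nginx --port 80 --type LoadBalancer
kubectl get svc nginx
# the EXTERNAL-IP column should show an address from the 192.168.1.240-192.168.1.250 range configured above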

And you are done!

Raspberry Pi 4 K3S cluster setup
In this tutorial, I will show you how to set up a lightweight Kubernetes cluster using rancher k3s.

My current Raspberry Pi 4 configuration:

Hostname            RAM    CPU   Disk    IP Address
k3s-master-1        8 GB   4     64 GB   192.168.1.10
k3s-worker-node-1   8 GB   4     64 GB   192.168.1.11
k3s-worker-node-2   8 GB   4     64 GB   192.168.1.12

Prerequisite

  1. Ubuntu OS 20.04. Follow the guide here.
  2. Assign a static IP address to the Pi.
sudo vi /etc/netplan/50-cloud-init.yaml

Set the values as shown below:

  • dhcp4: change the value from true to no.
  • Add the addresses, gateway4, and nameservers lines shown below (adjust the IP addresses for your network) and save the file.
network:
    ethernets:
        eth0:
            dhcp4: no
            addresses:
              - 192.168.1.10/24
            gateway4: 192.168.1.1
            nameservers:
              addresses: [192.168.1.1, 8.8.8.8, 1.1.1.1]
            match:
                driver: bcmgenet smsc95xx lan78xx
            optional: true
            set-name: eth0
    version: 2

Apply changes by executing the below command to set static IP:

sudo netplan apply

NOTE: Once you apply changes, your ssh session will be interrupted. Make sure to reconnect using ssh.

  3. Open /boot/firmware/cmdline.txt with sudo vi and append the below value. Apply this same change on each master and worker node:
cgroup_enable=cpuset cgroup_enable=memory cgroup_memory=1

Mine looks like below.

net.ifnames=0 dwc_otg.lpm_enable=0 console=serial0,115200 console=tty1 root=LABEL=writable rootfstype=ext4 elevator=deadline rootwait fixrtc cgroup_enable=cpuset cgroup_enable=memory cgroup_memory=1
  4. Reboot:
sudo init 6

Getting started with HA setup for Master nodes

Before we proceed with the tutorial, please make sure you have a MySQL database configured. We will use MySQL to set up HA.

On each master node, execute the below command to set up k3s cluster.

curl -sfL https://get.k3s.io | sh -s - server   --datastore-endpoint="mysql://username:password@tcp(192.168.1.50:3306)/kdb"

Once the installation is completed, execute the below command to see the status of the k3s service.

sudo systemctl status k3s
Loaded: loaded (/etc/systemd/system/k3s.service; enabled; vendor preset: enabled)
     Active: active (running) since Sun 2020-10-25 04:38:00 UTC; 32s ago
       Docs: https://k3s.io
    Process: 1696 ExecStartPre=/sbin/modprobe br_netfilter (code=exited, status=0/SUCCESS)
    Process: 1720 ExecStartPre=/sbin/modprobe overlay (code=exited, status=0/SUCCESS)
   Main PID: 1725 (k3s-server)
      Tasks: 13
     Memory: 425.1M
     CGroup: /system.slice/k3s.service
             ├─1725 /usr/local/bin/k3s server --datastore-endpoint=mysql://username:password@tcp(192.168.1.50:3306)/kdb
             └─1865 containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd -->

Joining worker nodes to k3s cluster

  1. First, we need to get a token from the master node. Go to your master node and enter the below command to get token:
sudo cat /var/lib/rancher/k3s/server/node-token
  2. Second, we are ready to join all the nodes to the cluster by entering the below command and replacing the XXX token with your own:
curl -sfL https://get.k3s.io | K3S_URL=https://192.168.1.10:6443 K3S_TOKEN=XXX sh -
  3. Check the status of your worker nodes from the master node by executing the below command:
sudo kubectl get nodes
ubuntu@k3s-master-1:~$ sudo kubectl get nodes
NAME                STATUS   ROLES    AGE   VERSION
k3s-worker-node-2   Ready    <none>   28m   v1.18.9+k3s1
k3s-master-1        Ready    master   9h    v1.18.9+k3s1
k3s-worker-node-1   Ready    <none>   32m   v1.18.9+k3s1
k3s-master-2        Ready    master   9h    v1.18.9+k3s1
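The <none> under ROLES for the workers is cosmetic: k3s does not label agent nodes with a role. If you would like the workers to display a role, a label can be added (the label value shown here is arbitrary):

sudo kubectl label node k3s-worker-node-1 node-role.kubernetes.io/worker=worker
sudo kubectl label node k3s-worker-node-2 node-role.kubernetes.io/worker=worker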

Rancher Kubernetes Single Node Setup
Pre-req
  1. VM requirements
    • One Ubuntu 20.04 VM node where RKE Cluster will be running.
    • One Ubuntu 20.04 host node where RKE CLI will be configured to use to setup cluster.
  2. Disable swap and firewall
sudo ufw disable
sudo swapoff -a; sudo sed -i '/swap/d' /etc/fstab
  3. Update sysctl settings
cat <<EOF | sudo tee -a /etc/sysctl.d/kubernetes.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sudo sysctl --system
  4. Docker installed on all nodes
    • Login to Ubuntu VM with your sudo account.
    • Execute the following commands:
sudo apt-get update
sudo apt-get upgrade
sudo curl https://releases.rancher.com/install-docker/19.03.sh | sh
  5. Create a new user and add it to the Docker group
sudo adduser rkeuser
sudo passwd rkeuser >/dev/null 2>&1
sudo usermod -aG docker rkeuser
  6. Generate SSH keys and copy them to the RKE node
ssh-keygen -t rsa -b 2048
ssh-copy-id rkeuser@192.168.1.188
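RKE drives the node over SSH and runs everything through Docker, so it is worth confirming that the rkeuser account can log in non-interactively and reach the Docker daemon (a quick sanity check using the node IP from the previous step):

ssh rkeuser@192.168.1.188 docker ps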

Download rke package and set executable permissions

wget https://github.com/rancher/rke/releases/download/v1.1.0/rke_linux-amd64
sudo cp rke_linux-amd64 /usr/local/bin/rke
sudo chmod +x /usr/local/bin/rke

RKE Cluster setup

First, we must set up the RKE cluster configuration file that will be used to deploy the cluster to the RKE node. Continue through the interactive prompts to configure a single-node cluster.

rke config
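If you prefer to skip the interactive prompts, a minimal cluster.yml can be written directly. This is only a sketch, assuming the single node at 192.168.1.188 with the rkeuser account from the previous steps; rke config adds many more optional fields:

cat <<EOF > cluster.yml
nodes:
  - address: 192.168.1.188
    user: rkeuser
    role: [controlplane, worker, etcd]
EOF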

Run the below command to set up the RKE cluster:

rke up

Output:

INFO[0000] Running RKE version: v1.1.9
INFO[0000] Initiating Kubernetes cluster
INFO[0000] [dialer] Setup tunnel for host [192.168.1.188] 
INFO[0000] Checking if container [cluster-state-deployer] is running on host [192.168.1.188], try #1 
INFO[0000] Pulling image [rancher/rke-tools:v0.1.65] on host [192.168.1.188], try #1 
INFO[0005] Image [rancher/rke-tools:v0.1.65] exists on host [192.168.1.188] 
INFO[0005] Starting container [cluster-state-deployer] on host [192.168.1.188], try #1 
INFO[0005] [state] Successfully started [cluster-state-deployer] container on host [192.168.1.188] 
INFO[0005] [certificates] Generating CA kubernetes certificates 
INFO[0005] [certificates] Generating Kubernetes API server aggregation layer requestheader client CA certificates 
INFO[0006] [certificates] GenerateServingCertificate is disabled, checking if there are unused kubelet certificates 
INFO[0006] [certificates] Generating Kubernetes API server certificates
INFO[0006] [certificates] Generating Service account token key 
INFO[0006] [certificates] Generating Kube Controller certificates
INFO[0006] [certificates] Generating Kube Scheduler certificates 
INFO[0006] [certificates] Generating Kube Proxy certificates
INFO[0006] [certificates] Generating Node certificate
INFO[0006] [certificates] Generating admin certificates and kubeconfig
INFO[0006] [certificates] Generating Kubernetes API server proxy client certificates 
INFO[0006] [certificates] Generating kube-etcd-192-168-1-188 certificate and key
INFO[0006] Successfully Deployed state file at [./cluster.rkestate] 
INFO[0006] Building Kubernetes cluster
INFO[0006] [dialer] Setup tunnel for host [192.168.1.188]
INFO[0007] [network] Deploying port listener containers 
INFO[0007] Image [rancher/rke-tools:v0.1.65] exists on host [192.168.1.188]
INFO[0007] Starting container [rke-etcd-port-listener] on host [192.168.1.188], try #1 
INFO[0007] [network] Successfully started [rke-etcd-port-listener] container on host [192.168.1.188] 
INFO[0007] Image [rancher/rke-tools:v0.1.65] exists on host [192.168.1.188]
INFO[0007] Starting container [rke-cp-port-listener] on host [192.168.1.188], try #1 
INFO[0008] [network] Successfully started [rke-cp-port-listener] container on host [192.168.1.188] 
INFO[0008] Image [rancher/rke-tools:v0.1.65] exists on host [192.168.1.188]
INFO[0008] Starting container [rke-worker-port-listener] on host [192.168.1.188], try #1 
INFO[0008] [network] Successfully started [rke-worker-port-listener] container on host [192.168.1.188] 
INFO[0008] [network] Port listener containers deployed successfully
INFO[0008] [network] Running control plane -> etcd port checks
INFO[0008] Image [rancher/rke-tools:v0.1.65] exists on host [192.168.1.188]
INFO[0008] Starting container [rke-port-checker] on host [192.168.1.188], try #1 
INFO[0009] [network] Successfully started [rke-port-checker] container on host [192.168.1.188] 
INFO[0009] Removing container [rke-port-checker] on host [192.168.1.188], try #1 
INFO[0009] [network] Running control plane -> worker port checks 
INFO[0009] Image [rancher/rke-tools:v0.1.65] exists on host [192.168.1.188]
INFO[0009] Starting container [rke-port-checker] on host [192.168.1.188], try #1 
INFO[0009] [network] Successfully started [rke-port-checker] container on host [192.168.1.188] 
INFO[0009] Removing container [rke-port-checker] on host [192.168.1.188], try #1 
INFO[0009] [network] Running workers -> control plane port checks 
INFO[0009] Image [rancher/rke-tools:v0.1.65] exists on host [192.168.1.188]
INFO[0009] Starting container [rke-port-checker] on host [192.168.1.188], try #1 
INFO[0009] [network] Successfully started [rke-port-checker] container on host [192.168.1.188] 
INFO[0009] Removing container [rke-port-checker] on host [192.168.1.188], try #1 
INFO[0009] [network] Checking KubeAPI port Control Plane hosts
INFO[0009] [network] Removing port listener containers  
INFO[0009] Removing container [rke-etcd-port-listener] on host [192.168.1.188], try #1
INFO[0010] [remove/rke-etcd-port-listener] Successfully removed container on host [192.168.1.188] 
INFO[0010] Removing container [rke-cp-port-listener] on host [192.168.1.188], try #1
INFO[0010] [remove/rke-cp-port-listener] Successfully removed container on host [192.168.1.188]
INFO[0010] Removing container [rke-worker-port-listener] on host [192.168.1.188], try #1
INFO[0010] [remove/rke-worker-port-listener] Successfully removed container on host [192.168.1.188]
INFO[0010] [network] Port listener containers removed successfully
INFO[0010] [certificates] Deploying kubernetes certificates to Cluster nodes
INFO[0010] Checking if container [cert-deployer] is running on host [192.168.1.188], try #1
INFO[0010] Image [rancher/rke-tools:v0.1.65] exists on host [192.168.1.188]
INFO[0010] Starting container [cert-deployer] on host [192.168.1.188], try #1 
INFO[0010] Checking if container [cert-deployer] is running on host [192.168.1.188], try #1 
INFO[0015] Checking if container [cert-deployer] is running on host [192.168.1.188], try #1 
INFO[0015] Removing container [cert-deployer] on host [192.168.1.188], try #1 
INFO[0015] [reconcile] Rebuilding and updating local kube config 
INFO[0015] Successfully Deployed local admin kubeconfig at [./kube_config_cluster.yml]
INFO[0015] [certificates] Successfully deployed kubernetes certificates to Cluster nodes
INFO[0015] [file-deploy] Deploying file [/etc/kubernetes/audit-policy.yaml] to node [192.168.1.188]
INFO[0015] Image [rancher/rke-tools:v0.1.65] exists on host [192.168.1.188]
INFO[0016] Starting container [file-deployer] on host [192.168.1.188], try #1
INFO[0016] Successfully started [file-deployer] container on host [192.168.1.188] 
INFO[0016] Waiting for [file-deployer] container to exit on host [192.168.1.188]
INFO[0016] Waiting for [file-deployer] container to exit on host [192.168.1.188]
INFO[0016] Container [file-deployer] is still running on host [192.168.1.188]: stderr: [], stdout: []
INFO[0017] Waiting for [file-deployer] container to exit on host [192.168.1.188] 
INFO[0017] Removing container [file-deployer] on host [192.168.1.188], try #1
INFO[0017] [remove/file-deployer] Successfully removed container on host [192.168.1.188] 
INFO[0017] [/etc/kubernetes/audit-policy.yaml] Successfully deployed audit policy file to Cluster control nodes
INFO[0017] [reconcile] Reconciling cluster state
INFO[0017] [reconcile] This is newly generated cluster
INFO[0017] Pre-pulling kubernetes images
INFO[0017] Pulling image [rancher/hyperkube:v1.18.9-rancher1] on host [192.168.1.188], try #1
INFO[0047] Image [rancher/hyperkube:v1.18.9-rancher1] exists on host [192.168.1.188] 
INFO[0047] Kubernetes images pulled successfully
INFO[0047] [etcd] Building up etcd plane..
INFO[0047] Image [rancher/rke-tools:v0.1.65] exists on host [192.168.1.188]
INFO[0047] Starting container [etcd-fix-perm] on host [192.168.1.188], try #1 
INFO[0047] Successfully started [etcd-fix-perm] container on host [192.168.1.188] 
INFO[0047] Waiting for [etcd-fix-perm] container to exit on host [192.168.1.188]
INFO[0047] Waiting for [etcd-fix-perm] container to exit on host [192.168.1.188]
INFO[0047] Container [etcd-fix-perm] is still running on host [192.168.1.188]: stderr: [], stdout: []
INFO[0048] Waiting for [etcd-fix-perm] container to exit on host [192.168.1.188] 
INFO[0048] Removing container [etcd-fix-perm] on host [192.168.1.188], try #1
INFO[0048] [remove/etcd-fix-perm] Successfully removed container on host [192.168.1.188] 
INFO[0048] Pulling image [rancher/coreos-etcd:v3.4.3-rancher1] on host [192.168.1.188], try #1
INFO[0051] Image [rancher/coreos-etcd:v3.4.3-rancher1] exists on host [192.168.1.188] 
INFO[0051] Starting container [etcd] on host [192.168.1.188], try #1 
INFO[0051] [etcd] Successfully started [etcd] container on host [192.168.1.188] 
INFO[0051] [etcd] Running rolling snapshot container [etcd-snapshot-once] on host [192.168.1.188]
INFO[0051] Image [rancher/rke-tools:v0.1.65] exists on host [192.168.1.188]
INFO[0051] Starting container [etcd-rolling-snapshots] on host [192.168.1.188], try #1 
INFO[0051] [etcd] Successfully started [etcd-rolling-snapshots] container on host [192.168.1.188] 
INFO[0056] Image [rancher/rke-tools:v0.1.65] exists on host [192.168.1.188] 
INFO[0057] Starting container [rke-bundle-cert] on host [192.168.1.188], try #1 
INFO[0057] [certificates] Successfully started [rke-bundle-cert] container on host [192.168.1.188] 
INFO[0057] Waiting for [rke-bundle-cert] container to exit on host [192.168.1.188]
INFO[0057] Container [rke-bundle-cert] is still running on host [192.168.1.188]: stderr: [], stdout: []
INFO[0058] Waiting for [rke-bundle-cert] container to exit on host [192.168.1.188] 
INFO[0058] [certificates] successfully saved certificate bundle [/opt/rke/etcd-snapshots//pki.bundle.tar.gz] on host [192.168.1.188]
INFO[0058] Removing container [rke-bundle-cert] on host [192.168.1.188], try #1
INFO[0058] Image [rancher/rke-tools:v0.1.65] exists on host [192.168.1.188] 
INFO[0058] Starting container [rke-log-linker] on host [192.168.1.188], try #1 
INFO[0059] [etcd] Successfully started [rke-log-linker] container on host [192.168.1.188] 
INFO[0059] Removing container [rke-log-linker] on host [192.168.1.188], try #1
INFO[0059] [remove/rke-log-linker] Successfully removed container on host [192.168.1.188] 
INFO[0059] [etcd] Successfully started etcd plane.. Checking etcd cluster health
INFO[0059] [controlplane] Building up Controller Plane.. 
INFO[0059] Checking if container [service-sidekick] is running on host [192.168.1.188], try #1
INFO[0059] Image [rancher/rke-tools:v0.1.65] exists on host [192.168.1.188]
INFO[0059] Image [rancher/hyperkube:v1.18.9-rancher1] exists on host [192.168.1.188] 
INFO[0059] Starting container [kube-apiserver] on host [192.168.1.188], try #1 
INFO[0059] [controlplane] Successfully started [kube-apiserver] container on host [192.168.1.188] 
INFO[0059] [healthcheck] Start Healthcheck on service [kube-apiserver] on host [192.168.1.188]
INFO[0067] [healthcheck] service [kube-apiserver] on host [192.168.1.188] is healthy 
INFO[0067] Image [rancher/rke-tools:v0.1.65] exists on host [192.168.1.188] 
INFO[0068] Starting container [rke-log-linker] on host [192.168.1.188], try #1 
INFO[0068] [controlplane] Successfully started [rke-log-linker] container on host [192.168.1.188] 
INFO[0068] Removing container [rke-log-linker] on host [192.168.1.188], try #1
INFO[0068] [remove/rke-log-linker] Successfully removed container on host [192.168.1.188] 
INFO[0068] Image [rancher/hyperkube:v1.18.9-rancher1] exists on host [192.168.1.188]
INFO[0068] Starting container [kube-controller-manager] on host [192.168.1.188], try #1 
INFO[0068] [controlplane] Successfully started [kube-controller-manager] container on host [192.168.1.188] 
INFO[0068] [healthcheck] Start Healthcheck on service [kube-controller-manager] on host [192.168.1.188]
INFO[0074] [healthcheck] service [kube-controller-manager] on host [192.168.1.188] is healthy 
INFO[0074] Image [rancher/rke-tools:v0.1.65] exists on host [192.168.1.188] 
INFO[0074] Starting container [rke-log-linker] on host [192.168.1.188], try #1 
INFO[0074] [controlplane] Successfully started [rke-log-linker] container on host [192.168.1.188] 
INFO[0074] Removing container [rke-log-linker] on host [192.168.1.188], try #1
INFO[0075] [remove/rke-log-linker] Successfully removed container on host [192.168.1.188] 
INFO[0075] Image [rancher/hyperkube:v1.18.9-rancher1] exists on host [192.168.1.188]
INFO[0075] Starting container [kube-scheduler] on host [192.168.1.188], try #1
INFO[0075] [controlplane] Successfully started [kube-scheduler] container on host [192.168.1.188] 
INFO[0075] [healthcheck] Start Healthcheck on service [kube-scheduler] on host [192.168.1.188]
INFO[0080] [healthcheck] service [kube-scheduler] on host [192.168.1.188] is healthy 
INFO[0080] Image [rancher/rke-tools:v0.1.65] exists on host [192.168.1.188]
INFO[0080] Starting container [rke-log-linker] on host [192.168.1.188], try #1 
INFO[0081] [controlplane] Successfully started [rke-log-linker] container on host [192.168.1.188] 
INFO[0081] Removing container [rke-log-linker] on host [192.168.1.188], try #1
INFO[0081] [remove/rke-log-linker] Successfully removed container on host [192.168.1.188] 
INFO[0081] [controlplane] Successfully started Controller Plane..
INFO[0081] [authz] Creating rke-job-deployer ServiceAccount
INFO[0081] [authz] rke-job-deployer ServiceAccount created successfully 
INFO[0081] [authz] Creating system:node ClusterRoleBinding
INFO[0081] [authz] system:node ClusterRoleBinding created successfully
INFO[0081] [authz] Creating kube-apiserver proxy ClusterRole and ClusterRoleBinding
INFO[0081] [authz] kube-apiserver proxy ClusterRole and ClusterRoleBinding created successfully
INFO[0081] Successfully Deployed state file at [./cluster.rkestate]
INFO[0081] [state] Saving full cluster state to Kubernetes
INFO[0081] [state] Successfully Saved full cluster state to Kubernetes ConfigMap: full-cluster-state 
INFO[0081] [worker] Building up Worker Plane..
INFO[0081] Checking if container [service-sidekick] is running on host [192.168.1.188], try #1
INFO[0081] [sidekick] Sidekick container already created on host [192.168.1.188]
INFO[0081] Image [rancher/hyperkube:v1.18.9-rancher1] exists on host [192.168.1.188] 
INFO[0081] Starting container [kubelet] on host [192.168.1.188], try #1 
INFO[0081] [worker] Successfully started [kubelet] container on host [192.168.1.188] 
INFO[0081] [healthcheck] Start Healthcheck on service [kubelet] on host [192.168.1.188]
INFO[0092] [healthcheck] service [kubelet] on host [192.168.1.188] is healthy 
INFO[0092] Image [rancher/rke-tools:v0.1.65] exists on host [192.168.1.188]
INFO[0092] Starting container [rke-log-linker] on host [192.168.1.188], try #1 
INFO[0093] [worker] Successfully started [rke-log-linker] container on host [192.168.1.188] 
INFO[0093] Removing container [rke-log-linker] on host [192.168.1.188], try #1
INFO[0093] [remove/rke-log-linker] Successfully removed container on host [192.168.1.188] 
INFO[0093] Image [rancher/hyperkube:v1.18.9-rancher1] exists on host [192.168.1.188]
INFO[0093] Starting container [kube-proxy] on host [192.168.1.188], try #1
INFO[0093] [worker] Successfully started [kube-proxy] container on host [192.168.1.188] 
INFO[0093] [healthcheck] Start Healthcheck on service [kube-proxy] on host [192.168.1.188]
INFO[0098] [healthcheck] service [kube-proxy] on host [192.168.1.188] is healthy 
INFO[0098] Image [rancher/rke-tools:v0.1.65] exists on host [192.168.1.188]
INFO[0099] Starting container [rke-log-linker] on host [192.168.1.188], try #1 
INFO[0099] [worker] Successfully started [rke-log-linker] container on host [192.168.1.188] 
INFO[0099] Removing container [rke-log-linker] on host [192.168.1.188], try #1
INFO[0099] [remove/rke-log-linker] Successfully removed container on host [192.168.1.188] 
INFO[0099] [worker] Successfully started Worker Plane..
INFO[0099] Image [rancher/rke-tools:v0.1.65] exists on host [192.168.1.188]
INFO[0099] Starting container [rke-log-cleaner] on host [192.168.1.188], try #1 
INFO[0099] [cleanup] Successfully started [rke-log-cleaner] container on host [192.168.1.188] 
INFO[0099] Removing container [rke-log-cleaner] on host [192.168.1.188], try #1
INFO[0100] [remove/rke-log-cleaner] Successfully removed container on host [192.168.1.188] 
INFO[0100] [sync] Syncing nodes Labels and Taints
INFO[0100] [sync] Successfully synced nodes Labels and Taints
INFO[0100] [network] Setting up network plugin: canal   
INFO[0100] [addons] Saving ConfigMap for addon rke-network-plugin to Kubernetes
INFO[0100] [addons] Successfully saved ConfigMap for addon rke-network-plugin to Kubernetes
INFO[0100] [addons] Executing deploy job rke-network-plugin
INFO[0115] [addons] Setting up coredns
INFO[0115] [addons] Saving ConfigMap for addon rke-coredns-addon to Kubernetes
INFO[0115] [addons] Successfully saved ConfigMap for addon rke-coredns-addon to Kubernetes
INFO[0115] [addons] Executing deploy job rke-coredns-addon
INFO[0120] [addons] CoreDNS deployed successfully       
INFO[0120] [dns] DNS provider coredns deployed successfully
INFO[0120] [addons] Setting up Metrics Server
INFO[0120] [addons] Saving ConfigMap for addon rke-metrics-addon to Kubernetes 
INFO[0120] [addons] Successfully saved ConfigMap for addon rke-metrics-addon to Kubernetes
INFO[0120] [addons] Executing deploy job rke-metrics-addon
INFO[0130] [addons] Metrics Server deployed successfully 
INFO[0130] [ingress] Setting up nginx ingress controller
INFO[0130] [addons] Saving ConfigMap for addon rke-ingress-controller to Kubernetes
INFO[0130] [addons] Successfully saved ConfigMap for addon rke-ingress-controller to Kubernetes
INFO[0130] [addons] Executing deploy job rke-ingress-controller
INFO[0140] [ingress] ingress controller nginx deployed successfully 
INFO[0140] [addons] Setting up user addons
INFO[0140] [addons] no user addons defined
INFO[0140] Finished building Kubernetes cluster successfully

Connecting to Kubernetes cluster

  1. Download the latest kubectl
curl -LO "https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl"
  2. Assign executable permissions
chmod +x ./kubectl
  3. Move the file to the default executable location
sudo mv ./kubectl /usr/local/bin/kubectl
  4. Check the kubectl version
kubectl version --client
  5. Copy the RKE-exported kubeconfig file to $HOME/.kube/config
mkdir -p $HOME/.kube
cp kube_config_cluster.yml $HOME/.kube/config
  6. Connect to the Kubernetes cluster and get pods
kubectl get pods -A

HELM Installation

curl https://baltocdn.com/helm/signing.asc | sudo apt-key add -
sudo apt-get install apt-transport-https --yes
echo "deb https://baltocdn.com/helm/stable/debian/ all main" | sudo tee /etc/apt/sources.list.d/helm-stable-debian.list
sudo apt-get update
sudo apt-get install helm

Setup Rancher in Kubernetes cluster

helm repo add rancher-latest https://releases.rancher.com/server-charts/latest
kubectl create namespace cattle-system
kubectl apply --validate=false -f https://github.com/jetstack/cert-manager/releases/download/v0.15.0/cert-manager.crds.yaml
kubectl create namespace cert-manager
helm repo add jetstack https://charts.jetstack.io
helm repo update
helm install \
  cert-manager jetstack/cert-manager \
  --namespace cert-manager \
  --version v0.15.0
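Before installing Rancher, it is worth waiting until the cert-manager pods report Running (a quick check):

kubectl get pods --namespace cert-manager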

Install Rancher and set the hostname

Define the DNS hostname for the Rancher certificate request. You can replace rancher.my.org with your own DNS alias.

helm install rancher rancher-latest/rancher \
  --namespace cattle-system \
  --set hostname=rancher.my.org
kubectl -n cattle-system rollout status deploy/rancher

NOTE: Make sure to add rancher.my.org to the hosts file of your system if you are working in a lab environment without DNS.
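For a quick lab setup, the hostname can simply be pointed at the RKE node IP used earlier in this tutorial (adjust the IP and hostname to your environment):

echo "192.168.1.188 rancher.my.org" | sudo tee -a /etc/hosts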

From here, follow the Rancher UI setup instructions in the official Rancher documentation.

Managing MySQL database using PHPMyAdmin in Kubernetes

Make sure you are familiar with connecting to a Kubernetes cluster, have the Nginx ingress controller configured with a certificate manager, and have a MySQL database pod deployed. Only then can you proceed. Follow this guide if you do not have MySQL deployed. Follow this guide to set up Nginx ingress and cert manager.

PhpMyAdmin is a popular open-source tool for managing MySQL database servers. Learn how to create a deployment and expose it as a service so PhpMyAdmin can be accessed from the internet using the Nginx ingress controller.

  1. Create a deployment file called phpmyadmin-deployment.yaml and paste the following values:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: phpmyadmin-deployment
  labels:
    app: phpmyadmin
spec:
  replicas: 1
  selector:
    matchLabels:
      app: phpmyadmin
  template:
    metadata:
      labels:
        app: phpmyadmin
    spec:
      containers:
        - name: phpmyadmin
          image: phpmyadmin/phpmyadmin
          ports:
            - containerPort: 80
          env:
            - name: PMA_HOST
              value: mysql-service
            - name: PMA_PORT
              value: "3306"
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-secrets
                  key: ROOT_PASSWORD

NOTE: The ROOT_PASSWORD value will be consumed from Kubernetes secrets. If you want to learn more about Kubernetes secrets, follow this guide.

  2. Execute the below command to create a new deployment:
kubectl apply -f phpmyadmin-deployment.yaml

Output:

deployment.apps/phpmyadmin-deployment created
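To confirm the pod actually came up before wiring up the service, the rollout can be checked (a quick sanity check using the labels defined above):

kubectl rollout status deployment/phpmyadmin-deployment
kubectl get pods -l app=phpmyadmin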

Exposing PhpMyAdmin via Services

  1. Create new file called phpmyadmin-service.yaml and paste the following values:
apiVersion: v1
kind: Service
metadata:
  name: phpmyadmin-service
spec:
  type: NodePort
  selector:
    app: phpmyadmin
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  2. Execute the below command to create the service:
kubectl apply -f phpmyadmin-service.yaml

Output:

service/phpmyadmin-service created

Once you are done with the above configurations, it's time to expose the PhpMyAdmin service to the internet.

I use DigitalOcean-managed Kubernetes. I manage my own DNS, and DigitalOcean automatically creates a load balancer for my Nginx ingress controller. Once again if you want to follow this guide, it will be very helpful.

Nginx Ingress configuration

  1. Create a new YAML file called phpmyadmin-ingress.yaml and paste the following values:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: echo-ingress
  annotations:
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
spec:
  tls:
  - hosts:
    - phpmyadmin.example.com
    secretName: echo-tls
  rules:
  - host: phpmyadmin.example.com
    http:
      paths:
      - backend:
          serviceName: phpmyadmin-service
          servicePort: 80
  2. Apply the changes:
kubectl apply -f phpmyadmin-ingress.yaml
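Once applied, the ingress address and the certificate requested by cert-manager can be checked (names taken from the manifest above; the certificate object is created automatically because of the cluster-issuer annotation):

kubectl get ingress echo-ingress
kubectl get certificate echo-tls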

How to Deploy MySQL database on Digital Ocean Managed Kubernetes Cluster

NOTE: This tutorial assumes you know how to connect to a Kubernetes cluster.

Create secrets to securely store MySQL credentials

  1. Guide on how to create Base64-encoded values:

    Windows 10 guide | Linux guide

  2. Create a new file called: mysql-secret.yaml and paste the value below.
    NOTE: You must first capture the value in base 64 by following the guide in step 1.
---
apiVersion: v1
kind: Secret
metadata:
  name: mysqldb-secrets
type: Opaque
data:
  ROOT_PASSWORD: c3VwZXItc2VjcmV0LXBhc3N3b3JkLWZvci1zcWw= 
  3. Execute the below command to create secrets:
kubectl apply -f mysql-secret.yaml

Output:

secret/mysqldb-secrets created
  4. To see if the secret is created, execute the below command:
kubectl get secret
NAME                  TYPE                                  DATA   AGE
default-token-jqq69   kubernetes.io/service-account-token   3      6h20m
echo-tls              kubernetes.io/tls                     2      5h19m
mysqldb-secrets       Opaque                                1      42s
  5. To see the description of the secret, execute the below command:
kubectl describe secret mysqldb-secrets
Name:         mysqldb-secrets
Namespace:    default
Labels:       <none>
Annotations:  <none>

Type:  Opaque

Data
====
ROOT_PASSWORD:  29 bytes
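For reference, the 29-byte ROOT_PASSWORD reported above is simply the Base64 decoding of the value stored in mysql-secret.yaml. The encoded string can be produced (or double-checked) on Linux like this, using the example password behind that value:

echo -n 'super-secret-password-for-sql' | base64
# c3VwZXItc2VjcmV0LXBhc3N3b3JkLWZvci1zcWw=
echo 'c3VwZXItc2VjcmV0LXBhc3N3b3JkLWZvci1zcWw=' | base64 --decode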

Persistent volume and MySQL deployment

  1. Create a persistent volume YAML file called: mysql-pvc.yaml and paste the following values:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pvc
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/mysql-data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pvc-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: do-block-storage
  2. Create a new deployment YAML file called: mysql-deployment.yaml and paste the following values:
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  ports:
  - port: 3306
  selector:
    app: mysql
  clusterIP: None
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - image: mysql:5.6
        name: mysql
        env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysqldb-secrets
              key: ROOT_PASSWORD
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-pvc-claim

Execute the below command to create persistent volume:

kubectl apply -f mysql-pvc.yaml

Output:

persistentvolume/mysql-pvc created
persistentvolumeclaim/mysql-pvc-claim created

Execute the below command to deploy MySQL pod:

kubectl apply -f mysql-deployment.yaml

Output:

service/mysql created
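
Once the deployment has been applied, you can check that the MySQL pod starts up by filtering on the app=mysql label used in the manifest:

kubectl get pods -l app=mysql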

Exposing MySQL as a Service

  1. Create a file called mysql-service.yaml and paste the following values:
apiVersion: v1
kind: Service
metadata:
  name: mysql-service
spec:
  selector:
    app: mysql
  ports:
  - protocol: TCP
    port: 3306
    targetPort: 3306
  2. Execute the below command to create a service for MySQL:
kubectl apply -f mysql-service.yaml

Output:

service/mysql-service created
  3. To confirm that the service was created successfully, execute the below command:
kubectl get svc

Output:

NAME            TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
echo1           ClusterIP   10.245.179.199   <none>        80/TCP     6h4m
echo2           ClusterIP   10.245.58.44     <none>        80/TCP     6h2m
kubernetes      ClusterIP   10.245.0.1       <none>        443/TCP    6h33m
mysql           ClusterIP   None             <none>        3306/TCP   4m57s
mysql-service   ClusterIP   10.245.159.76    <none>        3306/TCP   36s
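
One way to test the new service is to start a throwaway MySQL client pod and connect through the service name (the pod name mysql-client is arbitrary and only used for this illustration):

kubectl run -it --rm --image=mysql:5.6 --restart=Never mysql-client -- mysql -h mysql-service -p

When prompted, enter the root password that you stored in the secret earlier.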

The post How to Deploy MySQL database on Digital Ocean Managed Kubernetes Cluster appeared first on TEKSpace Blog.

]]>
https://blog.tekspace.io/how-to-deploy-mysql-database-on-digital-ocean-managed-kubernetes-cluster/feed/ 0
Working With Kubernetes Services https://blog.tekspace.io/working-with-kubernetes-services/ https://blog.tekspace.io/working-with-kubernetes-services/#respond Mon, 26 Feb 2018 06:39:55 +0000 https://blog.tekspace.io/index.php/2018/02/26/working-with-kubernetes-services/ Kubernetes Services expose pods so they can be reached from within the cluster and, depending on the service type, from outside it. There are three types of services: ClusterIP, NodePort, and LoadBalancer. ClusterIP is the default internal cluster IP address, only accessible within the cluster and not reachable from an outside network. When you do kubectl get […]

The post Working With Kubernetes Services appeared first on TEKSpace Blog.

]]>
Kubernetes Services expose pods so that they can be reached from within the cluster and, depending on the service type, from outside it. There are three types of services:

ClusterIP
ClusterIP is the default internal cluster IP address assigned when a service is created. It is only accessible from within the cluster and cannot be reached from an outside network. When you run kubectl get services, the CLUSTER-IP column shows this address, which is created for every service regardless of its type.

NodePort
NodePort exposes the service on a port on every worker node in the Kubernetes cluster. You can define the port manually or let the cluster assign one dynamically. The same node port is opened on each worker node and must be unique for each service.

Load balancer
LoadBalancer is another layer on top of NodePort. When you create a LoadBalancer service, Kubernetes first allocates a NodePort and then provisions an external load balancer with a virtual IP. Load balancers are specific to the underlying platform and are available on providers such as Azure, GCP, AWS, OpenStack, and OpenShift.
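
For illustration, a minimal NodePort manifest might look like the example below; the web-nodeport name, the app=web selector, and the port numbers are made-up values:

apiVersion: v1
kind: Service
metadata:
  name: web-nodeport
spec:
  type: NodePort
  selector:
    app: web            # pods carrying this label receive the traffic
  ports:
  - port: 80            # cluster-internal service port
    targetPort: 8080    # container port on the pod
    nodePort: 30080     # opened on every worker node (default range is 30000-32767)

Changing type: NodePort to type: LoadBalancer in the same manifest additionally asks the cloud provider to provision an external load balancer in front of the node port.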

Managing Kubernetes Services

List Services

The below command will list all the services in the default namespace.

kubectl get services

Output:

[rahil@k8s-master-node ~]$ kubectl get services
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   6h

The below command will list all the services from all namespaces.

kubectl get services --all-namespaces

Output:

[rahil@k8s-master-node ~]$ kubectl get services --all-namespaces
NAMESPACE     NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE
default       kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP         6h
kube-system   kube-dns     ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP   6h

The post Working With Kubernetes Services appeared first on TEKSpace Blog.

]]>
https://blog.tekspace.io/working-with-kubernetes-services/feed/ 0
Kubernetes Command line https://blog.tekspace.io/kubernetes-command-line/ https://blog.tekspace.io/kubernetes-command-line/#respond Mon, 26 Feb 2018 03:38:31 +0000 https://blog.tekspace.io/index.php/2018/02/25/kubernetes-command-line/ To manage a Kubernetes cluster, you need to know some basic commands to be able to manage the cluster. Below are basic command lines to manage pods, nodes, services, etc. Pods kubectl get pods is a command-line tool used in Kubernetes, a popular container orchestration platform, to retrieve information about pods that are running within […]

The post Kubernetes Command line appeared first on TEKSpace Blog.

]]>
To manage a Kubernetes cluster, you need to know a few basic commands. Below are the basic command lines for managing pods, nodes, services, and more.

Pods

kubectl get pods is a command-line tool used in Kubernetes, a popular container orchestration platform, to retrieve information about pods that are running within a cluster. Pods are the smallest deployable units in Kubernetes and can contain one or more containers.

When you run the kubectl get pods command, it queries the Kubernetes API server for information about the pods within the current context or specified namespace. The output of this command will typically display a table containing various columns that provide information about each pod. The columns might include:

  1. NAME: The name of the pod.
  2. READY: This column shows the number of containers in the pod that are ready out of the total number of containers.
  3. STATUS: The current status of the pod, which can be “Running,” “Pending,” “Succeeded,” “Failed,” or “Unknown.”
  4. RESTARTS: The number of times the containers in the pod have been restarted.
  5. AGE: The amount of time that the pod has been running since its creation.
  6. IP: The IP address assigned to the pod within the cluster network.
  7. NODE: The name of the node where the pod is scheduled to run.

Here’s an example output of the kubectl get pods command:

kubectl get pods

Output:

[rahil@k8s-master-node ~]$ kubectl get pods
NAME                     READY     STATUS    RESTARTS   AGE
nginx-7587c6fdb6-dz7tj   1/1       Running   0          29m

The below command will watch for changes and print pod status updates to the console.

kubectl get pods -w

Output:

[rahil@k8s-master-node ~]$ kubectl get pods -w
NAME                     READY     STATUS              RESTARTS   AGE
nginx-7587c6fdb6-dz7tj   0/1       ContainerCreating   0          20s
nginx-7587c6fdb6-dz7tj   1/1       Running   0         58s
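
Pods can also be listed across every namespace, mirroring the services example later in this post:

kubectl get pods --all-namespaces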

If you would like to see more details about an individual pod, you can use kubectl describe.

kubectl describe pods <pod_name>

Output:

[rahil@k8s-master-node ~]$ kubectl describe pods nginx
Name:           nginx-7587c6fdb6-dz7tj
Namespace:      default
Node:           k8s-worker-node-1/192.168.0.226
Start Time:     Sun, 25 Feb 2018 17:06:46 -0500
Labels:         pod-template-hash=3143729862
                run=nginx
Annotations:    <none>
Status:         Running
IP:             10.244.1.2
Controlled By:  ReplicaSet/nginx-7587c6fdb6
Containers:
  nginx:
    Container ID:   docker://22a5e181b9c7b56351ccdc9ed41c1f6dfd776e56d45e464399d9a92479657a18
    Image:          nginx
    Image ID:       docker-pullable://docker.io/nginx@sha256:4771d09578c7c6a65299e110b3ee1c0a2592f5ea2618d23e4ffe7a4cab1ce5de
    Port:           80/TCP
    State:          Running
      Started:      Sun, 25 Feb 2018 17:07:42 -0500
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-jpflm (ro)
Conditions:
  Type           Status
  Initialized    True
  Ready          True
  PodScheduled   True
Volumes:
  default-token-jpflm:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-jpflm
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason                 Age   From                        Message
  ----    ------                 ----  ----                        -------
  Normal  Scheduled              6m    default-scheduler           Successfully assigned nginx-7587c6fdb6-dz7tj to k8s-worker-node-1
  Normal  SuccessfulMountVolume  6m    kubelet, k8s-worker-node-1  MountVolume.SetUp succeeded for volume "default-token-jpflm"
  Normal  Pulling                6m    kubelet, k8s-worker-node-1  pulling image "nginx"
  Normal  Pulled                 6m    kubelet, k8s-worker-node-1  Successfully pulled image "nginx"
  Normal  Created                6m    kubelet, k8s-worker-node-1  Created container
  Normal  Started                5m    kubelet, k8s-worker-node-1  Started container

Nodes

The below command will get all the nodes.

kubectl get nodes

Output:

[rahil@k8s-master-node ~]$ kubectl get nodes
NAME                STATUS    ROLES     AGE       VERSION
k8s-master-node     Ready     master    3h        v1.9.3
k8s-worker-node-1   Ready     <none>    2h        v1.9.3

The below command will provide additional information about each node, such as external IP, OS image, kernel version, and container runtime, in a wide table view.

kubectl get nodes -o wide

Output:

[rahil@k8s-master-node ~]$ kubectl get nodes -o wide
NAME                STATUS    ROLES     AGE       VERSION   EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION               CONTAINER-RUNTIME
k8s-master-node     Ready     master    3h        v1.9.3    <none>        CentOS Linux 7 (Core)   3.10.0-693.17.1.el7.x86_64   docker://1.12.6
k8s-worker-node-1   Ready     <none>    2h        v1.9.3    <none>        CentOS Linux 7 (Core)   3.10.0-693.17.1.el7.x86_64   docker://1.12.6

The below command will provide detailed information about a specific node.

kubectl describe nodes <node_name>

Output:

[rahil@k8s-master-node ~]$ kubectl describe nodes k8s-worker-node-1
Name:               k8s-worker-node-1
Roles:              <none>
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/hostname=k8s-worker-node-1
Annotations:        flannel.alpha.coreos.com/backend-data={"VtepMAC":"d2:bd:02:7a:cb:65"}
                    flannel.alpha.coreos.com/backend-type=vxlan
                    flannel.alpha.coreos.com/kube-subnet-manager=true
                    flannel.alpha.coreos.com/public-ip=192.168.0.226
                    node.alpha.kubernetes.io/ttl=0
                    volumes.kubernetes.io/controller-managed-attach-detach=true
Taints:             <none>
CreationTimestamp:  Sun, 25 Feb 2018 14:42:46 -0500
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  OutOfDisk        False   Sun, 25 Feb 2018 17:22:32 -0500   Sun, 25 Feb 2018 14:42:46 -0500   KubeletHasSufficientDisk     kubelet has sufficient disk space available
  MemoryPressure   False   Sun, 25 Feb 2018 17:22:32 -0500   Sun, 25 Feb 2018 14:42:46 -0500   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Sun, 25 Feb 2018 17:22:32 -0500   Sun, 25 Feb 2018 14:42:46 -0500   KubeletHasNoDiskPressure     kubelet has no disk pressure
  Ready            True    Sun, 25 Feb 2018 17:22:32 -0500   Sun, 25 Feb 2018 14:44:06 -0500   KubeletReady                 kubelet is posting ready status
Addresses:
  InternalIP:  192.168.0.226
  Hostname:    k8s-worker-node-1
Capacity:
 cpu:     2
 memory:  916556Ki
 pods:    110
Allocatable:
 cpu:     2
 memory:  814156Ki
 pods:    110
System Info:
 Machine ID:                 46a73b56be064f2fb8d81ac54f2e0349
 System UUID:                8601D8C9-C1D9-2C4B-B6C9-2A3D2A598E4A
 Boot ID:                    963f0791-c81c-4bb1-aecd-6bd079abea67
 Kernel Version:             3.10.0-693.17.1.el7.x86_64
 OS Image:                   CentOS Linux 7 (Core)
 Operating System:           linux
 Architecture:               amd64
 Container Runtime Version:  docker://1.12.6
 Kubelet Version:            v1.9.3
 Kube-Proxy Version:         v1.9.3
PodCIDR:                     10.244.1.0/24
ExternalID:                  k8s-worker-node-1
Non-terminated Pods:         (3 in total)
  Namespace                  Name                      CPU Requests  CPU Limits  Memory Requests  Memory Limits
  ---------                  ----                      ------------  ----------  ---------------  -------------
  default                    nginx-7587c6fdb6-dz7tj    0 (0%)        0 (0%)      0 (0%)           0 (0%)
  kube-system                kube-flannel-ds-wtdz4     100m (5%)     100m (5%)   50Mi (6%)        50Mi (6%)
  kube-system                kube-proxy-m2hfm          0 (0%)        0 (0%)      0 (0%)           0 (0%)
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  CPU Requests  CPU Limits  Memory Requests  Memory Limits
  ------------  ----------  ---------------  -------------
  100m (5%)     100m (5%)   50Mi (6%)        50Mi (6%)
Events:         <none>
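
If you only need a specific field rather than the full description, kubectl can reformat its output. For example, to print just the node names (a small JSONPath illustration):

kubectl get nodes -o jsonpath='{.items[*].metadata.name}'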

Services

The below command will give you all the available services in the default namespace.

kubectl get services

Output:

[rahil@k8s-master-node ~]$ kubectl get services
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   3h

The below command will give you all the services in all namespaces.

kubectl get services --all-namespaces

Output:

[rahil@k8s-master-node ~]$ kubectl get services --all-namespaces
NAMESPACE     NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE
default       kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP         3h
kube-system   kube-dns     ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP   3h
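
Just as with pods and nodes, an individual service can be inspected in detail with describe:

kubectl describe services <service_name>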

The post Kubernetes Command line appeared first on TEKSpace Blog.

]]>
https://blog.tekspace.io/kubernetes-command-line/feed/ 0