Kubernetes Archives - TEKSpace Blog
Tech tutorials for Linux, Kubernetes, PowerShell, and Azure
https://blog.tekspace.io/tag/kubernetes/

How to create docker registry credentials using kubectl
https://blog.tekspace.io/how-to-create-docker-registry-credentials-using-kubectl/
Updating a Docker registry secret (often named regcred in Kubernetes environments) with new credentials can be essential for workflows that need access to private registries for pulling images. This process involves creating a new secret with the updated credentials and then patching or updating the deployments or pods that use this secret.

Here’s a step-by-step guide to do it:

Step 1: Create a New Secret with Updated Credentials

  1. Log in to Docker Registry: Before updating the secret, make sure you can log in to the Docker registry from your command line interface, so you know the credentials you are about to store are valid.
  2. Create or Update the Secret: Use the kubectl create secret command to create a new secret or update an existing one with your Docker credentials. If you're updating an existing secret, you might need to delete the old secret first. To create a new secret (or replace an existing one):
kubectl create secret docker-registry regcred \
  --docker-server=<YOUR_REGISTRY_SERVER> \
  --docker-username=<YOUR_USERNAME> \
  --docker-password=<YOUR_PASSWORD> \
  --docker-email=<YOUR_EMAIL> \
  --namespace=<NAMESPACE> \
  --dry-run=client -o yaml | kubectl apply -f -

Replace <YOUR_REGISTRY_SERVER>, <YOUR_USERNAME>, <YOUR_PASSWORD>, <YOUR_EMAIL>, and <NAMESPACE> with your Docker registry details and the appropriate namespace. The --dry-run=client -o yaml | kubectl apply -f - part generates the secret definition and applies it to your cluster, effectively updating the secret if it already exists.
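If you want to confirm what was stored, you can read the secret back and decode the embedded Docker config. A quick check, assuming the regcred name and <NAMESPACE> placeholder from above:

kubectl get secret regcred --namespace=<NAMESPACE> \
  --output="jsonpath={.data.\.dockerconfigjson}" | base64 --decode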

Step 2: Update Deployments or Pods to Use the New Secret

If you’ve created a new secret with a different name, you’ll need to update your deployment or pod specifications to reference the new secret name. This step is unnecessary if you’ve updated an existing secret.

  1. Edit Deployment or Pod Specification: Locate your deployment or pod definition files (YAML files) and update the imagePullSecrets section to reference the new secret name if it has changed (see the sketch after this list).
  2. Apply the Changes: Use kubectl apply -f <deployment-or-pod-file>.yaml to apply the changes to your cluster.
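For reference, this is roughly where imagePullSecrets sits in a Deployment spec. A minimal sketch; the my-app name and image are placeholders, and regcred must match the secret created in Step 1:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      imagePullSecrets:
        - name: regcred                                  # the docker-registry secret
      containers:
        - name: my-app
          image: <YOUR_REGISTRY_SERVER>/my-app:latest    # private image pulled with regcred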

Step 3: Verify the Update

Ensure that your deployments or pods can successfully pull images using the updated credentials.

  1. Check Pod Status: Use kubectl get pods to check the status of your pods. Ensure they are running and not stuck in an ImagePullBackOff or similar error status due to authentication issues.
  2. Check Logs: For further verification, check the logs of your pods or deployments to ensure there are no errors related to pulling images from the Docker registry. You can use kubectl logs <pod-name> to view logs. A short troubleshooting sketch follows this list.
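A minimal troubleshooting pass, assuming the default namespace; the Events section of kubectl describe is usually where image pull authentication errors show up:

kubectl get pods
kubectl describe pod <pod-name>    # check the Events section for pull errors
kubectl logs <pod-name>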

This method ensures that your Kubernetes deployments can continue to pull images from private registries without interruption, using the updated credentials.

Deploying Kubernetes Dashboard in K3S Cluster
https://blog.tekspace.io/deploying-kubernetes-dashboard-in-k3s-cluster/
Get the latest Kubernetes Dashboard and deploy
GITHUB_URL=https://github.com/kubernetes/dashboard/releases
VERSION_KUBE_DASHBOARD=$(curl -w '%{url_effective}' -I -L -s -S ${GITHUB_URL}/latest -o /dev/null | sed -e 's|.*/||')
sudo k3s kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/${VERSION_KUBE_DASHBOARD}/aio/deploy/recommended.yaml

Create service account and role

In dashboard.admin-user.yml, enter the following values:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard

In dashboard.admin-user-role.yml, enter the following values:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard

Now apply changes to deploy it to K3S cluster:

sudo k3s kubectl create -f dashboard.admin-user.yml -f dashboard.admin-user-role.yml

Expose service as NodePort to access from browser

sudo k3s kubectl edit svc kubernetes-dashboard -n kubernetes-dashboard

In edit mode, change type: ClusterIP to type: NodePort and save. Your file should look like the one below:

apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2020-10-27T14:32:58Z"
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
  resourceVersion: "72638"
  selfLink: /api/v1/namespaces/kubernetes-dashboard/services/kubernetes-dashboard
  uid: 8282464c-607f-4e40-ad5c-ee781e83d5f0
spec:
  clusterIP: 10.43.210.41
  externalTrafficPolicy: Cluster
  ports:
  - nodePort: 30353
    port: 443
    protocol: TCP
    targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
  sessionAffinity: None
  type: NodePort
status:
  loadBalancer: {}

You can get the port number by executing the below command:

sudo k3s kubectl get svc kubernetes-dashboard -n kubernetes-dashboard
NAME                   TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)         AGE
kubernetes-dashboard   NodePort   10.43.210.41   <none>        443:30353/TCP   3h39m

In my Kubernetes cluster, I received port number 30353, as shown in the above output. In your case, it might be different. This port is exposed on every worker and master node. You can browse to one of your worker node IP addresses with the port at the end, and you will see a login page.
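If you are not sure which IP addresses your nodes use, you can list them along with their roles. A quick check:

sudo k3s kubectl get nodes -o wide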

Get token of a service account

sudo k3s kubectl -n kubernetes-dashboard describe secret admin-user-token

It will output a token in your console. Grab that token and insert it into the token input box.
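Note that on newer clusters (Kubernetes 1.24 and later), token secrets are no longer created automatically for service accounts, so the describe command above may find nothing. In that case you can request a short-lived token instead. A sketch, assuming a 1.24+ kubectl/k3s:

sudo k3s kubectl -n kubernetes-dashboard create token admin-user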

My Dashboard link: https://192.168.1.21:30353

All done!

Learn about CP command in Linux
https://blog.tekspace.io/learn-about-cp-command-in-linux/
The cp command in Linux is used to copy files and directories from one location to another. It stands for “copy” and is an essential tool for managing and duplicating files. The basic syntax of the cp command is as follows:

cp [options] source destination

Here, source represents the file or directory you want to copy, and destination is where you want to copy it to. Here’s a detailed explanation of the command and its options:

Options:

  • -r or --recursive: This option is used when you want to copy directories and their contents recursively. Without this option, the cp command will not copy directories.
  • -i or --interactive: This option prompts you before overwriting an existing destination file. It’s useful to prevent accidental overwrites.
  • -u or --update: This option copies only when the source file is newer than the destination file or when the destination file is missing.
  • -v or --verbose: This option displays detailed information about the files being copied, showing each file as it’s copied.
  • -p or --preserve: This option preserves the original file attributes such as permissions, timestamps, and ownership when copying.
  • -d or --no-dereference: This option is used to avoid dereferencing symbolic links; it copies symbolic links themselves instead of their targets.
  • --parents: This option preserves the directory structure when copying files, creating any necessary parent directories in the destination.
  • --backup: This option makes a backup copy of each existing destination file before overwriting it.

Source and Destination:

  • source: This is the path to the file or directory you want to copy. It can be either an absolute path or a relative path.
  • destination: This is the path to the location where you want to copy the source. It can also be either an absolute path or a relative path. If the destination is a directory, the source file or directory will be copied into it.

Examples:

Copy a file to a different location:

cp file.txt /path/to/destination/

Copy a directory and its contents recursively:

cp -r directory/ /path/to/destination/

Copy multiple files to a directory:

cp file1.txt file2.txt /path/to/destination/

Copy with preserving attributes and prompting for overwrite:

cp -i -p file.txt /path/to/destination/

Copy a file, preserving directory structure:

cp --parents file.txt /path/to/destination/

Copy and create backup copies of existing files:

cp --backup=numbered file.txt /path/to/destination/

Remember that improper use of the cp command can lead to data loss, so be cautious; options like -i and -u exist precisely to help prevent accidental overwrites. Always double-check your commands before pressing Enter.

Additional Examples:

Copy a File to a Different Location:

cp file.txt /path/to/destination/

Copy Multiple Files to a Directory:

cp file1.txt file2.txt /path/to/destination/

Copy Files and Preserve Attributes:

cp -p file.txt /path/to/destination/

Copy a File and Prompt Before Overwriting:

cp -i file.txt /path/to/destination/

Copy Only Newer Files:

cp -u newer.txt /path/to/destination/

Copy Files Verbosely (Display Detailed Information):

cp -v file1.txt file2.txt /path/to/destination/

Copy Files and Create Backup Copies:

cp --backup=numbered file.txt /path/to/destination/

Copy Symbolic Links (Not Dereferencing):

cp -d symlink.txt /path/to/destination/

Copy a File and Preserve Parent Directories:

cp --parents dir/file.txt /path/to/destination/

Copy a Directory and Its Contents to a New Directory:

cp -r directory/ new_directory/

Copy Files Using Wildcards:

cp *.txt /path/to/destination/

Copy Hidden Files:

cp -r source_directory/. destination_directory/

Copy Files Between Remote Servers (using SCP):

scp source_file remote_username@remote_host:/path/to/destination/

Copy Files Between Remote Servers (using SSH and tar):

tar cf - source_directory/ | ssh remote_host "cd /path/to/destination/ && tar xf -"

Copy Files Using Absolute Paths:

cp /absolute/path/to/source/file.txt /absolute/path/to/destination/

Copy Files with Progress Indicator (using rsync):

rsync -av --progress source/ /path/to/destination/

These examples should provide you with a variety of scenarios where the cp command can be used to copy files and directories in Linux. Remember to adjust the paths and options according to your specific needs.

Setup Kubernetes Cluster using K3S, MetalLB, LetsEncrypt on Bare Metal
https://blog.tekspace.io/setup-kubernetes-cluster-using-k3s-metallb-letsencrypt-on-bare-metal/
Setup K3S Cluster

By default, Rancher K3S comes with Traefik 1.7. We will set up K3S without the Traefik ingress in this tutorial.

  1. Execute the below command on master node 1:
curl -sfL https://get.k3s.io | sh -s - server   --datastore-endpoint="mysql://user:pass@tcp(ip_address:3306)/databasename" --disable traefik --node-taint CriticalAddonsOnly=true:NoExecute --tls-san 192.168.1.2 --tls-san k3s.home.lab

Execute the above command on master node 2 to set up HA.
Validate the cluster setup:

$ sudo kubectl get node
NAME           STATUS   ROLES    AGE    VERSION
k3s-master-1   Ready    master   3m9s   v1.18.9+k3s1

Make sure you have HAProxy set up:

##########################################################
#               Kubernetes API LB
##########################################################
frontend kubernetes-frontend
    bind 192.168.1.2:6443
    mode tcp
    option tcplog
    default_backend kubernetes-backend

backend kubernetes-backend
    mode tcp
    option tcp-check
    balance roundrobin
    server k3s-master-1 192.168.1.10:6443 check fall 3 rise 2
    server k3s-master-2 192.168.1.20:6443 check fall 3 rise 2
  2. Join worker nodes to K3S Cluster
    Get the node token from one of the master nodes by executing the below command:
sudo cat /var/lib/rancher/k3s/server/node-token
K105c8c5de8deac516ebgd454r45547481d70625ee3e5200acdbe8ea071191debd4::server:gd5de354807077fde4259fd9632ea045454

We will use the above output value to join the worker nodes:

curl -sfL https://get.k3s.io | K3S_URL=https://192.168.1.2:6443 K3S_TOKEN={{USE_TOKEN_FROM_ABOVE}} sh -
  3. Validate K3S cluster state:
NAME                STATUS   ROLES    AGE     VERSION
k3s-master-1        Ready    master   15m     v1.18.9+k3s1
k3s-worker-node-1   Ready    <none>   3m44s   v1.18.9+k3s1
k3s-worker-node-2   Ready    <none>   2m52s   v1.18.9+k3s1
k3s-master-2        Ready    master   11m     v1.18.9+k3s1

MetalLB Setup

kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.4/manifests/namespace.yaml
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.4/manifests/metallb.yaml
kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"

Create a file called metallb-config.yaml and enter the following values:

apiVersion: v1
kind: ConfigMap
metadata:
    namespace: metallb-system
    name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250

Apply changes:

sudo kubectl apply -f metallb-config.yaml
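Note that newer MetalLB releases (v0.13 and later) replaced this ConfigMap with CRDs. If you install a recent version, the equivalent configuration is roughly the sketch below; the resource names are my own:

apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default
  namespace: metallb-system
spec:
  addresses:
    - 192.168.1.240-192.168.1.250
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default
  namespace: metallb-system
spec:
  ipAddressPools:
    - default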

Deploy sample application with service

kubectl create deploy nginx --image nginx
kubectl expose deploy nginx --port 80

Check status:

$ kubectl get svc,pods
NAME                                              TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)                      AGE
service/kubernetes                                ClusterIP      10.43.0.1       <none>          443/TCP                      44m
service/nginx                                     ClusterIP      10.43.14.116    <none>          80/TCP                       31s

NAME                                                 READY   STATUS    RESTARTS   AGE
pod/nginx-f89759699-25lpb                            1/1     Running   0          59s
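The nginx service above is ClusterIP only. If you want to confirm MetalLB is handing out external addresses before moving on to ingress, you can expose a test service as a LoadBalancer instead. A quick sketch; the nginx-lb name is just for this test:

kubectl expose deploy nginx --port 80 --type LoadBalancer --name nginx-lb
kubectl get svc nginx-lb    # EXTERNAL-IP should come from the 192.168.1.240-250 pool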

Nginx Ingress setup

In this tutorial, I will be using helm to set up the Nginx ingress controller.

  1. Execute the following commands to set up Nginx ingress from a client machine with helm and kubectl configured:
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install home ingress-nginx/ingress-nginx

Check Ingress controller status:

kubectl --namespace default get services -o wide -w home-ingress-nginx-controller
  2. Set up Ingress by creating home-ingress.yaml and adding the below values. Replace example.io with your own domain:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
  name: home-ingress
  namespace: default
spec:
  rules:
    - host: example.io
      http:
        paths:
          - backend:
              serviceName: nginx
              servicePort: 80
            path: /

Execute command to apply:

 kubectl apply -f home-ingress.yaml

Check Status on Ingress with `kubectl get ing` command:

$ kubectl get ing
NAME           CLASS    HOSTS           ADDRESS         PORTS   AGE
home-ingress   <none>   example.io   192.168.1.240   80      8m26s
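Before adding TLS, you can verify that the ingress is routing correctly by sending a request to the MetalLB address with the expected Host header (assuming the 192.168.1.240 address from the output above):

curl -H "Host: example.io" http://192.168.1.240/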

Letsencrypt setup

  1. Execute the below command to create namespaces, pods, and other related configurations:
kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.0.3/cert-manager.yaml

Once the above completes, let's validate the pod status.
2. Validate setup:

$ kubectl get pods --namespace cert-manager
NAME                                       READY   STATUS    RESTARTS   AGE
cert-manager-cainjector-76c9d55b6f-cp2jf   1/1     Running   0          39s
cert-manager-79c5f9946-qkfzv               1/1     Running   0          38s
cert-manager-webhook-6d4c5c44bb-4mdgc      1/1     Running   0          38s
  3. Set up the staging environment by applying the changes below. Update the email:
vi staging_issuer.yaml

and paste the below values and save the file:

apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
 name: letsencrypt-staging
spec:
 acme:
   # The ACME server URL
   server: https://acme-staging-v02.api.letsencrypt.org/directory
   # Email address used for ACME registration
   email: john@example.com
   # Name of a secret used to store the ACME account private key
   privateKeySecretRef:
     name: letsencrypt-staging
   # Enable the HTTP-01 challenge provider
   solvers:
   - http01:
       ingress:
         class:  nginx

Apply changes:

kubectl apply -f staging_issuer.yaml

We will apply the production issuer later in this tutorial. We should first test the SSL settings before switching over to production certificates.
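Before requesting a certificate, you can also confirm that the staging Issuer registered an ACME account successfully. A quick check:

kubectl get issuer letsencrypt-staging
kubectl describe issuer letsencrypt-staging    # look for the Ready condition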

SSL setup with LetsEncrypt and Nginx Ingress

Before proceeding, please make sure your DNS is set up correctly with your cloud provider or in your home lab to allow traffic from the internet. LetsEncrypt uses HTTP validation to issue certificates, and it needs to reach the DNS name from which the cert request was initiated.

Create new ingress file as shown below:

vi home-ingress-ssl.yaml

Copy and paste in above file:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/issuer: letsencrypt-staging
  name: home-ingress
  namespace: default
spec:
  tls:
  - hosts:
    - example.io
    secretName: home-example-io-tls
  rules:
  - host: example.io
    http:
      paths:
      - backend:
          serviceName: nginx
          servicePort: 80
        path: /

Apply changes:

kubectl apply -f home-ingress-ssl.yaml

Validate certificate creation:

kubectl describe certificate
Spec:
  Dns Names:
    example.io
  Issuer Ref:
    Group:      cert-manager.io
    Kind:       Issuer
    Name:       letsencrypt-staging
  Secret Name:  home-example-io-tls
Status:
  Conditions:
    Last Transition Time:        2020-10-26T20:19:15Z
    Message:                     Issuing certificate as Secret does not exist
    Reason:                      DoesNotExist
    Status:                      False
    Type:                        Ready
    Last Transition Time:        2020-10-26T20:19:18Z
    Message:                     Issuing certificate as Secret does not exist
    Reason:                      DoesNotExist
    Status:                      True
    Type:                        Issuing
  Next Private Key Secret Name:  home-example-io-tls-76dqg
Events:
  Type    Reason     Age   From          Message
  ----    ------     ----  ----          -------
  Normal  Issuing    10s   cert-manager  Issuing certificate as Secret does not exist
  Normal  Generated  8s    cert-manager  Stored new private key in temporary Secret resource "home-example-io-tls-76dqg"
  Normal  Requested  4s    cert-manager  Created new CertificateRequest resource "home-example-io-tls-h98zf"

Now you can browse to your DNS URL and inspect the certificate. If the site is serving the staging certificate, your LetsEncrypt certificate management has been set up successfully.
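If the certificate stays in a not-ready state instead, cert-manager's intermediate resources usually show why. A quick troubleshooting pass using the resource kinds cert-manager provides:

kubectl get certificate,certificaterequest,order,challenge
kubectl describe challenge    # shows the HTTP-01 self-check and any validation errors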

Set up the production issuer to get a valid certificate

Create the production issuer:

vi production-issuer.yaml

Copy and paste the below values into the above file. Update email:

apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
 name: letsencrypt-prod
spec:
 acme:
   # The ACME server URL
   server: https://acme-v02.api.letsencrypt.org/directory
   # Email address used for ACME registration
   email: user@example.com
   # Name of a secret used to store the ACME account private key
   privateKeySecretRef:
     name: letsencrypt-prod
   # Enable the HTTP-01 challenge provider
   solvers:
   - http01:
       ingress:
         class:  nginx

Apply changes:

kubectl apply -f production-issuer.yaml

Update the home-ingress-ssl.yaml file you created earlier with the below values:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/issuer: letsencrypt-prod
  name: home-ingress
  namespace: default
spec:
  tls:
  - hosts:
    - example.io
    secretName: home-example-io-tls
  rules:
  - host: example.io
    http:
      paths:
      - backend:
          serviceName: nginx
          servicePort: 80
        path: /

Apply changes:

kubectl apply -f home-ingress-ssl.yaml

Validate changes:

NOTE: Give it some time, as it may take 2-5 minutes for the cert request to complete.

kubectl describe certificate

Once a valid certificate has been issued, your output should look something like the below.

Spec:
  Dns Names:
    example.io
  Issuer Ref:
    Group:      cert-manager.io
    Kind:       Issuer
    Name:       letsencrypt-prod
  Secret Name:  home-example-io-tls
Status:
  Conditions:
    Last Transition Time:  2020-10-26T20:43:35Z
    Message:               Certificate is up to date and has not expired
    Reason:                Ready
    Status:                True
    Type:                  Ready
  Not After:               2021-01-24T19:43:25Z
  Not Before:              2020-10-26T19:43:25Z
  Renewal Time:            2020-12-25T19:43:25Z
  Revision:                2
Events:
  Type    Reason     Age                From          Message
  ----    ------     ----               ----          -------
  Normal  Issuing    24m                cert-manager  Issuing certificate as Secret does not exist
  Normal  Generated  24m                cert-manager  Stored new private key in temporary Secret resource "home-example-io-tls-76dqg"
  Normal  Requested  24m                cert-manager  Created new CertificateRequest resource "home-example-io-tls-h98zf"
  Normal  Issuing    105s               cert-manager  Issuing certificate as Secret was previously issued by Issuer.cert-manager.io/letsencrypt-staging
  Normal  Reused     103s               cert-manager  Reusing private key stored in existing Secret resource "home-example-io-tls"
  Normal  Requested  100s               cert-manager  Created new CertificateRequest resource "home-example-io-tls-ccxgf"
  Normal  Issuing    30s (x2 over 23m)  cert-manager  The certificate has been successfully issued

Browse your application and check for a valid certificate. If the browser shows a trusted certificate, you have successfully requested a valid certificate from the LetsEncrypt certificate authority.

Raspberry Pi 4 K3S cluster setup
https://blog.tekspace.io/raspberry-pi-4-k3s-cluster-setup/
In this tutorial, I will show you how to set up a lightweight Kubernetes cluster using rancher k3s.

My current Raspberry Pi 4 configuration:

Hostname             RAM    CPU   Disk    IP Address
k3s-master-1         8 GB   4     64 GB   192.168.1.10
k3s-worker-node-1    8 GB   4     64 GB   192.168.1.11
k3s-worker-node-2    8 GB   4     64 GB   192.168.1.12

Prerequisite

  1. Ubuntu OS 20.04. Follow the guide here.
  2. Assign a static IP address to the Pi.
sudo vi /etc/netplan/50-cloud-init.yaml

set values as shown below:

  • Set dhcp4 from true to no.
  • Add the addresses, gateway4, and nameservers lines below dhcp4 as shown, change the IP addresses to match your network setup, and save the file.
network:
    ethernets:
        eth0:
            dhcp4: no
            addresses:
              - 192.168.1.10/24
            gateway4: 192.168.1.1
            nameservers:
              addresses: [192.168.1.1, 8.8.8.8, 1.1.1.1]
            match:
                driver: bcmgenet smsc95xx lan78xx
            optional: true
            set-name: eth0
    version: 2

Apply changes by executing the below command to set static IP:

sudo netplan apply

NOTE: Once you apply changes, your ssh session will be interrupted. Make sure to reconnect using ssh.

  3. Open /boot/firmware/cmdline.txt with sudo vi, append the below values, and apply this same change on each master and worker node:
cgroup_enable=cpuset cgroup_enable=memory cgroup_memory=1

Mine looks like below.

net.ifnames=0 dwc_otg.lpm_enable=0 console=serial0,115200 console=tty1 root=LABEL=writable rootfstype=ext4 elevator=deadline rootwait fixrtc cgroup_enable=cpuset cgroup_enable=memory cgroup_memory=1
  4. Reboot:
sudo init 6

Getting started with HA setup for Master nodes

Before we proceed with the tutorial, please make sure you have a MySQL database configured. We will use MySQL to set up HA.

On each master node, execute the below command to set up k3s cluster.

curl -sfL https://get.k3s.io | sh -s - server   --datastore-endpoint="mysql://username:password@tcp(192.168.1.50:3306)/kdb"

Once the installation is completed, execute the below command to see the status of the k3s service:

sudo systemctl status k3s

Loaded: loaded (/etc/systemd/system/k3s.service; enabled; vendor preset: enabled)
     Active: active (running) since Sun 2020-10-25 04:38:00 UTC; 32s ago
       Docs: https://k3s.io
    Process: 1696 ExecStartPre=/sbin/modprobe br_netfilter (code=exited, status=0/SUCCESS)
    Process: 1720 ExecStartPre=/sbin/modprobe overlay (code=exited, status=0/SUCCESS)
   Main PID: 1725 (k3s-server)
      Tasks: 13
     Memory: 425.1M
     CGroup: /system.slice/k3s.service
             ├─1725 /usr/local/bin/k3s server --datastore-endpoint=mysql://username:password@tcp(192.168.1.50:3306)/kdb
             └─1865 containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd -->

Joining worker nodes to k3s cluster

  1. First, we need to get a token from the master node. Go to your master node and enter the below command to get token:
sudo cat /var/lib/rancher/k3s/server/node-token
  2. Second, we are ready to join all the nodes to the cluster by entering the below command and replacing the XXX token with your own:
curl -sfL https://get.k3s.io | K3S_URL=https://192.168.1.10:6443 K3S_TOKEN=XXX sh -
  3. Check the status of your worker nodes from the master node by executing the below command:
sudo kubectl get nodes
ubuntu@k3s-master-1:~$ sudo kubectl get nodes
NAME                STATUS   ROLES    AGE   VERSION
k3s-worker-node-2   Ready    <none>   28m   v1.18.9+k3s1
k3s-master-1        Ready    master   9h    v1.18.9+k3s1
k3s-worker-node-1   Ready    <none>   32m   v1.18.9+k3s1
k3s-master-2        Ready    master   9h    v1.18.9+k3s1
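Optionally, you can label the worker nodes so the ROLES column no longer shows <none>. This is purely cosmetic; a sketch, run from a master node:

sudo kubectl label node k3s-worker-node-1 node-role.kubernetes.io/worker=worker
sudo kubectl label node k3s-worker-node-2 node-role.kubernetes.io/worker=worker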

Rancher Kubernetes Single Node Setup
https://blog.tekspace.io/rancher-kubernetes-single-node-setup/
Pre-req
  1. VM requirements
    • One Ubuntu 20.04 VM node where RKE Cluster will be running.
    • One Ubuntu 20.04 host node where RKE CLI will be configured to use to setup cluster.
  2. Disable swap and firewall
sudo ufw disable
sudo swapoff -a; sudo sed -i '/swap/d' /etc/fstab
  3. Update sysctl settings
cat <<EOF | sudo tee -a /etc/sysctl.d/kubernetes.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sudo sysctl --system
  4. Docker installed on all nodes
    • Login to Ubuntu VM with your sudo account.
    • Execute the following commands:
sudo apt-get update
sudo apt-get upgrade
sudo curl https://releases.rancher.com/install-docker/19.03.sh | sh
  5. New User and add to Docker group
sudo adduser rkeuser
sudo passwd rkeuser >/dev/null 2>&1
sudo usermod -aG docker rkeuser
  6. SSH Key Gen and copy keys
ssh-keygen -t rsa -b 2048
ssh-copy-id rkeuser@192.168.1.188

Download rke package and set executable permissions

wget https://github.com/rancher/rke/releases/download/v1.1.0/rke_linux-amd64
sudo cp rke_linux-amd64 /usr/local/bin/rke
sudo chmod +x /usr/local/bin/rke

RKE Cluster setup

First, we must set up the RKE cluster configuration file that will be used to deploy the cluster to the RKE node. Continue through the interactive prompts to configure a single-node cluster.

rke config
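For reference, the generated cluster.yml for a single node ends up looking roughly like the sketch below (heavily trimmed; rke config fills in many more defaults, and the address and user shown are the ones from the pre-req steps):

nodes:
  - address: 192.168.1.188
    user: rkeuser
    role:
      - controlplane
      - etcd
      - worker
    ssh_key_path: ~/.ssh/id_rsa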

Run the below command to set up the RKE cluster:

rke up

Output:

INFO[0000] Running RKE version: v1.1.9
INFO[0000] Initiating Kubernetes cluster
INFO[0000] [dialer] Setup tunnel for host [192.168.1.188] 
INFO[0000] Checking if container [cluster-state-deployer] is running on host [192.168.1.188], try #1 
INFO[0000] Pulling image [rancher/rke-tools:v0.1.65] on host [192.168.1.188], try #1 
INFO[0005] Image [rancher/rke-tools:v0.1.65] exists on host [192.168.1.188] 
INFO[0005] Starting container [cluster-state-deployer] on host [192.168.1.188], try #1 
INFO[0005] [state] Successfully started [cluster-state-deployer] container on host [192.168.1.188] 
INFO[0005] [certificates] Generating CA kubernetes certificates 
INFO[0005] [certificates] Generating Kubernetes API server aggregation layer requestheader client CA certificates 
INFO[0006] [certificates] GenerateServingCertificate is disabled, checking if there are unused kubelet certificates 
INFO[0006] [certificates] Generating Kubernetes API server certificates
INFO[0006] [certificates] Generating Service account token key 
INFO[0006] [certificates] Generating Kube Controller certificates
INFO[0006] [certificates] Generating Kube Scheduler certificates 
INFO[0006] [certificates] Generating Kube Proxy certificates
INFO[0006] [certificates] Generating Node certificate
INFO[0006] [certificates] Generating admin certificates and kubeconfig
INFO[0006] [certificates] Generating Kubernetes API server proxy client certificates 
INFO[0006] [certificates] Generating kube-etcd-192-168-1-188 certificate and key
INFO[0006] Successfully Deployed state file at [./cluster.rkestate] 
INFO[0006] Building Kubernetes cluster
INFO[0006] [dialer] Setup tunnel for host [192.168.1.188]
INFO[0007] [network] Deploying port listener containers 
INFO[0007] Image [rancher/rke-tools:v0.1.65] exists on host [192.168.1.188]
INFO[0007] Starting container [rke-etcd-port-listener] on host [192.168.1.188], try #1 
INFO[0007] [network] Successfully started [rke-etcd-port-listener] container on host [192.168.1.188] 
INFO[0007] Image [rancher/rke-tools:v0.1.65] exists on host [192.168.1.188]
INFO[0007] Starting container [rke-cp-port-listener] on host [192.168.1.188], try #1 
INFO[0008] [network] Successfully started [rke-cp-port-listener] container on host [192.168.1.188] 
INFO[0008] Image [rancher/rke-tools:v0.1.65] exists on host [192.168.1.188]
INFO[0008] Starting container [rke-worker-port-listener] on host [192.168.1.188], try #1 
INFO[0008] [network] Successfully started [rke-worker-port-listener] container on host [192.168.1.188] 
INFO[0008] [network] Port listener containers deployed successfully
INFO[0008] [network] Running control plane -> etcd port checks
INFO[0008] Image [rancher/rke-tools:v0.1.65] exists on host [192.168.1.188]
INFO[0008] Starting container [rke-port-checker] on host [192.168.1.188], try #1 
INFO[0009] [network] Successfully started [rke-port-checker] container on host [192.168.1.188] 
INFO[0009] Removing container [rke-port-checker] on host [192.168.1.188], try #1 
INFO[0009] [network] Running control plane -> worker port checks 
INFO[0009] Image [rancher/rke-tools:v0.1.65] exists on host [192.168.1.188]
INFO[0009] Starting container [rke-port-checker] on host [192.168.1.188], try #1 
INFO[0009] [network] Successfully started [rke-port-checker] container on host [192.168.1.188] 
INFO[0009] Removing container [rke-port-checker] on host [192.168.1.188], try #1 
INFO[0009] [network] Running workers -> control plane port checks 
INFO[0009] Image [rancher/rke-tools:v0.1.65] exists on host [192.168.1.188]
INFO[0009] Starting container [rke-port-checker] on host [192.168.1.188], try #1 
INFO[0009] [network] Successfully started [rke-port-checker] container on host [192.168.1.188] 
INFO[0009] Removing container [rke-port-checker] on host [192.168.1.188], try #1 
INFO[0009] [network] Checking KubeAPI port Control Plane hosts
INFO[0009] [network] Removing port listener containers  
INFO[0009] Removing container [rke-etcd-port-listener] on host [192.168.1.188], try #1
INFO[0010] [remove/rke-etcd-port-listener] Successfully removed container on host [192.168.1.188] 
INFO[0010] Removing container [rke-cp-port-listener] on host [192.168.1.188], try #1
INFO[0010] [remove/rke-cp-port-listener] Successfully removed container on host [192.168.1.188]
INFO[0010] Removing container [rke-worker-port-listener] on host [192.168.1.188], try #1
INFO[0010] [remove/rke-worker-port-listener] Successfully removed container on host [192.168.1.188]
INFO[0010] [network] Port listener containers removed successfully
INFO[0010] [certificates] Deploying kubernetes certificates to Cluster nodes
INFO[0010] Checking if container [cert-deployer] is running on host [192.168.1.188], try #1
INFO[0010] Image [rancher/rke-tools:v0.1.65] exists on host [192.168.1.188]
INFO[0010] Starting container [cert-deployer] on host [192.168.1.188], try #1 
INFO[0010] Checking if container [cert-deployer] is running on host [192.168.1.188], try #1 
INFO[0015] Checking if container [cert-deployer] is running on host [192.168.1.188], try #1 
INFO[0015] Removing container [cert-deployer] on host [192.168.1.188], try #1 
INFO[0015] [reconcile] Rebuilding and updating local kube config 
INFO[0015] Successfully Deployed local admin kubeconfig at [./kube_config_cluster.yml]
INFO[0015] [certificates] Successfully deployed kubernetes certificates to Cluster nodes
INFO[0015] [file-deploy] Deploying file [/etc/kubernetes/audit-policy.yaml] to node [192.168.1.188]
INFO[0015] Image [rancher/rke-tools:v0.1.65] exists on host [192.168.1.188]
INFO[0016] Starting container [file-deployer] on host [192.168.1.188], try #1
INFO[0016] Successfully started [file-deployer] container on host [192.168.1.188] 
INFO[0016] Waiting for [file-deployer] container to exit on host [192.168.1.188]
INFO[0016] Waiting for [file-deployer] container to exit on host [192.168.1.188]
INFO[0016] Container [file-deployer] is still running on host [192.168.1.188]: stderr: [], stdout: []
INFO[0017] Waiting for [file-deployer] container to exit on host [192.168.1.188] 
INFO[0017] Removing container [file-deployer] on host [192.168.1.188], try #1
INFO[0017] [remove/file-deployer] Successfully removed container on host [192.168.1.188] 
INFO[0017] [/etc/kubernetes/audit-policy.yaml] Successfully deployed audit policy file to Cluster control nodes
INFO[0017] [reconcile] Reconciling cluster state
INFO[0017] [reconcile] This is newly generated cluster
INFO[0017] Pre-pulling kubernetes images
INFO[0017] Pulling image [rancher/hyperkube:v1.18.9-rancher1] on host [192.168.1.188], try #1
INFO[0047] Image [rancher/hyperkube:v1.18.9-rancher1] exists on host [192.168.1.188] 
INFO[0047] Kubernetes images pulled successfully
INFO[0047] [etcd] Building up etcd plane..
INFO[0047] Image [rancher/rke-tools:v0.1.65] exists on host [192.168.1.188]
INFO[0047] Starting container [etcd-fix-perm] on host [192.168.1.188], try #1 
INFO[0047] Successfully started [etcd-fix-perm] container on host [192.168.1.188] 
INFO[0047] Waiting for [etcd-fix-perm] container to exit on host [192.168.1.188]
INFO[0047] Waiting for [etcd-fix-perm] container to exit on host [192.168.1.188]
INFO[0047] Container [etcd-fix-perm] is still running on host [192.168.1.188]: stderr: [], stdout: []
INFO[0048] Waiting for [etcd-fix-perm] container to exit on host [192.168.1.188] 
INFO[0048] Removing container [etcd-fix-perm] on host [192.168.1.188], try #1
INFO[0048] [remove/etcd-fix-perm] Successfully removed container on host [192.168.1.188] 
INFO[0048] Pulling image [rancher/coreos-etcd:v3.4.3-rancher1] on host [192.168.1.188], try #1
INFO[0051] Image [rancher/coreos-etcd:v3.4.3-rancher1] exists on host [192.168.1.188] 
INFO[0051] Starting container [etcd] on host [192.168.1.188], try #1 
INFO[0051] [etcd] Successfully started [etcd] container on host [192.168.1.188] 
INFO[0051] [etcd] Running rolling snapshot container [etcd-snapshot-once] on host [192.168.1.188]
INFO[0051] Image [rancher/rke-tools:v0.1.65] exists on host [192.168.1.188]
INFO[0051] Starting container [etcd-rolling-snapshots] on host [192.168.1.188], try #1 
INFO[0051] [etcd] Successfully started [etcd-rolling-snapshots] container on host [192.168.1.188] 
INFO[0056] Image [rancher/rke-tools:v0.1.65] exists on host [192.168.1.188] 
INFO[0057] Starting container [rke-bundle-cert] on host [192.168.1.188], try #1 
INFO[0057] [certificates] Successfully started [rke-bundle-cert] container on host [192.168.1.188] 
INFO[0057] Waiting for [rke-bundle-cert] container to exit on host [192.168.1.188]
INFO[0057] Container [rke-bundle-cert] is still running on host [192.168.1.188]: stderr: [], stdout: []
INFO[0058] Waiting for [rke-bundle-cert] container to exit on host [192.168.1.188] 
INFO[0058] [certificates] successfully saved certificate bundle [/opt/rke/etcd-snapshots//pki.bundle.tar.gz] on host [192.168.1.188]
INFO[0058] Removing container [rke-bundle-cert] on host [192.168.1.188], try #1
INFO[0058] Image [rancher/rke-tools:v0.1.65] exists on host [192.168.1.188] 
INFO[0058] Starting container [rke-log-linker] on host [192.168.1.188], try #1 
INFO[0059] [etcd] Successfully started [rke-log-linker] container on host [192.168.1.188] 
INFO[0059] Removing container [rke-log-linker] on host [192.168.1.188], try #1
INFO[0059] [remove/rke-log-linker] Successfully removed container on host [192.168.1.188] 
INFO[0059] [etcd] Successfully started etcd plane.. Checking etcd cluster health
INFO[0059] [controlplane] Building up Controller Plane.. 
INFO[0059] Checking if container [service-sidekick] is running on host [192.168.1.188], try #1
INFO[0059] Image [rancher/rke-tools:v0.1.65] exists on host [192.168.1.188]
INFO[0059] Image [rancher/hyperkube:v1.18.9-rancher1] exists on host [192.168.1.188] 
INFO[0059] Starting container [kube-apiserver] on host [192.168.1.188], try #1 
INFO[0059] [controlplane] Successfully started [kube-apiserver] container on host [192.168.1.188] 
INFO[0059] [healthcheck] Start Healthcheck on service [kube-apiserver] on host [192.168.1.188]
INFO[0067] [healthcheck] service [kube-apiserver] on host [192.168.1.188] is healthy 
INFO[0067] Image [rancher/rke-tools:v0.1.65] exists on host [192.168.1.188] 
INFO[0068] Starting container [rke-log-linker] on host [192.168.1.188], try #1 
INFO[0068] [controlplane] Successfully started [rke-log-linker] container on host [192.168.1.188] 
INFO[0068] Removing container [rke-log-linker] on host [192.168.1.188], try #1
INFO[0068] [remove/rke-log-linker] Successfully removed container on host [192.168.1.188] 
INFO[0068] Image [rancher/hyperkube:v1.18.9-rancher1] exists on host [192.168.1.188]
INFO[0068] Starting container [kube-controller-manager] on host [192.168.1.188], try #1 
INFO[0068] [controlplane] Successfully started [kube-controller-manager] container on host [192.168.1.188] 
INFO[0068] [healthcheck] Start Healthcheck on service [kube-controller-manager] on host [192.168.1.188]
INFO[0074] [healthcheck] service [kube-controller-manager] on host [192.168.1.188] is healthy 
INFO[0074] Image [rancher/rke-tools:v0.1.65] exists on host [192.168.1.188] 
INFO[0074] Starting container [rke-log-linker] on host [192.168.1.188], try #1 
INFO[0074] [controlplane] Successfully started [rke-log-linker] container on host [192.168.1.188] 
INFO[0074] Removing container [rke-log-linker] on host [192.168.1.188], try #1
INFO[0075] [remove/rke-log-linker] Successfully removed container on host [192.168.1.188] 
INFO[0075] Image [rancher/hyperkube:v1.18.9-rancher1] exists on host [192.168.1.188]
INFO[0075] Starting container [kube-scheduler] on host [192.168.1.188], try #1
INFO[0075] [controlplane] Successfully started [kube-scheduler] container on host [192.168.1.188] 
INFO[0075] [healthcheck] Start Healthcheck on service [kube-scheduler] on host [192.168.1.188]
INFO[0080] [healthcheck] service [kube-scheduler] on host [192.168.1.188] is healthy 
INFO[0080] Image [rancher/rke-tools:v0.1.65] exists on host [192.168.1.188]
INFO[0080] Starting container [rke-log-linker] on host [192.168.1.188], try #1 
INFO[0081] [controlplane] Successfully started [rke-log-linker] container on host [192.168.1.188] 
INFO[0081] Removing container [rke-log-linker] on host [192.168.1.188], try #1
INFO[0081] [remove/rke-log-linker] Successfully removed container on host [192.168.1.188] 
INFO[0081] [controlplane] Successfully started Controller Plane..
INFO[0081] [authz] Creating rke-job-deployer ServiceAccount
INFO[0081] [authz] rke-job-deployer ServiceAccount created successfully 
INFO[0081] [authz] Creating system:node ClusterRoleBinding
INFO[0081] [authz] system:node ClusterRoleBinding created successfully
INFO[0081] [authz] Creating kube-apiserver proxy ClusterRole and ClusterRoleBinding
INFO[0081] [authz] kube-apiserver proxy ClusterRole and ClusterRoleBinding created successfully
INFO[0081] Successfully Deployed state file at [./cluster.rkestate]
INFO[0081] [state] Saving full cluster state to Kubernetes
INFO[0081] [state] Successfully Saved full cluster state to Kubernetes ConfigMap: full-cluster-state 
INFO[0081] [worker] Building up Worker Plane..
INFO[0081] Checking if container [service-sidekick] is running on host [192.168.1.188], try #1
INFO[0081] [sidekick] Sidekick container already created on host [192.168.1.188]
INFO[0081] Image [rancher/hyperkube:v1.18.9-rancher1] exists on host [192.168.1.188] 
INFO[0081] Starting container [kubelet] on host [192.168.1.188], try #1 
INFO[0081] [worker] Successfully started [kubelet] container on host [192.168.1.188] 
INFO[0081] [healthcheck] Start Healthcheck on service [kubelet] on host [192.168.1.188]
INFO[0092] [healthcheck] service [kubelet] on host [192.168.1.188] is healthy 
INFO[0092] Image [rancher/rke-tools:v0.1.65] exists on host [192.168.1.188]
INFO[0092] Starting container [rke-log-linker] on host [192.168.1.188], try #1 
INFO[0093] [worker] Successfully started [rke-log-linker] container on host [192.168.1.188] 
INFO[0093] Removing container [rke-log-linker] on host [192.168.1.188], try #1
INFO[0093] [remove/rke-log-linker] Successfully removed container on host [192.168.1.188] 
INFO[0093] Image [rancher/hyperkube:v1.18.9-rancher1] exists on host [192.168.1.188]
INFO[0093] Starting container [kube-proxy] on host [192.168.1.188], try #1
INFO[0093] [worker] Successfully started [kube-proxy] container on host [192.168.1.188] 
INFO[0093] [healthcheck] Start Healthcheck on service [kube-proxy] on host [192.168.1.188]
INFO[0098] [healthcheck] service [kube-proxy] on host [192.168.1.188] is healthy 
INFO[0098] Image [rancher/rke-tools:v0.1.65] exists on host [192.168.1.188]
INFO[0099] Starting container [rke-log-linker] on host [192.168.1.188], try #1 
INFO[0099] [worker] Successfully started [rke-log-linker] container on host [192.168.1.188] 
INFO[0099] Removing container [rke-log-linker] on host [192.168.1.188], try #1
INFO[0099] [remove/rke-log-linker] Successfully removed container on host [192.168.1.188] 
INFO[0099] [worker] Successfully started Worker Plane..
INFO[0099] Image [rancher/rke-tools:v0.1.65] exists on host [192.168.1.188]
INFO[0099] Starting container [rke-log-cleaner] on host [192.168.1.188], try #1 
INFO[0099] [cleanup] Successfully started [rke-log-cleaner] container on host [192.168.1.188] 
INFO[0099] Removing container [rke-log-cleaner] on host [192.168.1.188], try #1
INFO[0100] [remove/rke-log-cleaner] Successfully removed container on host [192.168.1.188] 
INFO[0100] [sync] Syncing nodes Labels and Taints
INFO[0100] [sync] Successfully synced nodes Labels and Taints
INFO[0100] [network] Setting up network plugin: canal   
INFO[0100] [addons] Saving ConfigMap for addon rke-network-plugin to Kubernetes
INFO[0100] [addons] Successfully saved ConfigMap for addon rke-network-plugin to Kubernetes
INFO[0100] [addons] Executing deploy job rke-network-plugin
INFO[0115] [addons] Setting up coredns
INFO[0115] [addons] Saving ConfigMap for addon rke-coredns-addon to Kubernetes
INFO[0115] [addons] Successfully saved ConfigMap for addon rke-coredns-addon to Kubernetes
INFO[0115] [addons] Executing deploy job rke-coredns-addon
INFO[0120] [addons] CoreDNS deployed successfully       
INFO[0120] [dns] DNS provider coredns deployed successfully
INFO[0120] [addons] Setting up Metrics Server
INFO[0120] [addons] Saving ConfigMap for addon rke-metrics-addon to Kubernetes 
INFO[0120] [addons] Successfully saved ConfigMap for addon rke-metrics-addon to Kubernetes
INFO[0120] [addons] Executing deploy job rke-metrics-addon
INFO[0130] [addons] Metrics Server deployed successfully 
INFO[0130] [ingress] Setting up nginx ingress controller
INFO[0130] [addons] Saving ConfigMap for addon rke-ingress-controller to Kubernetes
INFO[0130] [addons] Successfully saved ConfigMap for addon rke-ingress-controller to Kubernetes
INFO[0130] [addons] Executing deploy job rke-ingress-controller
INFO[0140] [ingress] ingress controller nginx deployed successfully 
INFO[0140] [addons] Setting up user addons
INFO[0140] [addons] no user addons defined
INFO[0140] Finished building Kubernetes cluster successfully

Connecting to Kubernetes cluster

  1. Download latest kubectl
curl -LO "https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl"
  2. Assign executable permissions
chmod +x ./kubectl
  3. Move file to default executable location
sudo mv ./kubectl /usr/local/bin/kubectl
  4. Check kubectl version
kubectl version --client
  5. Copy rancher exported kube cluster YAML file to $HOME/.kube/config
mkdir -p $HOME/.kube
cp kube_config_cluster.yml $HOME/.kube/config
  6. Connect to Kubernetes cluster and get pods
kubectl get pods -A

HELM Installation

curl https://baltocdn.com/helm/signing.asc | sudo apt-key add -
sudo apt-get install apt-transport-https --yes
echo "deb https://baltocdn.com/helm/stable/debian/ all main" | sudo tee /etc/apt/sources.list.d/helm-stable-debian.list
sudo apt-get update
sudo apt-get install helm

Setup Rancher in Kubernetes cluster

helm repo add rancher-latest https://releases.rancher.com/server-charts/latest
kubectl create namespace cattle-system
kubectl apply --validate=false -f https://github.com/jetstack/cert-manager/releases/download/v0.15.0/cert-manager.crds.yaml
kubectl create namespace cert-manager
helm repo add jetstack https://charts.jetstack.io
helm repo update
helm install \
  cert-manager jetstack/cert-manager \
  --namespace cert-manager \
  --version v0.15.0

Set cert-manager

Define the DNS hostname for the Rancher certificate request. You can replace rancher.my.org with your own DNS alias.

helm install rancher rancher-latest/rancher \
  --namespace cattle-system \
  --set hostname=rancher.my.org
kubectl -n cattle-system rollout status deploy/rancher

NOTE: If you are working in a lab environment and don't have DNS, make sure to add rancher.my.org to the hosts file of your system.
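A quick way to add that host entry on a Linux client, assuming 192.168.1.188 is the node serving the ingress:

echo "192.168.1.188 rancher.my.org" | sudo tee -a /etc/hosts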

The Rancher UI setup instructions were followed from the official Rancher documentation.

How to generate self-signed certificate in Linux
https://blog.tekspace.io/how-to-generate-self-signed-certificate-in-linux/
In this tutorial, I will be using CentOS 7 to generate self-signed certificates. You can use any Linux operating system as long as OpenSSL is installed. To install OpenSSL, follow the guide below:

Openssl installation

CentOS, Redhat, Fedora:

sudo yum install openssl

Ubuntu, Debian

sudo apt install openssl

Generating certificate with password

Command:

openssl req -newkey rsa:4096 -x509 -sha256 -days 3650 -out example.crt -keyout example.key

Interactive view:

Generating a 4096 bit RSA private key
...............++
....................................................................................................++
writing new private key to 'example.key'
Enter PEM pass phrase:
Verifying - Enter PEM pass phrase:
-----
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [XX]:US
State or Province Name (full name) []:Texas
Locality Name (eg, city) [Default City]:Houston
Organization Name (eg, company) [Default Company Ltd]:Example
Organizational Unit Name (eg, section) []:IT
Common Name (eg, your name or your server's hostname) []:example.com
Email Address []:JohnSmith@example.com

Verify output

$ ls -l example.*
-rw-rw-r-- 1 test test 2110 Sep 30 20:14 example.crt
-rw-rw-r-- 1 test test 3406 Sep 30 20:14 example.key
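You can also inspect the generated certificate's subject and validity period. A quick check:

openssl x509 -in example.crt -noout -subject -dates
openssl x509 -in example.crt -noout -text    # full certificate details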

Generating certificate without password

Command:

openssl req -newkey rsa:4096 -x509 -sha256 -days 3650 -out example1.crt -keyout example1.key -nodes

Interactive view:

Generating a 4096 bit RSA private key
......................................................................................................++
................................................................................................................++
writing new private key to 'example1.key'
-----
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [XX]:US
State or Province Name (full name) []:Texas
Locality Name (eg, city) [Default City]:Houston
Organization Name (eg, company) [Default Company Ltd]:Example
Organizational Unit Name (eg, section) []:IT
Common Name (eg, your name or your server's hostname) []:example1.com
Email Address []:JohnSmith@example.com

Verify output

$ ls -l example1.*
-rw-rw-r-- 1 test test 2110 Sep 30 20:40 example1.crt
-rw-rw-r-- 1 test test 3406 Sep 30 20:40 example1.key

Managing MySQL database using PHPMyAdmin in Kubernetes
https://blog.tekspace.io/managing-mysql-database-using-php/

Make sure you are familiar with connecting to a Kubernetes cluster, have the Nginx ingress controller configured with a certificate manager, and have a MySQL database pod deployed. Only then can you proceed. Follow this guide if you do not have MySQL deployed. Follow this guide to set up Nginx ingress and cert manager.

PhpMyAdmin is a popular open source tool for managing MySQL database servers. Learn how to create a deployment and expose it as a service to access PhpMyAdmin from the internet using the Nginx ingress controller.

  1. Create a deployment file called phpmyadmin-deployment.yaml and paste the following values:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: phpmyadmin-deployment
  labels:
    app: phpmyadmin
spec:
  replicas: 1
  selector:
    matchLabels:
      app: phpmyadmin
  template:
    metadata:
      labels:
        app: phpmyadmin
    spec:
      containers:
        - name: phpmyadmin
          image: phpmyadmin/phpmyadmin
          ports:
            - containerPort: 80
          env:
            - name: PMA_HOST
              value: mysql-service
            - name: PMA_PORT
              value: "3306"
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysqldb-secrets
                  key: ROOT_PASSWORD

NOTE: The ROOT_PASSWORD value is consumed from a Kubernetes secret, so the secret name and key referenced above must match the secret you created for MySQL (the MySQL guide below creates mysqldb-secrets with the key ROOT_PASSWORD). If you want to learn more about Kubernetes secrets, follow this guide.

  2. Execute the below command to create a new deployment:
kubectl apply -f phpmyadmin-deployment.yaml

Output:

deployment.apps/phpmyadmin-deployment created
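
Optionally, confirm the pod is up before moving on. This is just a sanity check using the deployment name and the app=phpmyadmin label from the manifest above:

# Wait for the rollout to finish, then list the PhpMyAdmin pod(s)
kubectl rollout status deployment/phpmyadmin-deployment
kubectl get pods -l app=phpmyadmin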

Exposing PhpMyAdmin via Services

  1. Create a new file called phpmyadmin-service.yaml and paste the following values:
apiVersion: v1
kind: Service
metadata:
  name: phpmyadmin-service
spec:
  type: NodePort
  selector:
    app: phpmyadmin
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  2. Execute the below command to create the service:
kubectl apply -f phpmyadmin-service.yaml

Output:

service/phpmyadmin-service created
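
If you want to see the NodePort that Kubernetes assigned, an optional check is:

kubectl get svc phpmyadmin-service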

Once you are done with the above configurations, it's time to expose the PhpMyAdmin service to the internet.

I use DigitalOcean-managed Kubernetes. I manage my own DNS, and DigitalOcean automatically creates a load balancer for my Nginx ingress controller. Once again, if you have not set this up yet, following this guide will be very helpful.

Nginx Ingress configuration

  1. Create a new YAML file called phpmyadmin-ingress.yaml and paste the following values:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: phpmyadmin-ingress
  annotations:
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
spec:
  tls:
  - hosts:
    - phpmyadmin.example.com
    secretName: phpmyadmin-tls
  rules:
  - host: phpmyadmin.example.com
    http:
      paths:
      - backend:
          serviceName: phpmyadmin-service
          servicePort: 80
  2. Apply the changes:
kubectl apply -f phpmyadmin-ingress.yaml
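
After applying the ingress, it is worth confirming that the rule and the TLS certificate were picked up. The commands below are an optional check; they assume the ingress name used above and that cert-manager is installed (cert-manager creates a Certificate resource for the TLS secret):

kubectl get ingress
kubectl describe ingress phpmyadmin-ingress
kubectl get certificate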

The post Managing MySQL database using PHPMyAdmin in Kubernetes appeared first on TEKSpace Blog.

How to Deploy MySQL database on Digital Ocean Managed Kubernetes Cluster https://blog.tekspace.io/how-to-deploy-mysql-database-on-digital-ocean-managed-kubernetes-cluster/ https://blog.tekspace.io/how-to-deploy-mysql-database-on-digital-ocean-managed-kubernetes-cluster/#respond Mon, 14 Sep 2020 18:40:34 +0000 https://blog.tekspace.io/index.php/2020/09/14/how-to-deploy-mysql-database-on-digital-ocean-managed-kubernetes-cluster/ NOTE: This tutorial assumes you know how to connect to a Kubernetes cluster. Create secrets to securely store MySQL credentials Output: Persistant volume and MySQL deployment Execute the below command to create persistent volume: Output: Execute the below command to deploy MySQL pod: Output: Exposing MySQL as a Service Output: Output:


NOTE: This tutorial assumes you know how to connect to a Kubernetes cluster.

Create secrets to securely store MySQL credentials

  1. Guide on how to create base 64 encoded values:

    Windows 10 guide | Linux guide

  2. Create a new file called mysql-secret.yaml and paste the value below.
    NOTE: You must first capture the value in base64 by following the guide in step 1 (see the example command after these steps).
---
apiVersion: v1
kind: Secret
metadata:
  name: mysqldb-secrets
type: Opaque
data:
  ROOT_PASSWORD: c3VwZXItc2VjcmV0LXBhc3N3b3JkLWZvci1zcWw= 
  3. Execute the below command to create the secret:
kubectl apply -f mysql-secret.yaml

Output:

secret/mysqldb-secrets created
  4. To see if the secret was created, execute the below command:
kubectl get secret
NAME                  TYPE                                  DATA   AGE
default-token-jqq69   kubernetes.io/service-account-token   3      6h20m
echo-tls              kubernetes.io/tls                     2      5h19m
mysqldb-secrets       Opaque                                1      42s
  5. To see the description of the secret, execute the below command:
kubectl describe secret mysqldb-secrets
Name:         mysqldb-secrets
Namespace:    default
Labels:       <none>
Annotations:  <none>

Type:  Opaque

Data
====
ROOT_PASSWORD:  29 bytes
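
For reference, here is one way to produce and verify the base64 value from a Linux shell. The password shown is only the example value used in this tutorial, so substitute your own:

# Encode the password (use -n so no trailing newline is included)
echo -n 'super-secret-password-for-sql' | base64
# Decode the value from the secret to double-check it
echo 'c3VwZXItc2VjcmV0LXBhc3N3b3JkLWZvci1zcWw=' | base64 --decode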

Persistent volume and MySQL deployment

  1. Create a persistent volume YAML file called mysql-pvc.yaml and paste the following values:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pvc
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/mysql-data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pvc-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: do-block-storage
  2. Create a new deployment YAML file called mysql-deployment.yaml and paste the following values:
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  ports:
  - port: 3306
  selector:
    app: mysql
  clusterIP: None
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - image: mysql:5.6
        name: mysql
        env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysqldb-secrets
              key: ROOT_PASSWORD
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-pvc-claim

Execute the below command to create persistent volume:

kubectl apply -f mysql-pvc.yaml

Output:

persistentvolume/mysql-pvc created
persistentvolumeclaim/mysql-pvc-claim created
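
Note that the PersistentVolumeClaim above requests storageClassName: do-block-storage while the PersistentVolume is defined with storageClassName: manual, so on a DigitalOcean managed cluster the claim is satisfied by a dynamically provisioned block-storage volume and the hostPath volume remains unbound. You can check what actually got bound with:

kubectl get pv
kubectl get pvc mysql-pvc-claim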

Execute the below command to deploy MySQL pod:

kubectl apply -f mysql-deployment.yaml

Output:

service/mysql created
deployment.apps/mysql created
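
Before exposing MySQL, you may want to wait for the pod to reach the Running state. This optional check uses the app=mysql label from the deployment above:

kubectl get pods -l app=mysql -w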

Exposing MySQL as a Service

  1. Create a file called mysql-service.yaml and paste the following values:
apiVersion: v1
kind: Service
metadata:
  name: mysql-service
spec:
  selector:
    app: mysql
  ports:
  - protocol: TCP
    port: 3306
    targetPort: 3306
  2. Execute the below command to create a service for MySQL:
kubectl apply -f mysql-service.yaml

Output:

service/mysql-service created
  3. To confirm that the service was created successfully, execute the below command:
kubectl get svc

Output:

NAME            TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
echo1           ClusterIP   10.245.179.199   <none>        80/TCP     6h4m
echo2           ClusterIP   10.245.58.44     <none>        80/TCP     6h2m
kubernetes      ClusterIP   10.245.0.1       <none>        443/TCP    6h33m
mysql           ClusterIP   None             <none>        3306/TCP   4m57s
mysql-service   ClusterIP   10.245.159.76    <none>        3306/TCP   36s
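
As an optional smoke test, you can start a throwaway MySQL client pod and connect to the service by name. This assumes the root password stored in the mysqldb-secrets secret earlier; enter its decoded value when prompted:

kubectl run -it --rm --image=mysql:5.6 --restart=Never mysql-client -- mysql -h mysql-service -u root -p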

The post How to Deploy MySQL database on Digital Ocean Managed Kubernetes Cluster appeared first on TEKSpace Blog.

Setup Windows Node with Kubernetes 1.14 https://blog.tekspace.io/setup-windows-node-with-kubernetes-1-14/ https://blog.tekspace.io/setup-windows-node-with-kubernetes-1-14/#respond Thu, 04 Apr 2019 21:40:30 +0000 https://blog.tekspace.io/index.php/2019/04/04/setup-windows-node-with-kubernetes-1-14/ Kubernetes 1.14 now provides out of the box support for Windows worker nodes to run windows containers with in a Kubernetes cluster. This feature was in preview for a long time, and now it is production ready. This is a wonderful opportunity for most cloud giant companies to start applying a new version of Kubernetes

Kubernetes 1.14 now provides out-of-the-box support for Windows worker nodes, so Windows containers can run within a Kubernetes cluster. This feature was in preview for a long time and is now production ready. This is a wonderful opportunity for the major cloud providers to bring Kubernetes 1.14 into their offerings, so their customers can migrate applications running on the Windows virtualization platform to Windows containers more quickly.

NOTE: Azure, Google, IBM, and AWS now offer Kubernetes services with free cluster management, so you no longer have to worry about running the control plane yourself. Windows containers are yet to be offered on these services and should be available soon. Check their official websites to find out when they will offer Windows containers.

In this tutorial, I will go over how to set up a Windows node and join it to an existing Kubernetes 1.14 cluster. If you do not have a Kubernetes cluster and would like to learn how to set one up, check out the prerequisites below.

Prerequisites

NOTE: Before we move forward, ensure you have successfully set up a Kubernetes 1.14 cluster on your Linux machine. If not, check out the prerequisite guides.

Enable mixed OS scheduling

The below guide was referenced from Microsoft documentation.

Log in to your master node and execute the below commands.

cd ~ && mkdir -p kube/yaml && cd kube/yaml

Confirm kube-proxy DaemonSet is set to Rolling Update:

kubectl get ds/kube-proxy -o go-template='{{.spec.updateStrategy.type}}{{"\n"}}' --namespace=kube-system

Download node-selector-patch from GitHub:

wget https://raw.githubusercontent.com/Microsoft/SDN/master/Kubernetes/flannel/l2bridge/manifests/node-selector-patch.yml

Patch your kube-proxy

kubectl patch ds/kube-proxy --patch "$(cat node-selector-patch.yml)" -n=kube-system

Check the status of kube-proxy

kubectl get ds -n kube-system
[rahil@k8s-master-node yaml]$ kubectl get ds -n kube-system
NAME                      DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR                                               AGE
kube-flannel-ds-amd64     2         2         2       0            2           beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux   106m
kube-flannel-ds-arm       0         0         0       0            0           beta.kubernetes.io/arch=arm                                 106m
kube-flannel-ds-arm64     0         0         0       0            0           beta.kubernetes.io/arch=arm64                               106m
kube-flannel-ds-ppc64le   0         0         0       0            0           beta.kubernetes.io/arch=ppc64le                             106m
kube-flannel-ds-s390x     0         0         0       0            0           beta.kubernetes.io/arch=s390x                               106m
kube-proxy                2         2         2       2            2           beta.kubernetes.io/os=linux                                 21h

Your kube-proxy DaemonSet should now show the node selector beta.kubernetes.io/os=linux applied.
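
If you want to verify the patch explicitly, you can print the node selector straight from the DaemonSet. This is just an optional check:

kubectl get ds kube-proxy -n kube-system -o jsonpath='{.spec.template.spec.nodeSelector}{"\n"}'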

Setting up flannel networking

The guide below was referenced from Microsoft documentation.

Since I already have kube-flannel set up from the previous tutorial, I will go ahead and edit it by following the guide below and update the values accordingly.

On your master node, edit kube-flannel and apply changes that are needed to configure windows worker node.

kubectl edit cm -n kube-system kube-flannel-cfg

If you already know how to use the vi editor, you should be able to navigate within the edit mode. Go ahead and find the below block of code and update it as shown below:

cni-conf.json: |
    {
      "name": "vxlan0",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }

And your net-conf.json should look like the one shown below:

net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan",
        "VNI" : 4096,
        "Port": 4789
      }
    }

Once you have updated your kube-flannel configmap, go ahead and save it to apply those changes.

Target your kube-flannel DaemonSet at Linux nodes only by executing the below command:

kubectl patch ds/kube-flannel-ds-amd64 --patch "$(cat node-selector-patch.yml)" -n=kube-system

Install Docker on your Windows node

Install-Module -Name DockerMsftProvider -Repository PSGallery -Force
Install-Package -Name Docker -ProviderName DockerMsftProvider
Restart-Computer -Force

Download and stage Kubernetes packages

Step 1: Open PowerShell as an administrator and execute the below command to create a directory called k.

mkdir c:\k; cd c:\k

Step 2: Download Kubernetes 1.14.0 from GitHub; specifically, download kubernetes-node-windows-amd64.tar.gz.

Step 3: Extract the package to c:\k path on your Windows node.

NOTE: You may have to use a third-party tool to extract tar and gz files. I recommend using portable 7-Zip from here, so that you don't have to install anything.

Find kubeadm, kubectl, kubelet, and kube-proxy and copy them to c:\k\ on the Windows node.

Copy the Kubernetes config file from the master node

On your master node, take the ~/.kube/config file from your user home directory and copy it to c:\k\config on the Windows node.

You can use xcopy or WinSCP to copy the config file from the master node to the Windows node.

Add paths to environment variables

Open PowerShell as an administrator and execute the following commands:

$env:Path += ";C:\k"; $env:KUBECONFIG="C:\k\config"; [Environment]::SetEnvironmentVariable("Path", $env:Path + ";C:\k", [EnvironmentVariableTarget]::Machine); [Environment]::SetEnvironmentVariable("KUBECONFIG", "C:\k\config", [EnvironmentVariableTarget]::User)

Reboot your system before moving forward.

Joining Windows Server node to Master node

To join the Flannel network, execute the below command to download the script.

Step 1: Open PowerShell as Administrator:

[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12
wget https://raw.githubusercontent.com/Microsoft/SDN/master/Kubernetes/flannel/start.ps1 -OutFile c:\k\start.ps1

Step 2: Navigate to c:\k\

cd c:\k\

Step 3: Execute the below command to join Flannel cluster

.\start.ps1 -ManagementIP 192.168.0.123 -NetworkMode overlay -InterfaceName Ethernet -Verbose

Replace ManagementIP with your Windows node IP address. You can execute ipconfig to get these details. To understand the above command, please refer to this guide from Microsoft.

PS C:\k> .\kubectl.exe get nodes
NAME                STATUS     ROLES    AGE   VERSION
k8s-master-node     Ready      master   35h   v1.14.0
k8s-worker-node-1   Ready      <none>   35h   v1.14.0
win-uq3cdgb5r7g     Ready      <none>   11m   v1.14.0

Testing windows containers

If everything went well and you see that your Windows node joined the cluster successfully, you can deploy a Windows container to test that everything is working as expected. Execute the below commands to deploy a Windows web server container.

Download YAML file:

wget https://raw.githubusercontent.com/Microsoft/SDN/master/Kubernetes/flannel/l2bridge/manifests/simpleweb.yml -OutFile win-webserver.yaml

Create new deployment:

kubectl apply -f .\win-webserver.yaml

Check the status of container:

kubectl get pods -o wide -w

Output

PS C:\k> .\kubectl.exe get pods -o wide -w
NAME                            READY   STATUS              RESTARTS   AGE   IP       NODE              NOMINATED NODE   READINESS GATES
win-webserver-cfcdfb59b-fkqxg   0/1     ContainerCreating   0          40s   <none>   win-uq3cdgb5r7g   <none>           <none>
win-webserver-cfcdfb59b-jbm7s   0/1     ContainerCreating   0          40s   <none>   win-uq3cdgb5r7g   <none>           <none>
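
ContainerCreating can take a while on Windows because the pause and Server Core images are large. If a pod appears stuck, the pod events usually explain why; replace the pod name below with one from your own listing:

kubectl describe pod win-webserver-cfcdfb59b-fkqxg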

Troubleshooting

If you are receiving something like the error below, it means your kubeletwin/pause image wasn't built correctly. After spending several hours, I dug through everything the start.ps1 script does and found that when the Docker image was built, it did not use the correct version of the container base image.

Issue

Error response from daemon: CreateComputeSystem 229d5b8cf2ca94c698153f3ffed826f4ff69bff98d12137529333a1f947423e2: The container operating system does not match the host operating system.
(extra info: {"SystemType":"Container","Name":"229d5b8cf2ca94c698153f3ffed826f4ff69bff98d12137529333a1f947423e2","Owner":"docker","VolumePath":"\\\\?\\Volume{d03ade10-14ef-4486-aa63-406f2a7e5048}","IgnoreFlushesDuringBoot":true,"LayerFolderPath":"C:\\ProgramData\\docker\\windowsfilter\\229d5b8cf2ca94c698153f3ffed826f4ff69bff98d12137529333a1f947423e2","Layers":[{"ID":"7cf9a822-5cb5-5380-98c3-99885c3639f8","Path":"C:\\ProgramData\\docker\\windowsfilter\\83e740543f7683c25c7880388dbe2885f32250e927ab0f2119efae9f68da5178"},{"ID":"600d6d6b-8810-5bf3-ad01-06d0ba1f97a4","Path":"C:\\ProgramData\\docker\\windowsfilter\\529e04c75d56f948819cd62e4886d865d8faac7470be295e7116ddf47ca15251"},{"ID":"f185c0c0-eccf-5ff9-b9fb-2939562b75c3","Path":"C:\\ProgramData\\docker\\windowsfilter\\7640b81c6fff930a838e97c6c793b4fa9360b6505718aa84573999aa41223e80"}],"HostName":"229d5b8cf2ca","HvPartition":false,"EndpointList":["CE799786-A781-41ED-8B1F-C91DFEDB75A9"],"AllowUnqualifiedDNSQuery":true}).

Solution

  1. Go to c:\k\ and open Dockerfile.
  2. Update first line to FROM mcr.microsoft.com/windows/nanoserver:1809 and save the file.
  3. Execute the below command to build an image as administrator from a PowerShell console.
cd c:\k\; docker build -t kubeletwin/pause .
  4. Open win-webserver.yaml and update the image tag to image: mcr.microsoft.com/windows/servercore:1809.
  5. Delete and re-apply your deployment by executing the below commands.
kubectl delete -f .\win-webserver.yaml
kubectl apply -f .\win-webserver.yaml

Now all your pods should show a Running status.

PS C:\k> kubectl get pods
NAME                            READY   STATUS    RESTARTS   AGE
win-webserver-cfcdfb59b-gk6g9   1/1     Running   0          6m44s
win-webserver-cfcdfb59b-q4zxz   1/1     Running   0          6m44s

The post Setup Windows Node with Kubernetes 1.14 appeared first on TEKSpace Blog.
