Setup Kubernetes Cluster using K3S, MetalLB, LetsEncrypt on Bare Metal

Setup K3S Cluster

By default, Rancher K3S ships with Traefik 1.7. In this tutorial, we will set up K3S without the Traefik ingress controller.

  1. Execute the below command on master node 1.
curl -sfL https://get.k3s.io | sh -s - server   --datastore-endpoint="mysql://user:pass@tcp(ip_address:3306)/databasename" --disable traefik --node-taint CriticalAddonsOnly=true:NoExecute --tls-san 192.168.1.2 --tls-san k3s.home.lab

Execute the same command on master node 2 to set up HA.
Validate cluster setup:

$ sudo kubectl get node
NAME           STATUS   ROLES    AGE    VERSION
k3s-master-1   Ready    master   3m9s   v1.18.9+k3s1

Make sure you have HAProxy set up in front of the master nodes:

##########################################################
#               Kubernetes API LB
##########################################################
frontend kubernetes-frontend
    bind 192.168.1.2:6443
    mode tcp
    option tcplog
    default_backend kubernetes-backend

backend kubernetes-backend
    mode tcp
    option tcp-check
    balance roundrobin
    server k3s-master-1 192.168.1.10:6443 check fall 3 rise 2
    server k3s-master-2 192.168.1.20:6443 check fall 3 rise 2
  2. Join worker nodes to the K3S cluster
    Get the node token from one of the master nodes by executing the below command:
sudo cat /var/lib/rancher/k3s/server/node-token
K105c8c5de8deac516ebgd454r45547481d70625ee3e5200acdbe8ea071191debd4::server:gd5de354807077fde4259fd9632ea045454

We will use the token from the above output to join the worker nodes:

curl -sfL https://get.k3s.io | K3S_URL=https://192.168.1.2:6443 K3S_TOKEN={{USE_TOKEN_FROM_ABOVE}} sh -
  3. Validate the K3S cluster state by running `sudo kubectl get node` again:
NAME                STATUS   ROLES    AGE     VERSION
k3s-master-1        Ready    master   15m     v1.18.9+k3s1
k3s-worker-node-1   Ready    <none>   3m44s   v1.18.9+k3s1
k3s-worker-node-2   Ready    <none>   2m52s   v1.18.9+k3s1
k3s-master-2        Ready    master   11m     v1.18.9+k3s1

MetalLB Setup

kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.4/manifests/namespace.yaml
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.4/manifests/metallb.yaml
kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"

Create a file called metallb-config.yaml and enter the following values:

apiVersion: v1
kind: ConfigMap
metadata:
    namespace: metallb-system
    name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250

Apply changes:

sudo kubectl apply -f metallb-config.yaml

Deploy sample application with service

kubectl create deploy nginx --image nginx
kubectl expose deploy nginx --port 80

Check status:

$ kubectl get svc,pods
NAME                                              TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)                      AGE
service/kubernetes                                ClusterIP      10.43.0.1       <none>          443/TCP                      44m
service/nginx                                     ClusterIP      10.43.14.116    <none>          80/TCP                       31s

NAME                                                 READY   STATUS    RESTARTS   AGE
pod/nginx-f89759699-25lpb                            1/1     Running   0          59s
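
If you would like to verify MetalLB on its own before setting up ingress, you can optionally expose the deployment through a LoadBalancer service (the service name nginx-lb below is just an example). MetalLB should assign it an external IP from the 192.168.1.240-192.168.1.250 pool defined earlier:

kubectl expose deploy nginx --port 80 --type LoadBalancer --name nginx-lb
kubectl get svc nginx-lb

Once you have confirmed the EXTERNAL-IP, the test service can be removed with kubectl delete svc nginx-lb.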

Nginx Ingress setup

In this tutorial, I will be using Helm to set up the Nginx ingress controller.

  1. Execute the following commands to set up Nginx ingress from a client machine with Helm and kubectl configured:
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install home ingress-nginx/ingress-nginx

Check Ingress controller status:

kubectl --namespace default get services -o wide -w home-ingress-nginx-controller
  2. Set up the Ingress by creating home-ingress.yaml and adding the below values. Replace example.io with your own domain name:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
  name: home-ingress
  namespace: default
spec:
  rules:
    - host: example.io
      http:
        paths:
          - backend:
              serviceName: nginx
              servicePort: 80
            path: /

Execute command to apply:

 kubectl apply -f home-ingress.yaml

Check the Ingress status with the `kubectl get ing` command:

$ kubectl get ing
NAME           CLASS    HOSTS           ADDRESS         PORTS   AGE
home-ingress   <none>   example.io      192.168.1.240   80      8m26s

LetsEncrypt setup

  1. Execute the below command to create the namespaces, pods, and other related configuration:
kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.0.3/cert-manager.yaml

  2. Once the above completes, validate the pod status:

$ kubectl get pods --namespace cert-manager
NAME                                       READY   STATUS    RESTARTS   AGE
cert-manager-cainjector-76c9d55b6f-cp2jf   1/1     Running   0          39s
cert-manager-79c5f9946-qkfzv               1/1     Running   0          38s
cert-manager-webhook-6d4c5c44bb-4mdgc      1/1     Running   0          38s
  3. Set up the staging issuer by applying the changes below. Update the email address:
vi staging_issuer.yaml

Paste in the below values and save the file:

apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
 name: letsencrypt-staging
spec:
 acme:
   # The ACME server URL
   server: https://acme-staging-v02.api.letsencrypt.org/directory
   # Email address used for ACME registration
   email: john@example.com
   # Name of a secret used to store the ACME account private key
   privateKeySecretRef:
     name: letsencrypt-staging
   # Enable the HTTP-01 challenge provider
   solvers:
   - http01:
       ingress:
         class:  nginx

Apply changes:

kubectl apply -f staging_issuer.yaml

We will apply the production issuer later in this tutorial. We should first test the SSL setup with staging certificates before switching to production certificates.

SSL setup with LetsEncrypt and Nginx Ingress

Before proceeding, please make sure your DNS is set up correctly with your cloud provider or in your home lab to allow traffic from the internet. LetsEncrypt uses HTTP-01 validation to issue certificates, and it needs to reach the DNS name for which the certificate request was initiated.
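
As a quick sanity check (assuming the dig utility is installed and example.io stands in for your own domain), you can confirm from an external network that the name resolves to the public IP that forwards ports 80 and 443 to your ingress controller's MetalLB address:

dig +short example.io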

Create new ingress file as shown below:

vi home-ingress-ssl.yaml

Copy and paste the following into the above file:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/issuer: letsencrypt-staging
  name: home-ingress
  namespace: default
spec:
  tls:
  - hosts:
    - example.io
    secretName: home-example-io-tls
  rules:
  - host: example.io
    http:
      paths:
      - backend:
          serviceName: nginx
          servicePort: 80
        path: /

Apply changes:

kubectl apply -f home-ingress-ssl.yaml

Validate certificate creation:

kubectl describe certificate
Spec:
  Dns Names:
    example.io
  Issuer Ref:
    Group:      cert-manager.io
    Kind:       Issuer
    Name:       letsencrypt-staging
  Secret Name:  home-example-io-tls
Status:
  Conditions:
    Last Transition Time:        2020-10-26T20:19:15Z
    Message:                     Issuing certificate as Secret does not exist
    Reason:                      DoesNotExist
    Status:                      False
    Type:                        Ready
    Last Transition Time:        2020-10-26T20:19:18Z
    Message:                     Issuing certificate as Secret does not exist
    Reason:                      DoesNotExist
    Status:                      True
    Type:                        Issuing
  Next Private Key Secret Name:  home-example-io-tls-76dqg
Events:
  Type    Reason     Age   From          Message
  ----    ------     ----  ----          -------
  Normal  Issuing    10s   cert-manager  Issuing certificate as Secret does not exist
  Normal  Generated  8s    cert-manager  Stored new private key in temporary Secret resource "home-example-io-tls-76dqg"
  Normal  Requested  4s    cert-manager  Created new CertificateRequest resource "home-example-io-tls-h98zf"

Now you can browse your DNS URL and validate the certificate. If the site presents a certificate issued by the Let's Encrypt staging CA (browsers will not trust it yet), your LetsEncrypt certificate management has been set up successfully.
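
You can also inspect the served certificate from the command line with openssl (replace example.io with your own host). With the staging issuer, the issuer field should reference the Let's Encrypt staging CA:

echo | openssl s_client -connect example.io:443 -servername example.io 2>/dev/null | openssl x509 -noout -issuer -dates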

Set up the production issuer to get a valid certificate

Create the production issuer:

vi production-issuer.yaml

Copy and paste the below values into the above file. Update the email address:

apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
 name: letsencrypt-prod
spec:
 acme:
   # The ACME server URL
   server: https://acme-v02.api.letsencrypt.org/directory
   # Email address used for ACME registration
   email: user@example.com
   # Name of a secret used to store the ACME account private key
   privateKeySecretRef:
     name: letsencrypt-prod
   # Enable the HTTP-01 challenge provider
   solvers:
   - http01:
       ingress:
         class:  nginx

Apply changes:

kubectl apply -f production-issuer.yaml

Update the home-ingress-ssl.yaml file you created earlier with the below values:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/issuer: letsencrypt-prod
  name: home-ingress
  namespace: default
spec:
  tls:
  - hosts:
    - example.io
    secretName: home-example-io-tls
  rules:
  - host: example.io
    http:
      paths:
      - backend:
          serviceName: nginx
          servicePort: 80
        path: /

Apply changes:

kubectl apply -f home-ingress-ssl.yaml

Validate changes:

NOTE: Give it some time, as it may take 2-5 minutes for the certificate request to complete.

kubectl describe certificate

If a valid certificate was issued, your output should look something like the below.

Spec:
  Dns Names:
    example.io
  Issuer Ref:
    Group:      cert-manager.io
    Kind:       Issuer
    Name:       letsencrypt-prod
  Secret Name:  home-example-io-tls
Status:
  Conditions:
    Last Transition Time:  2020-10-26T20:43:35Z
    Message:               Certificate is up to date and has not expired
    Reason:                Ready
    Status:                True
    Type:                  Ready
  Not After:               2021-01-24T19:43:25Z
  Not Before:              2020-10-26T19:43:25Z
  Renewal Time:            2020-12-25T19:43:25Z
  Revision:                2
Events:
  Type    Reason     Age                From          Message
  ----    ------     ----               ----          -------
  Normal  Issuing    24m                cert-manager  Issuing certificate as Secret does not exist
  Normal  Generated  24m                cert-manager  Stored new private key in temporary Secret resource "home-example-io-tls-76dqg"
  Normal  Requested  24m                cert-manager  Created new CertificateRequest resource "home-example-io-tls-h98zf"
  Normal  Issuing    105s               cert-manager  Issuing certificate as Secret was previously issued by Issuer.cert-manager.io/letsencrypt-staging
  Normal  Reused     103s               cert-manager  Reusing private key stored in existing Secret resource "home-example-io-tls"
  Normal  Requested  100s               cert-manager  Created new CertificateRequest resource "home-example-io-tls-ccxgf"
  Normal  Issuing    30s (x2 over 23m)  cert-manager  The certificate has been successfully issued

Browse your application and check for a valid certificate. If the browser shows a trusted certificate issued by Let's Encrypt, you have successfully requested a valid certificate from the LetsEncrypt certificate authority.

Raspberry Pi 4 K3S cluster setup

In this tutorial, I will show you how to set up a lightweight Kubernetes cluster using Rancher K3S.

My current Raspberry Pi 4 configuration:

Hostname            RAM    CPU   Disk    IP Address
k3s-master-1        8 GB   4     64 GB   192.168.1.10
k3s-worker-node-1   8 GB   4     64 GB   192.168.1.11
k3s-worker-node-2   8 GB   4     64 GB   192.168.1.12

Prerequisite

  1. Ubuntu 20.04 installed on each Pi. Follow the guide here.
  2. Assign a static IP address to the Pi:
sudo vi /etc/netplan/50-cloud-init.yaml

Set the values as shown below:

  • Change dhcp4 from true to no.
  • Add the addresses, gateway4, and nameservers entries shown below under eth0, change the IP addresses to match your network setup, and save the file.
network:
    ethernets:
        eth0:
            dhcp4: no
            addresses:
              - 192.168.1.10/24
            gateway4: 192.168.1.1
            nameservers:
              addresses: [192.168.1.1, 8.8.8.8, 1.1.1.1]
            match:
                driver: bcmgenet smsc95xx lan78xx
            optional: true
            set-name: eth0
    version: 2

Apply changes by executing the below command to set static IP:

sudo netplan apply

NOTE: Once you apply the changes, your SSH session will be interrupted. Make sure to reconnect using SSH.
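
After reconnecting, you can confirm the static address was applied to eth0 with:

ip -4 addr show eth0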

  3. Open /boot/firmware/cmdline.txt in edit mode with sudo vi /boot/firmware/cmdline.txt, append the below values, and apply this same change on each master and worker node:
cgroup_enable=cpuset cgroup_enable=memory cgroup_memory=1

Mine looks like below.

net.ifnames=0 dwc_otg.lpm_enable=0 console=serial0,115200 console=tty1 root=LABEL=writable rootfstype=ext4 elevator=deadline rootwait fixrtc cgroup_enable=cpuset cgroup_enable=memory cgroup_memory=1
  4. Reboot:
sudo init 6

Getting started with HA setup for Master nodes

Before we proceed with the tutorial, please make sure you have a MySQL database configured. We will use MySQL to set up HA.
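
If you still need to create the datastore, a minimal sketch is shown below. It assumes a MySQL server is already reachable on a separate host (192.168.1.50 is used later in this tutorial), and the database name kdb and the username/password are placeholders to substitute with your own values:

# Run on the MySQL host to create the K3S datastore and user
sudo mysql -u root -p -e "CREATE DATABASE kdb;
  CREATE USER 'username'@'%' IDENTIFIED BY 'password';
  GRANT ALL PRIVILEGES ON kdb.* TO 'username'@'%';
  FLUSH PRIVILEGES;"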

On each master node, execute the below command to set up k3s cluster.

curl -sfL https://get.k3s.io | sh -s - server   --datastore-endpoint="mysql://username:password@tcp(192.168.1.50:3306)/kdb"

Once the installation is completed, execute `sudo systemctl status k3s` to see the status of the k3s service.

Loaded: loaded (/etc/systemd/system/k3s.service; enabled; vendor preset: enabled)
     Active: active (running) since Sun 2020-10-25 04:38:00 UTC; 32s ago
       Docs: https://k3s.io
    Process: 1696 ExecStartPre=/sbin/modprobe br_netfilter (code=exited, status=0/SUCCESS)
    Process: 1720 ExecStartPre=/sbin/modprobe overlay (code=exited, status=0/SUCCESS)
   Main PID: 1725 (k3s-server)
      Tasks: 13
     Memory: 425.1M
     CGroup: /system.slice/k3s.service
             ├─1725 /usr/local/bin/k3s server --datastore-endpoint=mysql://username:password@tcp(192.168.1.50:3306)/kdb
             └─1865 containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd -->

Joining worker nodes to k3s cluster

  1. First, we need to get a token from the master node. Go to your master node and enter the below command to get token:
sudo cat /var/lib/rancher/k3s/server/node-token
  2. Second, we are ready to join all the nodes to the cluster by entering the below command and replacing the XXX token with your own:
curl -sfL https://get.k3s.io | K3S_URL=https://192.168.1.10:6443 K3S_TOKEN=XXX sh -
  3. Check the status of your worker nodes from the master node by executing the below command:
sudo kubectl get nodes
ubuntu@k3s-master-1:~$ sudo kubectl get nodes
NAME                STATUS   ROLES    AGE   VERSION
k3s-worker-node-2   Ready    <none>   28m   v1.18.9+k3s1
k3s-master-1        Ready    master   9h    v1.18.9+k3s1
k3s-worker-node-1   Ready    <none>   32m   v1.18.9+k3s1
k3s-master-2        Ready    master   9h    v1.18.9+k3s1

Rancher Kubernetes Single Node Setup

Pre-requisites
  1. VM requirements
    • One Ubuntu 20.04 VM node where RKE Cluster will be running.
    • One Ubuntu 20.04 host node where the RKE CLI will be configured and used to set up the cluster.
  2. Disable swap and firewall
sudo ufw disable
sudo swapoff -a; sudo sed -i '/swap/d' /etc/fstab
  3. Update sysctl settings
cat <<EOF | sudo tee -a /etc/sysctl.d/kubernetes.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sudo sysctl --system
  4. Docker installed on all nodes
    • Login to Ubuntu VM with your sudo account.
    • Execute the following commands:
sudo apt-get update
sudo apt-get upgrade
sudo curl https://releases.rancher.com/install-docker/19.03.sh | sh
  5. Create a new user and add it to the Docker group
sudo adduser rkeuser
sudo passwd rkeuser >/dev/null 2>&1
sudo usermod -aG docker rkeuser
  6. Generate an SSH key and copy it to the node
ssh-keygen -t rsa -b 2048
ssh-copy-id rkeuser@192.168.1.188

Download rke package and set executable permissions

wget https://github.com/rancher/rke/releases/download/v1.1.0/rke_linux-amd64
sudo cp rke_linux-amd64 /usr/local/bin/rke
sudo chmod +x /usr/local/bin/rke

RKE Cluster setup

First, we must create the RKE cluster configuration file that will be used to deploy the cluster to the RKE node. Run the command below and continue through the interactive prompts to configure a single-node cluster.

rke config
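
For reference, the relevant part of the generated cluster.yml for this single-node setup might look like the sketch below; the address, user, and key path reflect values used earlier in this tutorial, so adjust them to your environment:

# Minimal single-node entry in cluster.yml: one host carrying all three roles
nodes:
  - address: 192.168.1.188
    user: rkeuser
    role: [controlplane, etcd, worker]
    ssh_key_path: ~/.ssh/id_rsa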

Run the below command to set up the RKE cluster:

rke up

Output:

INFO[0000] Running RKE version: v1.1.9
INFO[0000] Initiating Kubernetes cluster
INFO[0000] [dialer] Setup tunnel for host [192.168.1.188] 
INFO[0000] Checking if container [cluster-state-deployer] is running on host [192.168.1.188], try #1 
INFO[0000] Pulling image [rancher/rke-tools:v0.1.65] on host [192.168.1.188], try #1 
INFO[0005] Image [rancher/rke-tools:v0.1.65] exists on host [192.168.1.188] 
INFO[0005] Starting container [cluster-state-deployer] on host [192.168.1.188], try #1 
INFO[0005] [state] Successfully started [cluster-state-deployer] container on host [192.168.1.188] 
INFO[0005] [certificates] Generating CA kubernetes certificates 
INFO[0005] [certificates] Generating Kubernetes API server aggregation layer requestheader client CA certificates 
INFO[0006] [certificates] GenerateServingCertificate is disabled, checking if there are unused kubelet certificates 
INFO[0006] [certificates] Generating Kubernetes API server certificates
INFO[0006] [certificates] Generating Service account token key 
INFO[0006] [certificates] Generating Kube Controller certificates
INFO[0006] [certificates] Generating Kube Scheduler certificates 
INFO[0006] [certificates] Generating Kube Proxy certificates
INFO[0006] [certificates] Generating Node certificate
INFO[0006] [certificates] Generating admin certificates and kubeconfig
INFO[0006] [certificates] Generating Kubernetes API server proxy client certificates 
INFO[0006] [certificates] Generating kube-etcd-192-168-1-188 certificate and key
INFO[0006] Successfully Deployed state file at [./cluster.rkestate] 
INFO[0006] Building Kubernetes cluster
INFO[0006] [dialer] Setup tunnel for host [192.168.1.188]
INFO[0007] [network] Deploying port listener containers 
INFO[0007] Image [rancher/rke-tools:v0.1.65] exists on host [192.168.1.188]
INFO[0007] Starting container [rke-etcd-port-listener] on host [192.168.1.188], try #1 
INFO[0007] [network] Successfully started [rke-etcd-port-listener] container on host [192.168.1.188] 
INFO[0007] Image [rancher/rke-tools:v0.1.65] exists on host [192.168.1.188]
INFO[0007] Starting container [rke-cp-port-listener] on host [192.168.1.188], try #1 
INFO[0008] [network] Successfully started [rke-cp-port-listener] container on host [192.168.1.188] 
INFO[0008] Image [rancher/rke-tools:v0.1.65] exists on host [192.168.1.188]
INFO[0008] Starting container [rke-worker-port-listener] on host [192.168.1.188], try #1 
INFO[0008] [network] Successfully started [rke-worker-port-listener] container on host [192.168.1.188] 
INFO[0008] [network] Port listener containers deployed successfully
INFO[0008] [network] Running control plane -> etcd port checks
INFO[0008] Image [rancher/rke-tools:v0.1.65] exists on host [192.168.1.188]
INFO[0008] Starting container [rke-port-checker] on host [192.168.1.188], try #1 
INFO[0009] [network] Successfully started [rke-port-checker] container on host [192.168.1.188] 
INFO[0009] Removing container [rke-port-checker] on host [192.168.1.188], try #1 
INFO[0009] [network] Running control plane -> worker port checks 
INFO[0009] Image [rancher/rke-tools:v0.1.65] exists on host [192.168.1.188]
INFO[0009] Starting container [rke-port-checker] on host [192.168.1.188], try #1 
INFO[0009] [network] Successfully started [rke-port-checker] container on host [192.168.1.188] 
INFO[0009] Removing container [rke-port-checker] on host [192.168.1.188], try #1 
INFO[0009] [network] Running workers -> control plane port checks 
INFO[0009] Image [rancher/rke-tools:v0.1.65] exists on host [192.168.1.188]
INFO[0009] Starting container [rke-port-checker] on host [192.168.1.188], try #1 
INFO[0009] [network] Successfully started [rke-port-checker] container on host [192.168.1.188] 
INFO[0009] Removing container [rke-port-checker] on host [192.168.1.188], try #1 
INFO[0009] [network] Checking KubeAPI port Control Plane hosts
INFO[0009] [network] Removing port listener containers  
INFO[0009] Removing container [rke-etcd-port-listener] on host [192.168.1.188], try #1
INFO[0010] [remove/rke-etcd-port-listener] Successfully removed container on host [192.168.1.188] 
INFO[0010] Removing container [rke-cp-port-listener] on host [192.168.1.188], try #1
INFO[0010] [remove/rke-cp-port-listener] Successfully removed container on host [192.168.1.188]
INFO[0010] Removing container [rke-worker-port-listener] on host [192.168.1.188], try #1
INFO[0010] [remove/rke-worker-port-listener] Successfully removed container on host [192.168.1.188]
INFO[0010] [network] Port listener containers removed successfully
INFO[0010] [certificates] Deploying kubernetes certificates to Cluster nodes
INFO[0010] Checking if container [cert-deployer] is running on host [192.168.1.188], try #1
INFO[0010] Image [rancher/rke-tools:v0.1.65] exists on host [192.168.1.188]
INFO[0010] Starting container [cert-deployer] on host [192.168.1.188], try #1 
INFO[0010] Checking if container [cert-deployer] is running on host [192.168.1.188], try #1 
INFO[0015] Checking if container [cert-deployer] is running on host [192.168.1.188], try #1 
INFO[0015] Removing container [cert-deployer] on host [192.168.1.188], try #1 
INFO[0015] [reconcile] Rebuilding and updating local kube config 
INFO[0015] Successfully Deployed local admin kubeconfig at [./kube_config_cluster.yml]
INFO[0015] [certificates] Successfully deployed kubernetes certificates to Cluster nodes
INFO[0015] [file-deploy] Deploying file [/etc/kubernetes/audit-policy.yaml] to node [192.168.1.188]
INFO[0015] Image [rancher/rke-tools:v0.1.65] exists on host [192.168.1.188]
INFO[0016] Starting container [file-deployer] on host [192.168.1.188], try #1
INFO[0016] Successfully started [file-deployer] container on host [192.168.1.188] 
INFO[0016] Waiting for [file-deployer] container to exit on host [192.168.1.188]
INFO[0016] Waiting for [file-deployer] container to exit on host [192.168.1.188]
INFO[0016] Container [file-deployer] is still running on host [192.168.1.188]: stderr: [], stdout: []
INFO[0017] Waiting for [file-deployer] container to exit on host [192.168.1.188] 
INFO[0017] Removing container [file-deployer] on host [192.168.1.188], try #1
INFO[0017] [remove/file-deployer] Successfully removed container on host [192.168.1.188] 
INFO[0017] [/etc/kubernetes/audit-policy.yaml] Successfully deployed audit policy file to Cluster control nodes
INFO[0017] [reconcile] Reconciling cluster state
INFO[0017] [reconcile] This is newly generated cluster
INFO[0017] Pre-pulling kubernetes images
INFO[0017] Pulling image [rancher/hyperkube:v1.18.9-rancher1] on host [192.168.1.188], try #1
INFO[0047] Image [rancher/hyperkube:v1.18.9-rancher1] exists on host [192.168.1.188] 
INFO[0047] Kubernetes images pulled successfully
INFO[0047] [etcd] Building up etcd plane..
INFO[0047] Image [rancher/rke-tools:v0.1.65] exists on host [192.168.1.188]
INFO[0047] Starting container [etcd-fix-perm] on host [192.168.1.188], try #1 
INFO[0047] Successfully started [etcd-fix-perm] container on host [192.168.1.188] 
INFO[0047] Waiting for [etcd-fix-perm] container to exit on host [192.168.1.188]
INFO[0047] Waiting for [etcd-fix-perm] container to exit on host [192.168.1.188]
INFO[0047] Container [etcd-fix-perm] is still running on host [192.168.1.188]: stderr: [], stdout: []
INFO[0048] Waiting for [etcd-fix-perm] container to exit on host [192.168.1.188] 
INFO[0048] Removing container [etcd-fix-perm] on host [192.168.1.188], try #1
INFO[0048] [remove/etcd-fix-perm] Successfully removed container on host [192.168.1.188] 
INFO[0048] Pulling image [rancher/coreos-etcd:v3.4.3-rancher1] on host [192.168.1.188], try #1
INFO[0051] Image [rancher/coreos-etcd:v3.4.3-rancher1] exists on host [192.168.1.188] 
INFO[0051] Starting container [etcd] on host [192.168.1.188], try #1 
INFO[0051] [etcd] Successfully started [etcd] container on host [192.168.1.188] 
INFO[0051] [etcd] Running rolling snapshot container [etcd-snapshot-once] on host [192.168.1.188]
INFO[0051] Image [rancher/rke-tools:v0.1.65] exists on host [192.168.1.188]
INFO[0051] Starting container [etcd-rolling-snapshots] on host [192.168.1.188], try #1 
INFO[0051] [etcd] Successfully started [etcd-rolling-snapshots] container on host [192.168.1.188] 
INFO[0056] Image [rancher/rke-tools:v0.1.65] exists on host [192.168.1.188] 
INFO[0057] Starting container [rke-bundle-cert] on host [192.168.1.188], try #1 
INFO[0057] [certificates] Successfully started [rke-bundle-cert] container on host [192.168.1.188] 
INFO[0057] Waiting for [rke-bundle-cert] container to exit on host [192.168.1.188]
INFO[0057] Container [rke-bundle-cert] is still running on host [192.168.1.188]: stderr: [], stdout: []
INFO[0058] Waiting for [rke-bundle-cert] container to exit on host [192.168.1.188] 
INFO[0058] [certificates] successfully saved certificate bundle [/opt/rke/etcd-snapshots//pki.bundle.tar.gz] on host [192.168.1.188]
INFO[0058] Removing container [rke-bundle-cert] on host [192.168.1.188], try #1
INFO[0058] Image [rancher/rke-tools:v0.1.65] exists on host [192.168.1.188] 
INFO[0058] Starting container [rke-log-linker] on host [192.168.1.188], try #1 
INFO[0059] [etcd] Successfully started [rke-log-linker] container on host [192.168.1.188] 
INFO[0059] Removing container [rke-log-linker] on host [192.168.1.188], try #1
INFO[0059] [remove/rke-log-linker] Successfully removed container on host [192.168.1.188] 
INFO[0059] [etcd] Successfully started etcd plane.. Checking etcd cluster health
INFO[0059] [controlplane] Building up Controller Plane.. 
INFO[0059] Checking if container [service-sidekick] is running on host [192.168.1.188], try #1
INFO[0059] Image [rancher/rke-tools:v0.1.65] exists on host [192.168.1.188]
INFO[0059] Image [rancher/hyperkube:v1.18.9-rancher1] exists on host [192.168.1.188] 
INFO[0059] Starting container [kube-apiserver] on host [192.168.1.188], try #1 
INFO[0059] [controlplane] Successfully started [kube-apiserver] container on host [192.168.1.188] 
INFO[0059] [healthcheck] Start Healthcheck on service [kube-apiserver] on host [192.168.1.188]
INFO[0067] [healthcheck] service [kube-apiserver] on host [192.168.1.188] is healthy 
INFO[0067] Image [rancher/rke-tools:v0.1.65] exists on host [192.168.1.188] 
INFO[0068] Starting container [rke-log-linker] on host [192.168.1.188], try #1 
INFO[0068] [controlplane] Successfully started [rke-log-linker] container on host [192.168.1.188] 
INFO[0068] Removing container [rke-log-linker] on host [192.168.1.188], try #1
INFO[0068] [remove/rke-log-linker] Successfully removed container on host [192.168.1.188] 
INFO[0068] Image [rancher/hyperkube:v1.18.9-rancher1] exists on host [192.168.1.188]
INFO[0068] Starting container [kube-controller-manager] on host [192.168.1.188], try #1 
INFO[0068] [controlplane] Successfully started [kube-controller-manager] container on host [192.168.1.188] 
INFO[0068] [healthcheck] Start Healthcheck on service [kube-controller-manager] on host [192.168.1.188]
INFO[0074] [healthcheck] service [kube-controller-manager] on host [192.168.1.188] is healthy 
INFO[0074] Image [rancher/rke-tools:v0.1.65] exists on host [192.168.1.188] 
INFO[0074] Starting container [rke-log-linker] on host [192.168.1.188], try #1 
INFO[0074] [controlplane] Successfully started [rke-log-linker] container on host [192.168.1.188] 
INFO[0074] Removing container [rke-log-linker] on host [192.168.1.188], try #1
INFO[0075] [remove/rke-log-linker] Successfully removed container on host [192.168.1.188] 
INFO[0075] Image [rancher/hyperkube:v1.18.9-rancher1] exists on host [192.168.1.188]
INFO[0075] Starting container [kube-scheduler] on host [192.168.1.188], try #1
INFO[0075] [controlplane] Successfully started [kube-scheduler] container on host [192.168.1.188] 
INFO[0075] [healthcheck] Start Healthcheck on service [kube-scheduler] on host [192.168.1.188]
INFO[0080] [healthcheck] service [kube-scheduler] on host [192.168.1.188] is healthy 
INFO[0080] Image [rancher/rke-tools:v0.1.65] exists on host [192.168.1.188]
INFO[0080] Starting container [rke-log-linker] on host [192.168.1.188], try #1 
INFO[0081] [controlplane] Successfully started [rke-log-linker] container on host [192.168.1.188] 
INFO[0081] Removing container [rke-log-linker] on host [192.168.1.188], try #1
INFO[0081] [remove/rke-log-linker] Successfully removed container on host [192.168.1.188] 
INFO[0081] [controlplane] Successfully started Controller Plane..
INFO[0081] [authz] Creating rke-job-deployer ServiceAccount
INFO[0081] [authz] rke-job-deployer ServiceAccount created successfully 
INFO[0081] [authz] Creating system:node ClusterRoleBinding
INFO[0081] [authz] system:node ClusterRoleBinding created successfully
INFO[0081] [authz] Creating kube-apiserver proxy ClusterRole and ClusterRoleBinding
INFO[0081] [authz] kube-apiserver proxy ClusterRole and ClusterRoleBinding created successfully
INFO[0081] Successfully Deployed state file at [./cluster.rkestate]
INFO[0081] [state] Saving full cluster state to Kubernetes
INFO[0081] [state] Successfully Saved full cluster state to Kubernetes ConfigMap: full-cluster-state 
INFO[0081] [worker] Building up Worker Plane..
INFO[0081] Checking if container [service-sidekick] is running on host [192.168.1.188], try #1
INFO[0081] [sidekick] Sidekick container already created on host [192.168.1.188]
INFO[0081] Image [rancher/hyperkube:v1.18.9-rancher1] exists on host [192.168.1.188] 
INFO[0081] Starting container [kubelet] on host [192.168.1.188], try #1 
INFO[0081] [worker] Successfully started [kubelet] container on host [192.168.1.188] 
INFO[0081] [healthcheck] Start Healthcheck on service [kubelet] on host [192.168.1.188]
INFO[0092] [healthcheck] service [kubelet] on host [192.168.1.188] is healthy 
INFO[0092] Image [rancher/rke-tools:v0.1.65] exists on host [192.168.1.188]
INFO[0092] Starting container [rke-log-linker] on host [192.168.1.188], try #1 
INFO[0093] [worker] Successfully started [rke-log-linker] container on host [192.168.1.188] 
INFO[0093] Removing container [rke-log-linker] on host [192.168.1.188], try #1
INFO[0093] [remove/rke-log-linker] Successfully removed container on host [192.168.1.188] 
INFO[0093] Image [rancher/hyperkube:v1.18.9-rancher1] exists on host [192.168.1.188]
INFO[0093] Starting container [kube-proxy] on host [192.168.1.188], try #1
INFO[0093] [worker] Successfully started [kube-proxy] container on host [192.168.1.188] 
INFO[0093] [healthcheck] Start Healthcheck on service [kube-proxy] on host [192.168.1.188]
INFO[0098] [healthcheck] service [kube-proxy] on host [192.168.1.188] is healthy 
INFO[0098] Image [rancher/rke-tools:v0.1.65] exists on host [192.168.1.188]
INFO[0099] Starting container [rke-log-linker] on host [192.168.1.188], try #1 
INFO[0099] [worker] Successfully started [rke-log-linker] container on host [192.168.1.188] 
INFO[0099] Removing container [rke-log-linker] on host [192.168.1.188], try #1
INFO[0099] [remove/rke-log-linker] Successfully removed container on host [192.168.1.188] 
INFO[0099] [worker] Successfully started Worker Plane..
INFO[0099] Image [rancher/rke-tools:v0.1.65] exists on host [192.168.1.188]
INFO[0099] Starting container [rke-log-cleaner] on host [192.168.1.188], try #1 
INFO[0099] [cleanup] Successfully started [rke-log-cleaner] container on host [192.168.1.188] 
INFO[0099] Removing container [rke-log-cleaner] on host [192.168.1.188], try #1
INFO[0100] [remove/rke-log-cleaner] Successfully removed container on host [192.168.1.188] 
INFO[0100] [sync] Syncing nodes Labels and Taints
INFO[0100] [sync] Successfully synced nodes Labels and Taints
INFO[0100] [network] Setting up network plugin: canal   
INFO[0100] [addons] Saving ConfigMap for addon rke-network-plugin to Kubernetes
INFO[0100] [addons] Successfully saved ConfigMap for addon rke-network-plugin to Kubernetes
INFO[0100] [addons] Executing deploy job rke-network-plugin
INFO[0115] [addons] Setting up coredns
INFO[0115] [addons] Saving ConfigMap for addon rke-coredns-addon to Kubernetes
INFO[0115] [addons] Successfully saved ConfigMap for addon rke-coredns-addon to Kubernetes
INFO[0115] [addons] Executing deploy job rke-coredns-addon
INFO[0120] [addons] CoreDNS deployed successfully       
INFO[0120] [dns] DNS provider coredns deployed successfully
INFO[0120] [addons] Setting up Metrics Server
INFO[0120] [addons] Saving ConfigMap for addon rke-metrics-addon to Kubernetes 
INFO[0120] [addons] Successfully saved ConfigMap for addon rke-metrics-addon to Kubernetes
INFO[0120] [addons] Executing deploy job rke-metrics-addon
INFO[0130] [addons] Metrics Server deployed successfully 
INFO[0130] [ingress] Setting up nginx ingress controller
INFO[0130] [addons] Saving ConfigMap for addon rke-ingress-controller to Kubernetes
INFO[0130] [addons] Successfully saved ConfigMap for addon rke-ingress-controller to Kubernetes
INFO[0130] [addons] Executing deploy job rke-ingress-controller
INFO[0140] [ingress] ingress controller nginx deployed successfully 
INFO[0140] [addons] Setting up user addons
INFO[0140] [addons] no user addons defined
INFO[0140] Finished building Kubernetes cluster successfully

Connecting to Kubernetes cluster

  1. Download latest kubectl
curl -LO "https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl"
  2. Assign executable permissions
chmod +x ./kubectl
  3. Move file to default executable location
sudo mv ./kubectl /usr/local/bin/kubectl
  4. Check kubectl version
kubectl version --client
  5. Copy rancher exported kube cluster YAML file to $HOME/.kube/config
mkdir -p $HOME/.kube
cp kube_config_cluster.yml $HOME/.kube/config
  6. Connect to Kubernetes cluster and get pods
kubectl get pods -A

HELM Installation

curl https://baltocdn.com/helm/signing.asc | sudo apt-key add -
sudo apt-get install apt-transport-https --yes
echo "deb https://baltocdn.com/helm/stable/debian/ all main" | sudo tee /etc/apt/sources.list.d/helm-stable-debian.list
sudo apt-get update
sudo apt-get install helm

Setup Rancher in Kubernetes cluster

helm repo add rancher-latest https://releases.rancher.com/server-charts/latest
kubectl create namespace cattle-system
kubectl apply --validate=false -f https://github.com/jetstack/cert-manager/releases/download/v0.15.0/cert-manager.crds.yaml
kubectl create namespace cert-manager
helm repo add jetstack https://charts.jetstack.io
helm repo update
helm install \
  cert-manager jetstack/cert-manager \
  --namespace cert-manager \
  --version v0.15.0

Set cert-manager

Define the DNS name for the certificate request and install Rancher. You can replace rancher.my.org with your own DNS alias:

helm install rancher rancher-latest/rancher \
  --namespace cattle-system \
  --set hostname=rancher.my.org
kubectl -n cattle-system rollout status deploy/rancher

NOTE: If you are working in a lab environment without DNS, make sure to add rancher.my.org as a hosts file entry on your system.
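
For example, on a Linux client you could add a line like the one below to /etc/hosts, assuming the single RKE node at 192.168.1.188 is also serving the ingress:

192.168.1.188   rancher.my.org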

Rancher UI setup instructions can be followed from here.

Container 101

What is a Container?

A container is a stripped-down, lightweight image of an operating system that is bundled with the application package and other dependencies and runs as an isolated process. A container shares core components such as the kernel, network drivers, system tools, libraries, and many other settings with the host operating system. The purpose of a container is to provide a standard image that can be packaged, published, and deployed to a cluster such as Docker Swarm or Kubernetes. To learn more about container history, please visit the official Docker website.

What is a Docker image?

A Docker image contains a stripped-down operating system image, your application code, and the configuration required to run in any container cluster such as Kubernetes or Docker Swarm. An image is built from a Dockerfile, which is a set of instructions that assembles the image together with all of its dependencies. To learn more about how to write a Dockerfile, please visit this website.
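
As an illustration, a minimal Dockerfile for a static site served by Nginx might look like the sketch below; the base image, paths, and tag are placeholders rather than part of any particular project:

# Start from a small official base image
FROM nginx:alpine

# Copy the application's static content into the image
COPY html/ /usr/share/nginx/html/

# Document the port the application listens on
EXPOSE 80

Build and tag the image from the folder containing the Dockerfile:

docker build -t my-app:1.0 .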

Follow the below guide to get started with Containers

Beginners guide

Follow the below guide in order to get the most out of it.

Intermediate guide

Advanced guide

Thank you for following this tutorial. Please comment and subscribe to receive new tutorials in email.

Run your first Windows Container on your Windows 10

Windows 10 now comes with container features available in the Pro and Enterprise editions. To get started with containers on Windows 10, please make sure the below prerequisites are met.

Pre-requisites

Let’s ensure we have the prerequisites installed before we get started with the Docker CLI and container installation. If you already have the below items installed, you can skip them and proceed with the setup.

Windows 10 ships with a Containers feature that lets developers and DevOps engineers run Docker containers in their local environment. To enable the Containers feature on Windows 10, execute the below command from an elevated PowerShell console.

Enable-WindowsOptionalFeature -FeatureName containers -Online -all

NOTE: Upon installation, you will be prompted to reboot your system after the container feature is enabled. It is recommended that you select yes to reboot your system.

Install Docker CLI

Now we will go ahead and install the latest Docker CLI by using the Chocolatey package management tool.
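
If Chocolatey is not installed yet, it can be installed from an elevated PowerShell console. The commands below reflect the installer published on chocolatey.org at the time of writing, so check the Chocolatey site for the current version:

# Allow the installer script to run in this session and force TLS 1.2
Set-ExecutionPolicy Bypass -Scope Process -Force
[System.Net.ServicePointManager]::SecurityProtocol = [System.Net.ServicePointManager]::SecurityProtocol -bor 3072
iex ((New-Object System.Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))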

Once you have choco installed, go ahead and open PowerShell as an administrator and execute the below command.

choco install docker

You will be asked to confirm yes or no. Go ahead and continue with the interactive installation process by pressing Y; the output will show whether the installation was successful.

Now that you have docker cli installed, you are now ready to run your first Docker container.

Installing Docker Enterprise Edition (EE)

To install Docker EE on Windows 10, please make sure the above setup has completed successfully. To get started, go ahead and execute the below commands from an elevated PowerShell console.

Go to the Downloads folder of your current user:

cd ~\Downloads

Download Docker Enterprise Edition:

Invoke-WebRequest -UseBasicParsing -OutFile docker-18.09.3.zip https://download.docker.com/components/engine/windows-server/18.09/docker-18.09.3.zip

NOTE: In this tutorial I am using Docker 18.09.3 version. This may change in the future. You can follow the updated document from here.

Unzip the Docker package.

Expand-Archive docker-18.09.3.zip -DestinationPath $Env:ProgramFiles -Force

Execute the below script to set up and start Docker.

# Add Docker to the path for the current session.
$env:path += ";$env:ProgramFiles\docker"

# Optionally, modify PATH to persist across sessions.
$newPath = "$env:ProgramFiles\docker;" +
[Environment]::GetEnvironmentVariable("PATH",
[EnvironmentVariableTarget]::Machine)

[Environment]::SetEnvironmentVariable("PATH", $newPath,
[EnvironmentVariableTarget]::Machine)

# Register the Docker daemon as a service.
dockerd --register-service

# Start the Docker service.
Start-Service docker

Test your Docker setup by executing the below command.

docker container run hello-world:nanoserver

Running your first Docker container

In this example, I will be using nanoserver image from Docker hub to run an IIS application.

Step 1: Let’s first check if we have any Docker images pulled from Docker hub. Based on the above setup for Docker, you should have a hello-world Docker image pulled from Docker hub.

docker images

Step 2: Let’s pull a new Docker image from Docker hub to run nanoserver with IIS configured.

docker pull nanoserver/iis

Your final output should show the nanoserver/iis image layers being pulled successfully.

Step 3: After we have pulled the latest image from Docker hub, let’s run our first windows container by executing the below command.

docker run --name nanoiis -d -it -p 80:80 nanoserver/iis

It will return a container ID that you can use to check the container status, configuration, etc.

Step 4: Check our first container status by executing the below command.

docker ps -a -f status=running

The status output should show the nanoiis container with a running status.

Step 5: Now let’s get the IP address of our container to access it from the browser.

docker inspect -f "{{ .NetworkSettings.Networks.nat.IPAddress }}" nanoiis

Step 6: Copy the IP address that was returned to the PowerShell console and browse it in Internet Explorer.

In my case, I received 172.19.231.54. Yours may be different.

This is it! You have run your first Windows container on your Windows 10 machine. Thank you for following this tutorial.

Kubernetes Command line

To manage a Kubernetes cluster, you need to know some basic commands. Below are basic command lines for managing pods, nodes, services, etc.

Pods

kubectl get pods is a command used in Kubernetes, a popular container orchestration platform, to retrieve information about pods that are running within a cluster. Pods are the smallest deployable units in Kubernetes and can contain one or more containers.

When you run the kubectl get pods command, it queries the Kubernetes API server for information about the pods within the current context or specified namespace. The output of this command will typically display a table containing various columns that provide information about each pod. The columns might include:

  1. NAME: The name of the pod.
  2. READY: This column shows the number of containers in the pod that are ready out of the total number of containers.
  3. STATUS: The current status of the pod, which can be “Running,” “Pending,” “Succeeded,” “Failed,” or “Unknown.”
  4. RESTARTS: The number of times the containers in the pod have been restarted.
  5. AGE: The amount of time that the pod has been running since its creation.
  6. IP: The IP address assigned to the pod within the cluster network.
  7. NODE: The name of the node where the pod is scheduled to run.

Here’s an example output of the kubectl get pods command:

kubectl get pods

Output:

[rahil@k8s-master-node ~]$ kubectl get pods
NAME                     READY     STATUS    RESTARTS   AGE
nginx-7587c6fdb6-dz7tj   1/1       Running   0          29m

The below command will watch for events and report status changes on the console.

kubectl get pods -w

Output:

[rahil@k8s-master-node ~]$ kubectl get pods -w
NAME                     READY     STATUS              RESTARTS   AGE
nginx-7587c6fdb6-dz7tj   0/1       ContainerCreating   0          20s
nginx-7587c6fdb6-dz7tj   1/1       Running   0         58s

If you would like to see more details on an individual pod, you can use kubectl describe.

kubectl describe pods <pod_name>

Output:

[rahil@k8s-master-node ~]$ kubectl describe pods nginx
Name:           nginx-7587c6fdb6-dz7tj
Namespace:      default
Node:           k8s-worker-node-1/192.168.0.226
Start Time:     Sun, 25 Feb 2018 17:06:46 -0500
Labels:         pod-template-hash=3143729862
                run=nginx
Annotations:    <none>
Status:         Running
IP:             10.244.1.2
Controlled By:  ReplicaSet/nginx-7587c6fdb6
Containers:
  nginx:
    Container ID:   docker://22a5e181b9c7b56351ccdc9ed41c1f6dfd776e56d45e464399d9a92479657a18
    Image:          nginx
    Image ID:       docker-pullable://docker.io/nginx@sha256:4771d09578c7c6a65299e110b3ee1c0a2592f5ea2618d23e4ffe7a4cab1ce5de
    Port:           80/TCP
    State:          Running
      Started:      Sun, 25 Feb 2018 17:07:42 -0500
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-jpflm (ro)
Conditions:
  Type           Status
  Initialized    True
  Ready          True
  PodScheduled   True
Volumes:
  default-token-jpflm:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-jpflm
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason                 Age   From                        Message
  ----    ------                 ----  ----                        -------
  Normal  Scheduled              6m    default-scheduler           Successfully assigned nginx-7587c6fdb6-dz7tj to k8s-worker-node-1
  Normal  SuccessfulMountVolume  6m    kubelet, k8s-worker-node-1  MountVolume.SetUp succeeded for volume "default-token-jpflm"
  Normal  Pulling                6m    kubelet, k8s-worker-node-1  pulling image "nginx"
  Normal  Pulled                 6m    kubelet, k8s-worker-node-1  Successfully pulled image "nginx"
  Normal  Created                6m    kubelet, k8s-worker-node-1  Created container
  Normal  Started                5m    kubelet, k8s-worker-node-1  Started container

Nodes

The below command will get all the nodes.

kubectl get nodes

Output:

[rahil@k8s-master-node ~]$ kubectl get nodes
NAME                STATUS    ROLES     AGE       VERSION
k8s-master-node     Ready     master    3h        v1.9.3
k8s-worker-node-1   Ready     <none>    2h        v1.9.3

The below command will provide you with all the information about nodes in table view mode.

kubectl get nodes -o wide

Output:

[rahil@k8s-master-node ~]$ kubectl get nodes -o wide
NAME                STATUS    ROLES     AGE       VERSION   EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION               CONTAINER-RUNTIME
k8s-master-node     Ready     master    3h        v1.9.3    <none>        CentOS Linux 7 (Core)   3.10.0-693.17.1.el7.x86_64   docker://1.12.6
k8s-worker-node-1   Ready     <none>    2h        v1.9.3    <none>        CentOS Linux 7 (Core)   3.10.0-693.17.1.el7.x86_64   docker://1.12.6

The below command will provide you with details about a node.

kubectl describe nodes <node_name>

Output:

[rahil@k8s-master-node ~]$ kubectl describe nodes k8s-worker-node-1
Name:               k8s-worker-node-1
Roles:              <none>
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/hostname=k8s-worker-node-1
Annotations:        flannel.alpha.coreos.com/backend-data={"VtepMAC":"d2:bd:02:7a:cb:65"}
                    flannel.alpha.coreos.com/backend-type=vxlan
                    flannel.alpha.coreos.com/kube-subnet-manager=true
                    flannel.alpha.coreos.com/public-ip=192.168.0.226
                    node.alpha.kubernetes.io/ttl=0
                    volumes.kubernetes.io/controller-managed-attach-detach=true
Taints:             <none>
CreationTimestamp:  Sun, 25 Feb 2018 14:42:46 -0500
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  OutOfDisk        False   Sun, 25 Feb 2018 17:22:32 -0500   Sun, 25 Feb 2018 14:42:46 -0500   KubeletHasSufficientDisk     kubelet has sufficient disk space available
  MemoryPressure   False   Sun, 25 Feb 2018 17:22:32 -0500   Sun, 25 Feb 2018 14:42:46 -0500   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Sun, 25 Feb 2018 17:22:32 -0500   Sun, 25 Feb 2018 14:42:46 -0500   KubeletHasNoDiskPressure     kubelet has no disk pressure
  Ready            True    Sun, 25 Feb 2018 17:22:32 -0500   Sun, 25 Feb 2018 14:44:06 -0500   KubeletReady                 kubelet is posting ready status
Addresses:
  InternalIP:  192.168.0.226
  Hostname:    k8s-worker-node-1
Capacity:
 cpu:     2
 memory:  916556Ki
 pods:    110
Allocatable:
 cpu:     2
 memory:  814156Ki
 pods:    110
System Info:
 Machine ID:                 46a73b56be064f2fb8d81ac54f2e0349
 System UUID:                8601D8C9-C1D9-2C4B-B6C9-2A3D2A598E4A
 Boot ID:                    963f0791-c81c-4bb1-aecd-6bd079abea67
 Kernel Version:             3.10.0-693.17.1.el7.x86_64
 OS Image:                   CentOS Linux 7 (Core)
 Operating System:           linux
 Architecture:               amd64
 Container Runtime Version:  docker://1.12.6
 Kubelet Version:            v1.9.3
 Kube-Proxy Version:         v1.9.3
PodCIDR:                     10.244.1.0/24
ExternalID:                  k8s-worker-node-1
Non-terminated Pods:         (3 in total)
  Namespace                  Name                      CPU Requests  CPU Limits  Memory Requests  Memory Limits
  ---------                  ----                      ------------  ----------  ---------------  -------------
  default                    nginx-7587c6fdb6-dz7tj    0 (0%)        0 (0%)      0 (0%)           0 (0%)
  kube-system                kube-flannel-ds-wtdz4     100m (5%)     100m (5%)   50Mi (6%)        50Mi (6%)
  kube-system                kube-proxy-m2hfm          0 (0%)        0 (0%)      0 (0%)           0 (0%)
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  CPU Requests  CPU Limits  Memory Requests  Memory Limits
  ------------  ----------  ---------------  -------------
  100m (5%)     100m (5%)   50Mi (6%)        50Mi (6%)
Events:         <none>

Services

The below command will give you all the available services in the default namespace.

kubectl get services

Output:

[rahil@k8s-master-node ~]$ kubectl get services
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   3h

The below command will give you all the services in all namespaces.

kubectl get services --all-namespaces

Output:

[rahil@k8s-master-node ~]$ kubectl get services --all-namespaces
NAMESPACE     NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE
default       kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP         3h
kube-system   kube-dns     ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP   3h

How to get status of all nodes in Kubernetes

In this tutorial, I will show you how to get the status of all the nodes in a Kubernetes cluster.

To get the status of all nodes, execute the below command:

kubectl get nodes

output:

[rahil@k8s-master-node ~]$ kubectl get nodes
NAME                STATUS    ROLES     AGE       VERSION
k8s-master-node     Ready     master    41m       v1.9.3
k8s-worker-node-1   Ready     <none>    16m       v1.9.3

To get more information about the nodes, execute the below command:

kubectl get nodes -o wide

Output:

[rahil@k8s-master-node ~]$ kubectl get nodes -o wide
NAME                STATUS    ROLES     AGE       VERSION   EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION               CONTAINER-RUNTIME
k8s-master-node     Ready     master    41m       v1.9.3    <none>        CentOS Linux 7 (Core)   3.10.0-693.17.1.el7.x86_64   docker://1.12.6
k8s-worker-node-1   Ready     <none>    16m       v1.9.3    <none>        CentOS Linux 7 (Core)   3.10.0-693.17.1.el7.x86_64   docker://1.12.6
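
If a node reports a status other than Ready, kubectl can show its conditions, capacity, and recent events. The node name below is taken from the output above; replace it with one of your own nodes.

kubectl describe node k8s-worker-node-1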

The post How to get status of all nodes in Kubernetes appeared first on TEKSpace Blog.

How to check status of pods in kubernetes https://blog.tekspace.io/how-to-check-status-of-pods-in-kubernetes/ https://blog.tekspace.io/how-to-check-status-of-pods-in-kubernetes/#respond Mon, 26 Feb 2018 00:55:43 +0000 https://blog.tekspace.io/index.php/2018/02/25/how-to-check-status-of-pods-in-kubernetes/ To check status of a pod in kubernetes cluster. Connect to master node and execute below command. Below command will give status of all pods from default namespace. kubectl get pods If you like to check the status of pods in system namespace, execute below command. kubectl get pods --all-namespaces If you like to check […]


To check the status of a pod in a Kubernetes cluster, connect to the master node and execute the below command.

The below command will give the status of all pods in the default namespace.

kubectl get pods

If you would like to check the status of pods in all namespaces, including the system namespace, execute the below command.

kubectl get pods --all-namespaces

If you would like to check the status of pods and the node each one is running on, execute the below command:

kubectl get pods -o wide

or

kubectl get pods --all-namespaces -o wide
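
You can also narrow the output to a single namespace with -n, or keep watching for status changes with -w. The kube-system namespace below is just an example; use whichever namespace you are interested in.

kubectl get pods -n kube-system
kubectl get pods --all-namespaces -w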

The post How to check status of pods in kubernetes appeared first on TEKSpace Blog.

Setup Kubernetes Cluster on CentOS 7 https://blog.tekspace.io/setup-kubernetes-cluster-on-centos-7/ https://blog.tekspace.io/setup-kubernetes-cluster-on-centos-7/#respond Sun, 25 Feb 2018 09:20:47 +0000 https://blog.tekspace.io/index.php/2018/02/25/setup-kubernetes-cluster-on-centos-7/ In this tutorial, we will use kubeadm to configure a Kubernetes cluster on CentOS 7.4. IMPORTANT NOTE: Ensure swap is disabled on both master and worker nodes. Kubernetes requires swap to be disabled in order for it to successfully configure Kubernetes Cluster. Before you start setting up Kubernetes cluster, it is recommended that you update […]

In this tutorial, we will use kubeadm to configure a Kubernetes cluster on CentOS 7.4.

IMPORTANT NOTE: Ensure swap is disabled on both master and worker nodes. Kubernetes requires swap to be disabled in order to successfully configure the cluster.
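
If swap is currently enabled, a minimal sketch for turning it off looks like the following. The sed line assumes a typical CentOS 7 /etc/fstab with swap entries containing the word "swap", so review the file before relying on it.

# Turn swap off immediately
sudo swapoff -a

# Keep swap off across reboots by commenting out swap entries in /etc/fstab
sudo sed -i '/ swap / s/^/#/' /etc/fstab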

Before you start setting up the Kubernetes cluster, it is recommended that you update your system so that all security patches are applied.

Execute the below command:

sudo yum update -y

Install Docker

Docker is required in order to configure a Kubernetes cluster. Execute the below command to install Docker on your system.

sudo yum install -y docker

Enable & start Docker service.

sudo systemctl enable docker && sudo systemctl start docker

Verify the Docker version is 1.12 or greater.

sudo docker version
[rahil@k8s-master ~]$ sudo docker version
Client:
 Version:         1.12.6
 API version:     1.24
 Package version: docker-1.12.6-71.git3e8e77d.el7.centos.1.x86_64
 Go version:      go1.8.3
 Git commit:      3e8e77d/1.12.6
 Built:           Tue Jan 30 09:17:00 2018
 OS/Arch:         linux/amd64

Server:
 Version:         1.12.6
 API version:     1.24
 Package version: docker-1.12.6-71.git3e8e77d.el7.centos.1.x86_64
 Go version:      go1.8.3
 Git commit:      3e8e77d/1.12.6
 Built:           Tue Jan 30 09:17:00 2018
 OS/Arch:         linux/amd64

Install Kubernetes packages

Configure yum to install kubeadm, kubectl, and kubelet.

Copy the below content and execute it on your CentOS nodes.

sudo bash -c 'cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF'

Disable SELinux:

sudo setenforce 0

In order for the Kubernetes cluster to communicate internally, we have to disable SELinux. Currently, SELinux is not fully supported by Kubernetes. This may change in the future as SELinux support improves.
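
Note that setenforce 0 only switches SELinux to permissive mode until the next reboot. A minimal sketch to persist the change, assuming the default /etc/selinux/config layout, is:

sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config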

Installing packages for Kubernetes:

On Master node:

sudo yum install -y kubelet kubeadm kubectl

Enable & start kubelet service:

sudo systemctl enable kubelet && sudo systemctl start kubelet

On worker nodes:

sudo yum install -y kubelet kubeadm kubectl

Copy the below content and execute it on the CentOS master and worker nodes. There have been reports of traffic being routed incorrectly by iptables; the below settings ensure bridged traffic is passed to iptables correctly.

sudo bash -c 'cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF'

Execute the below command to apply the above changes.

sudo sysctl --system
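
To confirm the new values are active, you can read them back. This assumes the br_netfilter module is loaded, which is normally the case once Docker and the bridge interface are running.

sudo sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables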

Once you have completed the above steps on all your CentOS nodes, including master and worker nodes, let’s go ahead and configure the master node.

Disable Firewall on Master node

A Kubernetes cluster uses iptables to manage inbound and outbound traffic. In order to avoid conflicts, we will disable firewalld on the CentOS 7 system. If you prefer to keep the firewall enabled, I recommend allowing port 6443 so worker nodes can communicate with the master node; see the example after the status check below.

Disable:

sudo systemctl disable firewalld

Stop:

sudo systemctl stop firewalld

Check status:

sudo systemctl status firewalld
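
If you decide to leave firewalld running instead, a minimal sketch for opening the API server port mentioned above would be:

sudo firewall-cmd --permanent --add-port=6443/tcp
sudo firewall-cmd --reload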

Configuring Kubernetes Master node

On your CentOS master node, execute the following commands:

sudo kubeadm init --pod-network-cidr 10.244.0.0/16

NOTE: Ensure SWAP is disabled on all your CentOS systems. Kubernetes cluster configuration will fail if swap is not disabled.

Once the above command has completed, it will output Kubernetes cluster information. Please make sure you store the token information somewhere safe. It will be needed to join worker nodes to the Kubernetes cluster.
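
If the token is lost or expires before the worker nodes have joined, a new one can be generated on the master node later. This is a sketch; the --print-join-command flag is an assumption that your kubeadm release is recent enough to support it.

sudo kubeadm token list
sudo kubeadm token create --print-join-command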

Output from kubeadm init:

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join --token a2dc82.7e936a7ba007f01e 10.0.0.7:6443 --discovery-token-ca-cert-hash sha256:30aca9f9c04f829a13c925224b34c47df0a784e9ba94e132a983658a70ee2914

On the master node, apply the below changes after kubeadm init has completed successfully:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
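
As a quick sanity check that kubectl can reach the new control plane, you can query the cluster endpoints and the node list:

kubectl cluster-info
kubectl get nodes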

Configuring Pod Networking

Before we set up worker nodes, we need to ensure pod networking is functional. Pod networking is also a dependency for the kube-dns pod to manage pod DNS.

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/k8s-manifests/kube-flannel-rbac.yml

Ensure all pods are in running status by executing the below command:

kubectl get pods --all-namespaces

It may take some time depending on your system configuration and network speed, as it pulls all the images required to run the pods in the system namespace.

Once all the pods are in running status, let’s configure worker nodes.

Configure Kubernetes Worker nodes

To configure worker nodes to be part of the Kubernetes cluster, we need to use the kubeadm join command with the token received from the master node.

Execute the below command to join the worker node to the Kubernetes cluster.

sudo kubeadm join --token a2dc82.7e936a7ba007f01e 10.0.0.7:6443 --discovery-token-ca-cert-hash sha256:30aca9f9c04f829a13c925224b34c47df0a784e9ba94e132a983658a70ee2914

Once the node has joined the cluster, you will see output similar to the following on your console.

[preflight] Running pre-flight checks.
        [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
        [WARNING FileExisting-crictl]: crictl not found in system path
[preflight] Starting the kubelet service
[discovery] Trying to connect to API Server "10.0.0.8:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.0.0.8:6443"
[discovery] Requesting info from "https://10.0.0.8:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "10.0.0.8:6443"
[discovery] Successfully established connection with API Server "10.0.0.8:6443"

This node has joined the cluster:
* Certificate signing request was sent to master and a response
  was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

On the Kubernetes master node, execute the below command to see node status. If the node status is Ready, your worker node is ready to host pods.

kubectl get nodes

Output for the above command:

[rahil@k8s-master-node ~]$  kubectl get nodes
NAME                STATUS    ROLES     AGE       VERSION
k8s-master-node     Ready     master    28m       v1.9.3
k8s-worker-node-1   Ready     <none>    3m        v1.9.3

If you see worker node status ready, then you are ready to deploy pods on your worker nodes.
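
As a final smoke test, you could schedule a throwaway workload and confirm it lands on a worker node. Note that on the kubectl version used here, kubectl run creates a Deployment, while newer releases create a bare pod instead, so the cleanup command differs by version.

kubectl run nginx --image=nginx
kubectl get pods -o wide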

The post Setup Kubernetes Cluster on CentOS 7 appeared first on TEKSpace Blog.

Kubernetes Dashboard https://blog.tekspace.io/kubernetes-dashboard/ https://blog.tekspace.io/kubernetes-dashboard/#respond Mon, 12 Feb 2018 18:12:45 +0000 https://blog.tekspace.io/index.php/2018/02/12/kubernetes-dashboard/ Kubernetes Dashboard allows you to manage pods and cluster configuration from a web user interface (UI). You can write your own YAML or json file and upload it via Dashboard, and it will automatically create deployments for you. Kubernetes Dashboard is free to install, and you can follow the below steps. I will be using […]

Kubernetes Dashboard allows you to manage pods and cluster configuration from a web user interface (UI). You can write your own YAML or JSON file and upload it via the Dashboard, and it will automatically create the deployments for you.

Kubernetes Dashboard is free to install, and you can follow the below steps. I will be using Kubernetes v1.9.2; depending on when you are reading this blog, the version may differ.

Log in to your master node and execute the below command:

sudo kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml

The above command will create a deployment in Kubernetes. You can check the status of the pod by executing the following command:

sudo kubectl get pods --all-namespaces

If you see the status ContainerCreating, wait a few minutes; the image is being downloaded and the pod is being spun up for you. After a few minutes, the status of the pod should change to Running.

To access Kubernetes Dashboard after deployment, use the command below on your remote machine to open a proxy connection to the Kubernetes master node.

kubectl proxy
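
With the proxy running, the Dashboard UI is typically reachable in a browser at a URL like the one below. The exact path is an assumption based on the Dashboard 1.8-era deployment into the kube-system namespace, so adjust the namespace and service name if your manifest differs.

http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/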

Kubernetes Dashboard needs cluster role permissions in order to be accessed remotely. Follow this article here.

Also, if you haven’t already installed the Kubernetes CLI on a remote machine to manage the Kubernetes cluster, I recommend this article as well.

The post Kubernetes Dashboard appeared first on TEKSpace Blog.
