How to create docker registry credentials using kubectl

Updating a Docker registry secret (often named regcred in Kubernetes environments) with new credentials can be essential for workflows that need access to private registries for pulling images. This process involves creating a new secret with the updated credentials and then patching or updating the deployments or pods that use this secret.

Here’s a step-by-step guide to do it:

Step 1: Create a New Secret with Updated Credentials

  1. Log in to Docker Registry: Before updating the secret, ensure you can log in to the Docker registry from your command line interface (for example, with docker login) so you know the credentials are valid.
  2. Create or Update the Secret: Use the kubectl create secret command to create a new secret or update an existing one with your Docker credentials. If you replace a secret by recreating it, you might need to delete the old secret first; the apply-based command below updates it in place. To create a new secret (or replace an existing one):
kubectl create secret docker-registry regcred \
  --docker-server=<YOUR_REGISTRY_SERVER> \
  --docker-username=<YOUR_USERNAME> \
  --docker-password=<YOUR_PASSWORD> \
  --docker-email=<YOUR_EMAIL> \
  --namespace=<NAMESPACE> \
  --dry-run=client -o yaml | kubectl apply -f -

Replace <YOUR_REGISTRY_SERVER>, <YOUR_USERNAME>, <YOUR_PASSWORD>, <YOUR_EMAIL>, and <NAMESPACE> with your Docker registry details and the appropriate namespace. The --dry-run=client -o yaml | kubectl apply -f - part generates the secret definition and applies it to your cluster, effectively updating the secret if it already exists.
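To confirm the secret now holds the expected registry and username, you can decode it for a quick check (a sketch, assuming a shell with base64 available; this is the standard way to inspect a docker-registry secret):

kubectl get secret regcred --namespace=<NAMESPACE> --output=jsonpath='{.data.\.dockerconfigjson}' | base64 --decode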

Step 2: Update Deployments or Pods to Use the New Secret

If you’ve created a new secret with a different name, you’ll need to update your deployment or pod specifications to reference the new secret name. This step is unnecessary if you’ve updated an existing secret.

  1. Edit Deployment or Pod Specification: Locate your deployment or pod definition files (YAML files) and update the imagePullSecrets section to reference the new secret name if it has changed.
  2. Apply the Changes: Use kubectl apply -f <deployment-or-pod-file>.yaml to apply the changes to your cluster.
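Alternatively, if the imagePullSecrets entry just needs to point at the new secret name, a strategic merge patch avoids editing files by hand. A sketch, where the deployment name and namespace are placeholders:

kubectl patch deployment <DEPLOYMENT_NAME> -n <NAMESPACE> \
  -p '{"spec":{"template":{"spec":{"imagePullSecrets":[{"name":"regcred"}]}}}}'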

Step 3: Verify the Update

Ensure that your deployments or pods can successfully pull images using the updated credentials.

  1. Check Pod Status: Use kubectl get pods to check the status of your pods. Ensure they are running and not stuck in an ImagePullBackOff or similar error status due to authentication issues.
  2. Check Logs: For further verification, check the logs of your pods or deployments to ensure there are no errors related to pulling images from the Docker registry. You can use kubectl logs <pod-name> to view logs.
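If a pod is stuck in ImagePullBackOff, the Events section of the pod description usually shows the exact registry error (for example, an authentication failure). The pod name below is a placeholder:

kubectl describe pod <pod-name>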

This method ensures that your Kubernetes deployments can continue to pull images from private registries without interruption, using the updated credentials.

Rancher Kubernetes Single Node Setup

Prerequisites
  1. VM requirements
    • One Ubuntu 20.04 VM node where RKE Cluster will be running.
    • One Ubuntu 20.04 host node where the RKE CLI will be configured and used to set up the cluster.
  2. Disable swap and firewall
sudo ufw disable
sudo swapoff -a; sudo sed -i '/swap/d' /etc/fstab
  3. Update sysctl settings
cat <<EOF | sudo tee /etc/sysctl.d/kubernetes.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sudo sysctl --system
  4. Docker installed on all nodes
    • Login to Ubuntu VM with your sudo account.
    • Execute the following commands:
sudo apt-get update
sudo apt-get upgrade
curl https://releases.rancher.com/install-docker/19.03.sh | sudo sh
  5. New User and add to Docker group
sudo adduser rkeuser
sudo passwd rkeuser
sudo usermod -aG docker rkeuser
  6. SSH Key Gen and copy keys
ssh-keygen -t rsa -b 2048
ssh-copy-id rkeuser@192.168.1.188

Download rke package and set executable permissions

wget https://github.com/rancher/rke/releases/download/v1.1.0/rke_linux-amd64
sudo cp rke_linux-amd64 /usr/local/bin/rke
sudo chmod +x /usr/local/bin/rke

RKE Cluster setup

First, we must set up the RKE cluster configuration file that describes the node where the cluster will be deployed. Run the command below and continue through the interactive prompts to configure a single-node cluster; a sketch of the resulting file follows.

rke config
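This writes a cluster.yml file to the current directory. For a single-node cluster, the nodes section of the generated file looks roughly like the sketch below; the address, user, and SSH key path here reflect this tutorial's environment and are assumptions:

nodes:
- address: 192.168.1.188
  user: rkeuser
  role:
  - controlplane
  - worker
  - etcd
  ssh_key_path: ~/.ssh/id_rsa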

Run the below command to set up the RKE cluster:

rke up

Output:

INFO[0000] Running RKE version: v1.1.9
INFO[0000] Initiating Kubernetes cluster
INFO[0000] [dialer] Setup tunnel for host [192.168.1.188] 
INFO[0000] Checking if container [cluster-state-deployer] is running on host [192.168.1.188], try #1 
INFO[0000] Pulling image [rancher/rke-tools:v0.1.65] on host [192.168.1.188], try #1 
INFO[0005] Image [rancher/rke-tools:v0.1.65] exists on host [192.168.1.188] 
INFO[0005] Starting container [cluster-state-deployer] on host [192.168.1.188], try #1 
INFO[0005] [state] Successfully started [cluster-state-deployer] container on host [192.168.1.188] 
INFO[0005] [certificates] Generating CA kubernetes certificates 
INFO[0005] [certificates] Generating Kubernetes API server aggregation layer requestheader client CA certificates 
INFO[0006] [certificates] GenerateServingCertificate is disabled, checking if there are unused kubelet certificates 
INFO[0006] [certificates] Generating Kubernetes API server certificates
INFO[0006] [certificates] Generating Service account token key 
INFO[0006] [certificates] Generating Kube Controller certificates
INFO[0006] [certificates] Generating Kube Scheduler certificates 
INFO[0006] [certificates] Generating Kube Proxy certificates
INFO[0006] [certificates] Generating Node certificate
INFO[0006] [certificates] Generating admin certificates and kubeconfig
INFO[0006] [certificates] Generating Kubernetes API server proxy client certificates 
INFO[0006] [certificates] Generating kube-etcd-192-168-1-188 certificate and key
INFO[0006] Successfully Deployed state file at [./cluster.rkestate] 
INFO[0006] Building Kubernetes cluster
INFO[0006] [dialer] Setup tunnel for host [192.168.1.188]
INFO[0007] [network] Deploying port listener containers 
INFO[0007] Image [rancher/rke-tools:v0.1.65] exists on host [192.168.1.188]
INFO[0007] Starting container [rke-etcd-port-listener] on host [192.168.1.188], try #1 
INFO[0007] [network] Successfully started [rke-etcd-port-listener] container on host [192.168.1.188] 
INFO[0007] Image [rancher/rke-tools:v0.1.65] exists on host [192.168.1.188]
INFO[0007] Starting container [rke-cp-port-listener] on host [192.168.1.188], try #1 
INFO[0008] [network] Successfully started [rke-cp-port-listener] container on host [192.168.1.188] 
INFO[0008] Image [rancher/rke-tools:v0.1.65] exists on host [192.168.1.188]
INFO[0008] Starting container [rke-worker-port-listener] on host [192.168.1.188], try #1 
INFO[0008] [network] Successfully started [rke-worker-port-listener] container on host [192.168.1.188] 
INFO[0008] [network] Port listener containers deployed successfully
INFO[0008] [network] Running control plane -> etcd port checks
INFO[0008] Image [rancher/rke-tools:v0.1.65] exists on host [192.168.1.188]
INFO[0008] Starting container [rke-port-checker] on host [192.168.1.188], try #1 
INFO[0009] [network] Successfully started [rke-port-checker] container on host [192.168.1.188] 
INFO[0009] Removing container [rke-port-checker] on host [192.168.1.188], try #1 
INFO[0009] [network] Running control plane -> worker port checks 
INFO[0009] Image [rancher/rke-tools:v0.1.65] exists on host [192.168.1.188]
INFO[0009] Starting container [rke-port-checker] on host [192.168.1.188], try #1 
INFO[0009] [network] Successfully started [rke-port-checker] container on host [192.168.1.188] 
INFO[0009] Removing container [rke-port-checker] on host [192.168.1.188], try #1 
INFO[0009] [network] Running workers -> control plane port checks 
INFO[0009] Image [rancher/rke-tools:v0.1.65] exists on host [192.168.1.188]
INFO[0009] Starting container [rke-port-checker] on host [192.168.1.188], try #1 
INFO[0009] [network] Successfully started [rke-port-checker] container on host [192.168.1.188] 
INFO[0009] Removing container [rke-port-checker] on host [192.168.1.188], try #1 
INFO[0009] [network] Checking KubeAPI port Control Plane hosts
INFO[0009] [network] Removing port listener containers  
INFO[0009] Removing container [rke-etcd-port-listener] on host [192.168.1.188], try #1
INFO[0010] [remove/rke-etcd-port-listener] Successfully removed container on host [192.168.1.188] 
INFO[0010] Removing container [rke-cp-port-listener] on host [192.168.1.188], try #1
INFO[0010] [remove/rke-cp-port-listener] Successfully removed container on host [192.168.1.188]
INFO[0010] Removing container [rke-worker-port-listener] on host [192.168.1.188], try #1
INFO[0010] [remove/rke-worker-port-listener] Successfully removed container on host [192.168.1.188]
INFO[0010] [network] Port listener containers removed successfully
INFO[0010] [certificates] Deploying kubernetes certificates to Cluster nodes
INFO[0010] Checking if container [cert-deployer] is running on host [192.168.1.188], try #1
INFO[0010] Image [rancher/rke-tools:v0.1.65] exists on host [192.168.1.188]
INFO[0010] Starting container [cert-deployer] on host [192.168.1.188], try #1 
INFO[0010] Checking if container [cert-deployer] is running on host [192.168.1.188], try #1 
INFO[0015] Checking if container [cert-deployer] is running on host [192.168.1.188], try #1 
INFO[0015] Removing container [cert-deployer] on host [192.168.1.188], try #1 
INFO[0015] [reconcile] Rebuilding and updating local kube config 
INFO[0015] Successfully Deployed local admin kubeconfig at [./kube_config_cluster.yml]
INFO[0015] [certificates] Successfully deployed kubernetes certificates to Cluster nodes
INFO[0015] [file-deploy] Deploying file [/etc/kubernetes/audit-policy.yaml] to node [192.168.1.188]
INFO[0015] Image [rancher/rke-tools:v0.1.65] exists on host [192.168.1.188]
INFO[0016] Starting container [file-deployer] on host [192.168.1.188], try #1
INFO[0016] Successfully started [file-deployer] container on host [192.168.1.188] 
INFO[0016] Waiting for [file-deployer] container to exit on host [192.168.1.188]
INFO[0016] Waiting for [file-deployer] container to exit on host [192.168.1.188]
INFO[0016] Container [file-deployer] is still running on host [192.168.1.188]: stderr: [], stdout: []
INFO[0017] Waiting for [file-deployer] container to exit on host [192.168.1.188] 
INFO[0017] Removing container [file-deployer] on host [192.168.1.188], try #1
INFO[0017] [remove/file-deployer] Successfully removed container on host [192.168.1.188] 
INFO[0017] [/etc/kubernetes/audit-policy.yaml] Successfully deployed audit policy file to Cluster control nodes
INFO[0017] [reconcile] Reconciling cluster state
INFO[0017] [reconcile] This is newly generated cluster
INFO[0017] Pre-pulling kubernetes images
INFO[0017] Pulling image [rancher/hyperkube:v1.18.9-rancher1] on host [192.168.1.188], try #1
INFO[0047] Image [rancher/hyperkube:v1.18.9-rancher1] exists on host [192.168.1.188] 
INFO[0047] Kubernetes images pulled successfully
INFO[0047] [etcd] Building up etcd plane..
INFO[0047] Image [rancher/rke-tools:v0.1.65] exists on host [192.168.1.188]
INFO[0047] Starting container [etcd-fix-perm] on host [192.168.1.188], try #1 
INFO[0047] Successfully started [etcd-fix-perm] container on host [192.168.1.188] 
INFO[0047] Waiting for [etcd-fix-perm] container to exit on host [192.168.1.188]
INFO[0047] Waiting for [etcd-fix-perm] container to exit on host [192.168.1.188]
INFO[0047] Container [etcd-fix-perm] is still running on host [192.168.1.188]: stderr: [], stdout: []
INFO[0048] Waiting for [etcd-fix-perm] container to exit on host [192.168.1.188] 
INFO[0048] Removing container [etcd-fix-perm] on host [192.168.1.188], try #1
INFO[0048] [remove/etcd-fix-perm] Successfully removed container on host [192.168.1.188] 
INFO[0048] Pulling image [rancher/coreos-etcd:v3.4.3-rancher1] on host [192.168.1.188], try #1
INFO[0051] Image [rancher/coreos-etcd:v3.4.3-rancher1] exists on host [192.168.1.188] 
INFO[0051] Starting container [etcd] on host [192.168.1.188], try #1 
INFO[0051] [etcd] Successfully started [etcd] container on host [192.168.1.188] 
INFO[0051] [etcd] Running rolling snapshot container [etcd-snapshot-once] on host [192.168.1.188]
INFO[0051] Image [rancher/rke-tools:v0.1.65] exists on host [192.168.1.188]
INFO[0051] Starting container [etcd-rolling-snapshots] on host [192.168.1.188], try #1 
INFO[0051] [etcd] Successfully started [etcd-rolling-snapshots] container on host [192.168.1.188] 
INFO[0056] Image [rancher/rke-tools:v0.1.65] exists on host [192.168.1.188] 
INFO[0057] Starting container [rke-bundle-cert] on host [192.168.1.188], try #1 
INFO[0057] [certificates] Successfully started [rke-bundle-cert] container on host [192.168.1.188] 
INFO[0057] Waiting for [rke-bundle-cert] container to exit on host [192.168.1.188]
INFO[0057] Container [rke-bundle-cert] is still running on host [192.168.1.188]: stderr: [], stdout: []
INFO[0058] Waiting for [rke-bundle-cert] container to exit on host [192.168.1.188] 
INFO[0058] [certificates] successfully saved certificate bundle [/opt/rke/etcd-snapshots//pki.bundle.tar.gz] on host [192.168.1.188]
INFO[0058] Removing container [rke-bundle-cert] on host [192.168.1.188], try #1
INFO[0058] Image [rancher/rke-tools:v0.1.65] exists on host [192.168.1.188] 
INFO[0058] Starting container [rke-log-linker] on host [192.168.1.188], try #1 
INFO[0059] [etcd] Successfully started [rke-log-linker] container on host [192.168.1.188] 
INFO[0059] Removing container [rke-log-linker] on host [192.168.1.188], try #1
INFO[0059] [remove/rke-log-linker] Successfully removed container on host [192.168.1.188] 
INFO[0059] [etcd] Successfully started etcd plane.. Checking etcd cluster health
INFO[0059] [controlplane] Building up Controller Plane.. 
INFO[0059] Checking if container [service-sidekick] is running on host [192.168.1.188], try #1
INFO[0059] Image [rancher/rke-tools:v0.1.65] exists on host [192.168.1.188]
INFO[0059] Image [rancher/hyperkube:v1.18.9-rancher1] exists on host [192.168.1.188] 
INFO[0059] Starting container [kube-apiserver] on host [192.168.1.188], try #1 
INFO[0059] [controlplane] Successfully started [kube-apiserver] container on host [192.168.1.188] 
INFO[0059] [healthcheck] Start Healthcheck on service [kube-apiserver] on host [192.168.1.188]
INFO[0067] [healthcheck] service [kube-apiserver] on host [192.168.1.188] is healthy 
INFO[0067] Image [rancher/rke-tools:v0.1.65] exists on host [192.168.1.188] 
INFO[0068] Starting container [rke-log-linker] on host [192.168.1.188], try #1 
INFO[0068] [controlplane] Successfully started [rke-log-linker] container on host [192.168.1.188] 
INFO[0068] Removing container [rke-log-linker] on host [192.168.1.188], try #1
INFO[0068] [remove/rke-log-linker] Successfully removed container on host [192.168.1.188] 
INFO[0068] Image [rancher/hyperkube:v1.18.9-rancher1] exists on host [192.168.1.188]
INFO[0068] Starting container [kube-controller-manager] on host [192.168.1.188], try #1 
INFO[0068] [controlplane] Successfully started [kube-controller-manager] container on host [192.168.1.188] 
INFO[0068] [healthcheck] Start Healthcheck on service [kube-controller-manager] on host [192.168.1.188]
INFO[0074] [healthcheck] service [kube-controller-manager] on host [192.168.1.188] is healthy 
INFO[0074] Image [rancher/rke-tools:v0.1.65] exists on host [192.168.1.188] 
INFO[0074] Starting container [rke-log-linker] on host [192.168.1.188], try #1 
INFO[0074] [controlplane] Successfully started [rke-log-linker] container on host [192.168.1.188] 
INFO[0074] Removing container [rke-log-linker] on host [192.168.1.188], try #1
INFO[0075] [remove/rke-log-linker] Successfully removed container on host [192.168.1.188] 
INFO[0075] Image [rancher/hyperkube:v1.18.9-rancher1] exists on host [192.168.1.188]
INFO[0075] Starting container [kube-scheduler] on host [192.168.1.188], try #1
INFO[0075] [controlplane] Successfully started [kube-scheduler] container on host [192.168.1.188] 
INFO[0075] [healthcheck] Start Healthcheck on service [kube-scheduler] on host [192.168.1.188]
INFO[0080] [healthcheck] service [kube-scheduler] on host [192.168.1.188] is healthy 
INFO[0080] Image [rancher/rke-tools:v0.1.65] exists on host [192.168.1.188]
INFO[0080] Starting container [rke-log-linker] on host [192.168.1.188], try #1 
INFO[0081] [controlplane] Successfully started [rke-log-linker] container on host [192.168.1.188] 
INFO[0081] Removing container [rke-log-linker] on host [192.168.1.188], try #1
INFO[0081] [remove/rke-log-linker] Successfully removed container on host [192.168.1.188] 
INFO[0081] [controlplane] Successfully started Controller Plane..
INFO[0081] [authz] Creating rke-job-deployer ServiceAccount
INFO[0081] [authz] rke-job-deployer ServiceAccount created successfully 
INFO[0081] [authz] Creating system:node ClusterRoleBinding
INFO[0081] [authz] system:node ClusterRoleBinding created successfully
INFO[0081] [authz] Creating kube-apiserver proxy ClusterRole and ClusterRoleBinding
INFO[0081] [authz] kube-apiserver proxy ClusterRole and ClusterRoleBinding created successfully
INFO[0081] Successfully Deployed state file at [./cluster.rkestate]
INFO[0081] [state] Saving full cluster state to Kubernetes
INFO[0081] [state] Successfully Saved full cluster state to Kubernetes ConfigMap: full-cluster-state 
INFO[0081] [worker] Building up Worker Plane..
INFO[0081] Checking if container [service-sidekick] is running on host [192.168.1.188], try #1
INFO[0081] [sidekick] Sidekick container already created on host [192.168.1.188]
INFO[0081] Image [rancher/hyperkube:v1.18.9-rancher1] exists on host [192.168.1.188] 
INFO[0081] Starting container [kubelet] on host [192.168.1.188], try #1 
INFO[0081] [worker] Successfully started [kubelet] container on host [192.168.1.188] 
INFO[0081] [healthcheck] Start Healthcheck on service [kubelet] on host [192.168.1.188]
INFO[0092] [healthcheck] service [kubelet] on host [192.168.1.188] is healthy 
INFO[0092] Image [rancher/rke-tools:v0.1.65] exists on host [192.168.1.188]
INFO[0092] Starting container [rke-log-linker] on host [192.168.1.188], try #1 
INFO[0093] [worker] Successfully started [rke-log-linker] container on host [192.168.1.188] 
INFO[0093] Removing container [rke-log-linker] on host [192.168.1.188], try #1
INFO[0093] [remove/rke-log-linker] Successfully removed container on host [192.168.1.188] 
INFO[0093] Image [rancher/hyperkube:v1.18.9-rancher1] exists on host [192.168.1.188]
INFO[0093] Starting container [kube-proxy] on host [192.168.1.188], try #1
INFO[0093] [worker] Successfully started [kube-proxy] container on host [192.168.1.188] 
INFO[0093] [healthcheck] Start Healthcheck on service [kube-proxy] on host [192.168.1.188]
INFO[0098] [healthcheck] service [kube-proxy] on host [192.168.1.188] is healthy 
INFO[0098] Image [rancher/rke-tools:v0.1.65] exists on host [192.168.1.188]
INFO[0099] Starting container [rke-log-linker] on host [192.168.1.188], try #1 
INFO[0099] [worker] Successfully started [rke-log-linker] container on host [192.168.1.188] 
INFO[0099] Removing container [rke-log-linker] on host [192.168.1.188], try #1
INFO[0099] [remove/rke-log-linker] Successfully removed container on host [192.168.1.188] 
INFO[0099] [worker] Successfully started Worker Plane..
INFO[0099] Image [rancher/rke-tools:v0.1.65] exists on host [192.168.1.188]
INFO[0099] Starting container [rke-log-cleaner] on host [192.168.1.188], try #1 
INFO[0099] [cleanup] Successfully started [rke-log-cleaner] container on host [192.168.1.188] 
INFO[0099] Removing container [rke-log-cleaner] on host [192.168.1.188], try #1
INFO[0100] [remove/rke-log-cleaner] Successfully removed container on host [192.168.1.188] 
INFO[0100] [sync] Syncing nodes Labels and Taints
INFO[0100] [sync] Successfully synced nodes Labels and Taints
INFO[0100] [network] Setting up network plugin: canal   
INFO[0100] [addons] Saving ConfigMap for addon rke-network-plugin to Kubernetes
INFO[0100] [addons] Successfully saved ConfigMap for addon rke-network-plugin to Kubernetes
INFO[0100] [addons] Executing deploy job rke-network-plugin
INFO[0115] [addons] Setting up coredns
INFO[0115] [addons] Saving ConfigMap for addon rke-coredns-addon to Kubernetes
INFO[0115] [addons] Successfully saved ConfigMap for addon rke-coredns-addon to Kubernetes
INFO[0115] [addons] Executing deploy job rke-coredns-addon
INFO[0120] [addons] CoreDNS deployed successfully       
INFO[0120] [dns] DNS provider coredns deployed successfully
INFO[0120] [addons] Setting up Metrics Server
INFO[0120] [addons] Saving ConfigMap for addon rke-metrics-addon to Kubernetes 
INFO[0120] [addons] Successfully saved ConfigMap for addon rke-metrics-addon to Kubernetes
INFO[0120] [addons] Executing deploy job rke-metrics-addon
INFO[0130] [addons] Metrics Server deployed successfully 
INFO[0130] [ingress] Setting up nginx ingress controller
INFO[0130] [addons] Saving ConfigMap for addon rke-ingress-controller to Kubernetes
INFO[0130] [addons] Successfully saved ConfigMap for addon rke-ingress-controller to Kubernetes
INFO[0130] [addons] Executing deploy job rke-ingress-controller
INFO[0140] [ingress] ingress controller nginx deployed successfully 
INFO[0140] [addons] Setting up user addons
INFO[0140] [addons] no user addons defined
INFO[0140] Finished building Kubernetes cluster successfully

Connecting to Kubernetes cluster

  1. Download latest kubectl
curl -LO "https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl"
  2. Assign executable permissions
chmod +x ./kubectl
  3. Move file to default executable location
sudo mv ./kubectl /usr/local/bin/kubectl
  4. Check kubectl version
kubectl version --client
  5. Copy the RKE-generated kube config file (kube_config_cluster.yml) to $HOME/.kube/config
mkdir -p $HOME/.kube
cp kube_config_cluster.yml $HOME/.kube/config
  6. Connect to Kubernetes cluster and get pods
kubectl get pods -A

HELM Installation

curl https://baltocdn.com/helm/signing.asc | sudo apt-key add -
sudo apt-get install apt-transport-https --yes
echo "deb https://baltocdn.com/helm/stable/debian/ all main" | sudo tee /etc/apt/sources.list.d/helm-stable-debian.list
sudo apt-get update
sudo apt-get install helm
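To confirm the Helm client installed correctly, check its version:

helm version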

Setup Rancher in Kubernetes cluster

helm repo add rancher-latest https://releases.rancher.com/server-charts/latest
kubectl create namespace cattle-system
kubectl apply --validate=false -f https://github.com/jetstack/cert-manager/releases/download/v0.15.0/cert-manager.crds.yaml
kubectl create namespace cert-manager
helm repo add jetstack https://charts.jetstack.io
helm repo update
helm install \
  cert-manager jetstack/cert-manager \
  --namespace cert-manager \
  --version v0.15.0

Install Rancher

Define the DNS hostname for the Rancher certificate request. You can replace rancher.my.org with your own DNS alias.

helm install rancher rancher-latest/rancher \
  --namespace cattle-system \
  --set hostname=rancher.my.org
kubectl -n cattle-system rollout status deploy/rancher

NOTE: If you are working in a lab environment and don't have DNS, make sure to add rancher.my.org to the hosts file of your system.
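For example, a hosts-file entry for a lab setup might look like this, assuming the node IP used throughout this tutorial:

192.168.1.188 rancher.my.org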

Instructions for completing setup in the Rancher UI can be followed from the official Rancher documentation.

How to Deploy MySQL database on Digital Ocean Managed Kubernetes Cluster

NOTE: This tutorial assumes you know how to connect to a Kubernetes cluster.

Create secrets to securely store MySQL credentials

  1. Guide on how to create base64-encoded values:

    Windows 10 guide, Linux guide (see also the one-liner sketch after this list)

  2. Create a new file called: mysql-secret.yaml and paste the value below.
    NOTE: You must first capture the value in base 64 by following the guide in step 1.
---
apiVersion: v1
kind: Secret
metadata:
  name: mysqldb-secrets
type: Opaque
data:
  ROOT_PASSWORD: c3VwZXItc2VjcmV0LXBhc3N3b3JkLWZvci1zcWw= 
  3. Execute the below command to create secrets:
kubectl apply -f mysql-secret.yaml

Output:

secret/mysqldb-secrets created
  4. To see if the secret is created, execute the below command:
kubectl get secret
NAME                  TYPE                                  DATA   AGE
default-token-jqq69   kubernetes.io/service-account-token   3      6h20m
echo-tls              kubernetes.io/tls                     2      5h19m
mysqldb-secrets       Opaque                                1      42s
  5. To see the description of the secret, execute the below command:
kubectl describe secret mysqldb-secrets
Name:         mysqldb-secrets
Namespace:    default
Labels:       <none>
Annotations:  <none>

Type:  Opaque

Data
====
ROOT_PASSWORD:  29 bytes
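If you just need a quick way to produce the base64 value, a one-liner works in most Linux shells. A sketch using the sample password from this tutorial; the -n flag keeps a trailing newline out of the encoded value:

echo -n 'super-secret-password-for-sql' | base64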

Persistent volume and MySQL deployment

  1. Create a persistent volume YAML file called: mysql-pvc.yaml and paste the following values:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pvc
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/mysql-data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pvc-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: do-block-storage
NOTE: The PersistentVolumeClaim above requests the do-block-storage class, which DigitalOcean provisions dynamically, so the static hostPath PersistentVolume (class manual) will remain unbound.
  2. Create a new deployment YAML file called: mysql-deployment.yaml and paste the following values:
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  ports:
  - port: 3306
  selector:
    app: mysql
  clusterIP: None
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - image: mysql:5.6
        name: mysql
        env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysqldb-secrets
              key: ROOT_PASSWORD
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-pvc-claim

Execute the below command to create persistent volume:

kubectl apply -f mysql-pvc.yaml

Output:

persistentvolume/mysql-pvc created
persistentvolumeclaim/mysql-pvc-claim created

Execute the below command to deploy MySQL pod:

kubectl apply -f mysql-deployment.yaml

Output:

service/mysql created
deployment.apps/mysql created

Exposing MySQL as a Service

  1. Create a file called mysql-service.yaml and paste the following values:
apiVersion: v1
kind: Service
metadata:
  name: mysql-service
spec:
  selector:
    app: mysql
  ports:
  - protocol: TCP
    port: 3306
    targetPort: 3306
  2. Execute the below command to create a service for MySQL:
kubectl apply -f mysql-service.yaml

Output:

service/mysql-service created
  3. To confirm that the service was created successfully, execute the below command:
kubectl get svc

Output:

NAME            TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
echo1           ClusterIP   10.245.179.199   <none>        80/TCP     6h4m
echo2           ClusterIP   10.245.58.44     <none>        80/TCP     6h2m
kubernetes      ClusterIP   10.245.0.1       <none>        443/TCP    6h33m
mysql           ClusterIP   None             <none>        3306/TCP   4m57s
mysql-service   ClusterIP   10.245.159.76    <none>        3306/TCP   36s
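To verify that the database is reachable through the service, one option is to run a throwaway MySQL client pod against it (a sketch; it prompts for the root password stored in the secret earlier):

kubectl run mysql-client --rm -it --restart=Never --image=mysql:5.6 -- mysql -h mysql-service -uroot -p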

Setup Windows Node with Kubernetes 1.14

Kubernetes 1.14 now provides out-of-the-box support for Windows worker nodes, so Windows containers can run within a Kubernetes cluster. This feature was in preview for a long time and is now production-ready. This is a wonderful opportunity for the major cloud providers to adopt Kubernetes 1.14 in their offerings, so their customers can start migrating applications from Windows virtualization platforms to Windows containers sooner.

NOTE: Azure, Google, IBM, and AWS now offer managed Kubernetes services with free cluster management, so you do not have to worry about it any longer. Windows containers are yet to be offered and should be available soon; check out their official websites for more information on when they will offer Windows containers.

In this tutorial, I will go over how to set up a Windows node and join it to an existing Kubernetes 1.14 cluster. If you do not have a Kubernetes cluster and would like to learn how to set one up, check out the prerequisites below.

Prerequisites

NOTE: Before we move forward, ensure you have successfully set up a Kubernetes 1.14 cluster on your Linux machine. If not, check out the prerequisites.

Enable mixed OS scheduling

The below guide was referenced from Microsoft documentation.

Login to your master node and execute the below commands.

cd ~ && mkdir -p kube/yaml && cd kube/yaml

Confirm kube-proxy DaemonSet is set to Rolling Update:

kubectl get ds/kube-proxy -o go-template='{{.spec.updateStrategy.type}}{{"\n"}}' --namespace=kube-system
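If the DaemonSet is configured as expected, the command should print just the strategy type:

RollingUpdate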

Download node-selector-patch from GitHub:

wget https://raw.githubusercontent.com/Microsoft/SDN/master/Kubernetes/flannel/l2bridge/manifests/node-selector-patch.yml

Patch your kube-proxy

kubectl patch ds/kube-proxy --patch "$(cat node-selector-patch.yml)" -n=kube-system

Check the status of kube-proxy

kubectl get ds -n kube-system
[rahil@k8s-master-node yaml]$ kubectl get ds -n kube-system
NAME                      DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR                                               AGE
kube-flannel-ds-amd64     2         2         2       0            2           beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux   106m
kube-flannel-ds-arm       0         0         0       0            0           beta.kubernetes.io/arch=arm                                 106m
kube-flannel-ds-arm64     0         0         0       0            0           beta.kubernetes.io/arch=arm64                               106m
kube-flannel-ds-ppc64le   0         0         0       0            0           beta.kubernetes.io/arch=ppc64le                             106m
kube-flannel-ds-s390x     0         0         0       0            0           beta.kubernetes.io/arch=s390x                               106m
kube-proxy                2         2         2       2            2           beta.kubernetes.io/os=linux                                 21h

The kube-proxy DaemonSet's NODE SELECTOR column should now show that beta.kubernetes.io/os=linux has been applied.

Setting up flannel networking

The below guide was referenced from Microsoft documentation.

Since I already have kube-flannel set up from the previous tutorial, I will edit it by following the guide below and update the values accordingly.

On your master node, edit kube-flannel and apply changes that are needed to configure windows worker node.

kubectl edit cm -n kube-system kube-flannel-cfg

If you already know how to use the vi editor, you should be able to navigate within edit mode. Find the below block of code and update it as shown:

cni-conf.json: |
    {
      "name": "vxlan0",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }

And, your net-conf.json should look like this shown below:

net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan",
        "VNI" : 4096,
        "Port": 4789
      }
    }

Once you have updated your kube-flannel configmap, go ahead and save it to apply those changes.

Target kube-flannel at Linux nodes only by executing the below command:

kubectl patch ds/kube-flannel-ds-amd64 --patch "$(cat node-selector-patch.yml)" -n=kube-system

Install Docker on your Windows node

Install-Module -Name DockerMsftProvider -Repository PSGallery -Force
Install-Package -Name Docker -ProviderName DockerMsftProvider
Restart-Computer -Force
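After the reboot, you can confirm Docker is available by checking its version from PowerShell:

docker version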

Download and stage Kubernetes packages

Step 1: Open PowerShell as an administrator and execute the below command to create a directory called k.

mkdir c:\k; cd c:\k

Step 2: From the Kubernetes 1.14.0 release page on GitHub, download kubernetes-node-windows-amd64.tar.gz.

Step 3: Extract the package to c:\k path on your Windows node.

NOTE: You may have to use a third-party tool to extract tar and gz files. I recommend the portable version of 7-Zip so that you don't have to install anything.

Find kubeadm, kubectl, kubelet, and kube-proxy in the extracted package and copy them to c:\k\ on the Windows node.

Copy Kubernetes certificate file from master node

Copy the kube config file from ~/.kube/config in your home directory on the master node to c:\k\config on the Windows node.

You can use xcopy or WinSCP to copy the config file from the master node to the Windows node.

Add paths to environment variables

Open PowerShell as an administrator and execute the following commands:

$env:Path += ";C:\k"; $env:KUBECONFIG="C:\k\config"; [Environment]::SetEnvironmentVariable("Path", $env:Path + ";C:\k", [EnvironmentVariableTarget]::Machine); [Environment]::SetEnvironmentVariable("KUBECONFIG", "C:\k\config", [EnvironmentVariableTarget]::User)

Reboot your system before moving forward.

Joining Windows Server node to Master node

To join the Flannel network, download the start script as shown below.

Step 1: Open PowerShell as Administrator:

[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12
wget https://raw.githubusercontent.com/Microsoft/SDN/master/Kubernetes/flannel/start.ps1 -OutFile c:\k\start.ps1

Step 2: Navigate to c:\k\

cd c:\k\

Step 3: Execute the below command to join Flannel cluster

.\start.ps1 -ManagementIP 192.168.0.123 -NetworkMode overlay -InterfaceName Ethernet -Verbose

Replace ManagementIP with your Windows node IP address. You can execute ipconfig to get these details. To understand the above command, please refer to Microsoft's documentation.

PS C:\k> .\kubectl.exe get nodes
NAME                STATUS     ROLES    AGE   VERSION
k8s-master-node     Ready      master   35h   v1.14.0
k8s-worker-node-1   Ready      <none>   35h   v1.14.0
win-uq3cdgb5r7g     Ready      <none>   11m   v1.14.0

Testing windows containers

If everything went well and your Windows node joined the cluster successfully, you can deploy a Windows container to test that everything is working as expected. Execute the below commands to deploy a Windows container.

Download YAML file:

wget https://raw.githubusercontent.com/Microsoft/SDN/master/Kubernetes/flannel/l2bridge/manifests/simpleweb.yml -O win-webserver.yaml

Create new deployment:

kubectl apply -f .\win-webserver.yaml

Check the status of container:

kubectl get pods -o wide -w

Output

PS C:\k> .\kubectl.exe get pods -o wide -w
NAME                            READY   STATUS              RESTARTS   AGE   IP       NODE              NOMINATED NODE   READINESS GATES
win-webserver-cfcdfb59b-fkqxg   0/1     ContainerCreating   0          40s   <none>   win-uq3cdgb5r7g   <none>           <none>
win-webserver-cfcdfb59b-jbm7s   0/1     ContainerCreating   0          40s   <none>   win-uq3cdgb5r7g   <none>           <none>

Troubleshooting

If you receive an error like the one below, it means your kubeletwin/pause image wasn't built correctly. After spending several hours, I dug through everything the start.ps1 script does and found that when the Docker image was built, it didn't use the correct version of the container base image.

Issue

Error response from daemon: CreateComputeSystem 229d5b8cf2ca94c698153f3ffed826f4ff69bff98d12137529333a1f947423e2: The container operating system does not match the host operating system.
(extra info: {"SystemType":"Container","Name":"229d5b8cf2ca94c698153f3ffed826f4ff69bff98d12137529333a1f947423e2","Owner":"docker","VolumePath":"\\\\?\\Volume{d03ade10-14ef-4486-aa63-406f2a7e5048}","IgnoreFlushesDuringBoot":true,"LayerFolderPath":"C:\\ProgramData\\docker\\windowsfilter\\229d5b8cf2ca94c698153f3ffed826f4ff69bff98d12137529333a1f947423e2","Layers":[{"ID":"7cf9a822-5cb5-5380-98c3-99885c3639f8","Path":"C:\\ProgramData\\docker\\windowsfilter\\83e740543f7683c25c7880388dbe2885f32250e927ab0f2119efae9f68da5178"},{"ID":"600d6d6b-8810-5bf3-ad01-06d0ba1f97a4","Path":"C:\\ProgramData\\docker\\windowsfilter\\529e04c75d56f948819cd62e4886d865d8faac7470be295e7116ddf47ca15251"},{"ID":"f185c0c0-eccf-5ff9-b9fb-2939562b75c3","Path":"C:\\ProgramData\\docker\\windowsfilter\\7640b81c6fff930a838e97c6c793b4fa9360b6505718aa84573999aa41223e80"}],"HostName":"229d5b8cf2ca","HvPartition":false,"EndpointList":["CE799786-A781-41ED-8B1F-C91DFEDB75A9"],"AllowUnqualifiedDNSQuery":true}).

Solution

  1. Go to c:\k\ and open Dockerfile.
  2. Update the first line to FROM mcr.microsoft.com/windows/nanoserver:1809 and save the file.
  3. Execute the below command as administrator from a PowerShell console to rebuild the image.
cd c:\k\; docker build -t kubeletwin/pause .
  4. Open win-webserver.yaml and update the image tag to image: mcr.microsoft.com/windows/servercore:1809.
  5. Delete and re-apply your deployment by executing the below commands.
kubectl delete -f .\win-webserver.yaml
kubectl apply -f .\win-webserver.yaml

Now all your pods should show in the Running state.

PS C:\k> kubectl get pods
NAME                            READY   STATUS    RESTARTS   AGE
win-webserver-cfcdfb59b-gk6g9   1/1     Running   0          6m44s
win-webserver-cfcdfb59b-q4zxz   1/1     Running   0          6m44s

Getting started with Kubernetes on Windows 10 with Hyper-V

If you are interested in learning how Kubernetes works, you came to the right place! In this tutorial, I will show you how to quickly set up a local Kubernetes environment using Minikube to run hello-world app in a container.

Pre-requisites

Let’s ensure we have prerequisites installed before we get started with Minikube installation. If you already have the below items installed, you can skip them and proceed with Minikube setup.

Installing Minikube

Step 1: Open the PowerShell console as an administrator.

Step 2: Install Minikube by executing the below command:

choco install minikube

NOTE: It may take a while to install Minikube depending on your network speed.

Verify that the installation succeeded by running minikube version from the console.

Setting up Hyper-V environment

Step 1: Open Hyper-V manager.

Step 2: Open Virtual Switch Manager from the Action panel on the right-hand side.

Step 3: Select New virtual network switch -> select External -> click on Create Virtual Switch.

Step 4: Name your virtual switch Minikube Virtual Switch, select the LAN adapter that provides internet access, and click Apply and OK.

NOTE: If you already have a virtual switch created, you may not need to set up a new one. This instruction is for newly installed Hyper-V environments.

Starting Minikube

Open PowerShell as an administrator. If you already have it open, you can skip this step.

Execute the below command to start Minikube:

minikube start --vm-driver hyperv --hyperv-virtual-switch "Minikube Virtual Switch"

NOTE: If your virtual switch name is different from Minikube Virtual Switch, you can replace that in the above command.

Once Minikube has started, you will see the below output, and you are ready to execute Kubernetes commands and deploy a container.

Setup output:

o   minikube v0.35.0 on windows (amd64)
>   Creating hyperv VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
@   Downloading Minikube ISO ...
 184.42 MB / 184.42 MB [============================================] 100.00% 0s
-   "minikube" IP address is 192.168.0.106
-   Configuring Docker as the container runtime ...
-   Preparing Kubernetes environment ...
@   Downloading kubeadm v1.13.4
@   Downloading kubelet v1.13.4
:   Waiting for pods: apiserver proxy etcd scheduler controller addon-manager dns
-   Configuring cluster permissions ...
-   Verifying component health .....
+   kubectl is now configured to use "minikube"
=   Done! Thank you for using minikube!

To check if you have a successful connection to Kubernetes cluster, execute the below command from the PowerShell console.

kubectl get pods -n kube-system

Output:

NAME                               READY   STATUS    RESTARTS   AGE
coredns-86c58d9df4-87cj9           1/1     Running   0          3m5s
coredns-86c58d9df4-bg64j           1/1     Running   0          3m5s
etcd-minikube                      1/1     Running   0          2m9s
kube-addon-manager-minikube        1/1     Running   0          2m16s
kube-apiserver-minikube            1/1     Running   0          2m20s
kube-controller-manager-minikube   1/1     Running   0          2m14s
kube-proxy-wnxk4                   1/1     Running   0          3m5s
kube-scheduler-minikube            1/1     Running   0          2m23s
storage-provisioner                1/1     Running   0          2m4s

Deploying your first container

To deploy your first container in your Minikube cluster, execute deploy command.

kubectl create deployment hello-world --image=kitematic/hello-world-nginx

The above command will pull a new image from Docker hub from a public registry and run a container in your Minikube Kubernetes cluster.

To check the status of a container, execute the below command.

kubectl get pods

Output:

PS C:\Users\Ray> kubectl get pods
NAME                           READY   STATUS    RESTARTS   AGE
hello-world-7cbc87684d-pmwxl   1/1     Running   0          4m50s

Accessing your hello-world app

In order to access your hello-world application outside of Kubernetes cluster, you need to expose it as a service.

kubectl expose deployment hello-world --port=80 --type=NodePort

Once the service has been created, you will need to get a port number of your exposed service to be able to browse the application. Execute the below command.

kubectl.exe get services

output:

PS C:\Users\Ray> kubectl.exe get services
NAME          TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
hello-world   NodePort    10.96.79.70   <none>        80:31548/TCP   8m9s
kubernetes    ClusterIP   10.96.0.1     <none>        443/TCP        21h

When you expose a service, Kubernetes will automatically assign a random port number. In my case, it is 31548 for the hello-world application. I will append this to the Minikube node IP address to access the application.

To get Minikube IP address, execute the below command:

minikube ip

In my case, I received 192.168.0.112. Combined with the NodePort from the previous step, the application is reachable at http://192.168.0.112:31548.

Kubernetes dashboard

Kubernetes provides a free dashboard to also manage your cluster via UI. To access Kubernetes dashboard, you will need to execute the below Minikube command.

minikube dashboard 

You will see below output upon initialization.

PS C:\Users\Ray\Downloads> minikube.exe dashboard
-   Enabling dashboard ...
-   Verifying dashboard health ...
-   Launching proxy ...
-   Verifying proxy health ...
-   Opening http://127.0.0.1:53782/api/v1/namespaces/kube-system/services/http:kubernetes-dashboard:/proxy/ in your default browser...

Once the initialization is completed, the dashboard should open in your default browser. If it didn't open, you can browse to the proxy URL printed in the console output.

You are all set! You can now navigate the dashboard and get familiar with pods, services, and many more.

Thank you for following this tutorial! Hope this helped you get started with Kubernetes.

Working With Kubernetes Services

Kubernetes Services allow pods to be reached from within the cluster and, depending on the service type, from outside the cluster. There are three types of services, as follows:

ClusterIP
ClusterIP is the default internal cluster IP address assigned when a service is created. This cluster IP is only accessible within the cluster and cannot be reached from an outside network. When you run kubectl get services, you will see CLUSTER-IP in the header; that is the default IP address created for every service, regardless of its service type.

NodePort
NodePort exposes a port on all worker nodes in the Kubernetes cluster. You can define the port manually or let the Kubernetes cluster assign one for you dynamically. NodePorts are exposed on each worker node and must be unique for each service.

Load balancer
A load balancer is another layer above NodePort. When creating a load balancer service, it first creates a NodePort and an external load balancer virtual IP. Load balancers are specific to cloud providers and can be implemented on Azure, Google Cloud, AWS, OpenStack, and OpenShift.
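As a concrete illustration, a minimal NodePort service manifest might look like the sketch below; the name, selector, and port numbers are placeholder assumptions (nodePort must fall in the cluster's NodePort range, 30000-32767 by default):

apiVersion: v1
kind: Service
metadata:
  name: my-nodeport-service
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080
    nodePort: 30080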

Managing Kubernetes Services

List Services

The below command will list all the services in the default namespace.

kubectl get services

Output:

[rahil@k8s-master-node ~]$ kubectl get services
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   6h

The below command will list all the services from all namespaces.

kubectl get services --all-namespaces

Output:

[rahil@k8s-master-node ~]$ kubectl get services --all-namespaces
NAMESPACE     NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE
default       kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP         6h
kube-system   kube-dns     ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP   6h

Kubernetes Command line

To manage a Kubernetes cluster, you need to know some basic commands. Below are the basic command lines for managing pods, nodes, services, and more.

Pods

kubectl get pods is a kubectl command used in Kubernetes, a popular container orchestration platform, to retrieve information about pods that are running within a cluster. Pods are the smallest deployable units in Kubernetes and can contain one or more containers.

When you run the kubectl get pods command, it queries the Kubernetes API server for information about the pods within the current context or specified namespace. The output of this command will typically display a table containing various columns that provide information about each pod. The columns might include:

  1. NAME: The name of the pod.
  2. READY: This column shows the number of containers in the pod that are ready out of the total number of containers.
  3. STATUS: The current status of the pod, which can be “Running,” “Pending,” “Succeeded,” “Failed,” or “Unknown.”
  4. RESTARTS: The number of times the containers in the pod have been restarted.
  5. AGE: The amount of time that the pod has been running since its creation.
  6. IP: The IP address assigned to the pod within the cluster network.
  7. NODE: The name of the node where the pod is scheduled to run.

Here’s an example output of the kubectl get pods command:

kubectl get pods

Output:

[rahil@k8s-master-node ~]$ kubectl get pods
NAME                     READY     STATUS    RESTARTS   AGE
nginx-7587c6fdb6-dz7tj   1/1       Running   0          29m

The below command will watch for events and print status updates to the console.

kubectl get pods -w

Output:

[rahil@k8s-master-node ~]$ kubectl get pods -w
NAME                     READY     STATUS              RESTARTS   AGE
nginx-7587c6fdb6-dz7tj   0/1       ContainerCreating   0          20s
nginx-7587c6fdb6-dz7tj   1/1       Running   0         58s

If you'd like to see more details on an individual pod, you can use kubectl describe.

kubectl describe pods <pod_name>

Output:

[rahil@k8s-master-node ~]$ kubectl describe pods nginx
Name:           nginx-7587c6fdb6-dz7tj
Namespace:      default
Node:           k8s-worker-node-1/192.168.0.226
Start Time:     Sun, 25 Feb 2018 17:06:46 -0500
Labels:         pod-template-hash=3143729862
                run=nginx
Annotations:    <none>
Status:         Running
IP:             10.244.1.2
Controlled By:  ReplicaSet/nginx-7587c6fdb6
Containers:
  nginx:
    Container ID:   docker://22a5e181b9c7b56351ccdc9ed41c1f6dfd776e56d45e464399d9a92479657a18
    Image:          nginx
    Image ID:       docker-pullable://docker.io/nginx@sha256:4771d09578c7c6a65299e110b3ee1c0a2592f5ea2618d23e4ffe7a4cab1ce5de
    Port:           80/TCP
    State:          Running
      Started:      Sun, 25 Feb 2018 17:07:42 -0500
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-jpflm (ro)
Conditions:
  Type           Status
  Initialized    True
  Ready          True
  PodScheduled   True
Volumes:
  default-token-jpflm:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-jpflm
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason                 Age   From                        Message
  ----    ------                 ----  ----                        -------
  Normal  Scheduled              6m    default-scheduler           Successfully assigned nginx-7587c6fdb6-dz7tj to k8s-worker-node-1
  Normal  SuccessfulMountVolume  6m    kubelet, k8s-worker-node-1  MountVolume.SetUp succeeded for volume "default-token-jpflm"
  Normal  Pulling                6m    kubelet, k8s-worker-node-1  pulling image "nginx"
  Normal  Pulled                 6m    kubelet, k8s-worker-node-1  Successfully pulled image "nginx"
  Normal  Created                6m    kubelet, k8s-worker-node-1  Created container
  Normal  Started                5m    kubelet, k8s-worker-node-1  Started container

Nodes

The below command will get all the nodes.

kubectl get nodes

Output:

[rahil@k8s-master-node ~]$ kubectl get nodes
NAME                STATUS    ROLES     AGE       VERSION
k8s-master-node     Ready     master    3h        v1.9.3
k8s-worker-node-1   Ready     <none>    2h        v1.9.3

The below command will provide you with additional information about the nodes in a wide table view.

kubectl get nodes -o wide

Output:

[rahil@k8s-master-node ~]$ kubectl get nodes -o wide
NAME                STATUS    ROLES     AGE       VERSION   EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION               CONTAINER-RUNTIME
k8s-master-node     Ready     master    3h        v1.9.3    <none>        CentOS Linux 7 (Core)   3.10.0-693.17.1.el7.x86_64   docker://1.12.6
k8s-worker-node-1   Ready     <none>    2h        v1.9.3    <none>        CentOS Linux 7 (Core)   3.10.0-693.17.1.el7.x86_64   docker://1.12.6
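
Notice that the worker node's ROLES column shows <none>: kubeadm only labels the master by default. If you would like the worker to report a role, you can add the conventional role label yourself (this is cosmetic only; the node name below is from this example cluster):

kubectl label node k8s-worker-node-1 node-role.kubernetes.io/worker=

After this, kubectl get nodes should display worker in the ROLES column for that node.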

The below command will provide you with details about a specific node.

kubectl describe nodes <node_name>

Output:

[rahil@k8s-master-node ~]$ kubectl describe nodes k8s-worker-node-1
Name:               k8s-worker-node-1
Roles:              <none>
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/hostname=k8s-worker-node-1
Annotations:        flannel.alpha.coreos.com/backend-data={"VtepMAC":"d2:bd:02:7a:cb:65"}
                    flannel.alpha.coreos.com/backend-type=vxlan
                    flannel.alpha.coreos.com/kube-subnet-manager=true
                    flannel.alpha.coreos.com/public-ip=192.168.0.226
                    node.alpha.kubernetes.io/ttl=0
                    volumes.kubernetes.io/controller-managed-attach-detach=true
Taints:             <none>
CreationTimestamp:  Sun, 25 Feb 2018 14:42:46 -0500
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  OutOfDisk        False   Sun, 25 Feb 2018 17:22:32 -0500   Sun, 25 Feb 2018 14:42:46 -0500   KubeletHasSufficientDisk     kubelet has sufficient disk space available
  MemoryPressure   False   Sun, 25 Feb 2018 17:22:32 -0500   Sun, 25 Feb 2018 14:42:46 -0500   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Sun, 25 Feb 2018 17:22:32 -0500   Sun, 25 Feb 2018 14:42:46 -0500   KubeletHasNoDiskPressure     kubelet has no disk pressure
  Ready            True    Sun, 25 Feb 2018 17:22:32 -0500   Sun, 25 Feb 2018 14:44:06 -0500   KubeletReady                 kubelet is posting ready status
Addresses:
  InternalIP:  192.168.0.226
  Hostname:    k8s-worker-node-1
Capacity:
 cpu:     2
 memory:  916556Ki
 pods:    110
Allocatable:
 cpu:     2
 memory:  814156Ki
 pods:    110
System Info:
 Machine ID:                 46a73b56be064f2fb8d81ac54f2e0349
 System UUID:                8601D8C9-C1D9-2C4B-B6C9-2A3D2A598E4A
 Boot ID:                    963f0791-c81c-4bb1-aecd-6bd079abea67
 Kernel Version:             3.10.0-693.17.1.el7.x86_64
 OS Image:                   CentOS Linux 7 (Core)
 Operating System:           linux
 Architecture:               amd64
 Container Runtime Version:  docker://1.12.6
 Kubelet Version:            v1.9.3
 Kube-Proxy Version:         v1.9.3
PodCIDR:                     10.244.1.0/24
ExternalID:                  k8s-worker-node-1
Non-terminated Pods:         (3 in total)
  Namespace                  Name                      CPU Requests  CPU Limits  Memory Requests  Memory Limits
  ---------                  ----                      ------------  ----------  ---------------  -------------
  default                    nginx-7587c6fdb6-dz7tj    0 (0%)        0 (0%)      0 (0%)           0 (0%)
  kube-system                kube-flannel-ds-wtdz4     100m (5%)     100m (5%)   50Mi (6%)        50Mi (6%)
  kube-system                kube-proxy-m2hfm          0 (0%)        0 (0%)      0 (0%)           0 (0%)
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  CPU Requests  CPU Limits  Memory Requests  Memory Limits
  ------------  ----------  ---------------  -------------
  100m (5%)     100m (5%)   50Mi (6%)        50Mi (6%)
Events:         <none>

Services

The below command will give you all the available services in the default namespace.

kubectl get services

Output:

[rahil@k8s-master-node ~]$ kubectl get services
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   3h

The below command will give you all the services in all namespaces.

kubectl get services --all-namespaces

Output:

[rahil@k8s-master-node ~]$ kubectl get services --all-namespaces
NAMESPACE     NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE
default       kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP         3h
kube-system   kube-dns     ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP   3h
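
As a side note, the nginx deployment created earlier in this post has no service yet, so it is only reachable by pod IP. A quick sketch of exposing it inside the cluster (this creates a ClusterIP service named nginx on port 80):

kubectl expose deployment nginx --port=80

Running kubectl get services again should now list the nginx service alongside kubernetes.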

How to get status of all nodes in Kubernetes
https://blog.tekspace.io/how-to-get-status-of-all-nodes-in-kubernetes/
Mon, 26 Feb 2018 01:00:22 +0000

In this tutorial, I will show you how to get the status of all the nodes in a Kubernetes cluster.

To get the status of all nodes, execute the below command:

kubectl get nodes

Output:

[rahil@k8s-master-node ~]$ kubectl get nodes
NAME                STATUS    ROLES     AGE       VERSION
k8s-master-node     Ready     master    41m       v1.9.3
k8s-worker-node-1   Ready     <none>    16m       v1.9.3

To get more information about the nodes, execute the below command:

kubectl get nodes -o wide

Output:

[rahil@k8s-master-node ~]$ kubectl get nodes -o wide
NAME                STATUS    ROLES     AGE       VERSION   EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION               CONTAINER-RUNTIME
k8s-master-node     Ready     master    41m       v1.9.3    <none>        CentOS Linux 7 (Core)   3.10.0-693.17.1.el7.x86_64   docker://1.12.6
k8s-worker-node-1   Ready     <none>    16m       v1.9.3    <none>        CentOS Linux 7 (Core)   3.10.0-693.17.1.el7.x86_64   docker://1.12.6
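
To keep watching node status as it changes, for example while a new worker joins, append the watch flag:

kubectl get nodes -w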

How to check status of pods in kubernetes
https://blog.tekspace.io/how-to-check-status-of-pods-in-kubernetes/
Mon, 26 Feb 2018 00:55:43 +0000

To check the status of pods in a Kubernetes cluster, connect to the master node and execute the below commands.

The below command will give the status of all pods in the default namespace.

kubectl get pods

If you would like to check the status of pods in all namespaces, including the kube-system namespace, execute the below command.

kubectl get pods --all-namespaces

If you would like to check the status of pods along with the node each pod is running on, execute one of the below commands:

kubectl get pods -o wide

or

kubectl get pods --all-namespaces -o wide
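
If you only care about a single namespace, you can target it directly with the -n flag instead of listing everything (kube-system is shown here as an example):

kubectl get pods -n kube-system -o wide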

Setup Kubernetes Cluster on CentOS 7
https://blog.tekspace.io/setup-kubernetes-cluster-on-centos-7/
Sun, 25 Feb 2018 09:20:47 +0000

In this tutorial, we will use kubeadm to configure a Kubernetes cluster on CentOS 7.4.

IMPORTANT NOTE: Ensure swap is disabled on both master and worker nodes. Kubernetes requires swap to be disabled for kubeadm to successfully configure the cluster; a quick way to do this is shown below.
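
For reference, a common way to disable swap on CentOS 7 follows. This is a sketch that assumes the swap entry in your /etc/fstab contains the word swap; the sed edit keeps swap disabled across reboots:

# Turn off swap immediately
sudo swapoff -a
# Comment out swap entries so swap stays disabled after a reboot
sudo sed -i '/ swap / s/^/#/' /etc/fstab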

Before you start setting up Kubernetes cluster, it is recommended that you update your system to ensure all security updates are up-to-date.

Execute the below command:

sudo yum update -y

Install Docker

To configure a Kubernetes cluster, Docker must be installed. Execute the below command to install Docker on your system.

sudo yum install -y docker

Enable & start Docker service.

sudo systemctl enable docker && sudo systemctl start docker

Verify that the Docker version is 1.12 or greater.

sudo docker version

Output:

[rahil@k8s-master ~]$ sudo docker version
Client:
 Version:         1.12.6
 API version:     1.24
 Package version: docker-1.12.6-71.git3e8e77d.el7.centos.1.x86_64
 Go version:      go1.8.3
 Git commit:      3e8e77d/1.12.6
 Built:           Tue Jan 30 09:17:00 2018
 OS/Arch:         linux/amd64

Server:
 Version:         1.12.6
 API version:     1.24
 Package version: docker-1.12.6-71.git3e8e77d.el7.centos.1.x86_64
 Go version:      go1.8.3
 Git commit:      3e8e77d/1.12.6
 Built:           Tue Jan 30 09:17:00 2018
 OS/Arch:         linux/amd64

Install Kubernetes packages

Configure yum to install kubeadm, kubectl, and kubelet.

Copy the below content and execute it on your CentOS system.

sudo bash -c 'cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF'

Disable SELinux:

sudo setenforce 0

In order for the Kubernetes cluster to communicate internally, we have to disable SELinux; it is not currently fully supported. This may change in the future as SELinux support improves. Note that setenforce 0 only disables SELinux until the next reboot; to make the change persistent, set SELINUX=permissive in /etc/selinux/config.

Installing packages for Kubernetes:

On Master node:

sudo yum install -y kubelet kubeadm kubectl

Enable & start kubelet service:

sudo systemctl enable kubelet && sudo systemctl start kubelet

On worker nodes:

sudo yum install -y kubelet kubeadm kubectl

Copy the below content and execute it on both the CentOS master and worker nodes. An issue has been reported where iptables routes traffic incorrectly; the below settings ensure bridged traffic is passed through iptables correctly.

sudo bash -c 'cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF'

Execute the below command to apply the above changes.

sudo sysctl --system
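
You can verify that the settings took effect; both keys should report a value of 1. If sysctl reports them as unknown, the br_netfilter kernel module may need to be loaded first with sudo modprobe br_netfilter:

sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables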

Once you have completed the above steps on all your CentOS nodes, master and worker alike, let's go ahead and configure the master node.

Disable Firewall on Master node

Kubernetes uses iptables to manage inbound and outbound traffic. To avoid conflicts, we will disable firewalld on the CentOS 7 system. If you prefer to keep the firewall enabled, I recommend allowing port 6443 to permit communication from the worker nodes to the master node.

Disable:

sudo systemctl disable firewalld

Stop:

sudo systemctl stop firewalld

Check status:

sudo systemctl status firewalld
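
If you would rather keep firewalld running, a minimal alternative is to open the API server port instead of disabling the firewall entirely (6443 is the port used by kubeadm later in this tutorial):

sudo firewall-cmd --permanent --add-port=6443/tcp
sudo firewall-cmd --reload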

Configuring Kubernetes Master node

On your CentOS master node, execute the following commands:

sudo kubeadm init --pod-network-cidr 10.244.0.0/16

NOTE: Ensure SWAP is disabled on all your CentOS systems. Kubernetes cluster configuration will fail if swap is not disabled.

Once the above command completes, it will output the Kubernetes cluster information. Please store the token information somewhere safe: it will be needed to join worker nodes to the Kubernetes cluster.

Output from kubeadm init:

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join --token a2dc82.7e936a7ba007f01e 10.0.0.7:6443 --discovery-token-ca-cert-hash sha256:30aca9f9c04f829a13c925224b34c47df0a784e9ba94e132a983658a70ee2914
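
If you misplace this token later, you do not need to re-initialize the cluster. On kubeadm v1.9 and newer, the master can print a fresh, ready-to-run join command (note that the --print-join-command flag may not exist on older versions):

sudo kubeadm token create --print-join-command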

On the master node, apply the below changes after kubeadm init completes successfully:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Configuring Pod Networking

Before we set up the worker nodes, we need to ensure pod networking is functional. Pod networking is also a dependency for the kube-dns pod, which manages DNS for pods.

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/k8s-manifests/kube-flannel-rbac.yml

Ensure all pods are in running status by executing the below command:

kubectl get pods --all-namespaces

It may take some time depending on your system configuration and network speed, since all the images required to run the system pods are pulled from the internet.

Once all the pods are in Running status, let's configure the worker nodes.

Configure Kubernetes Worker nodes

To configure worker nodes to be part of the Kubernetes cluster, we need to use the kubeadm join command with the token received from the master node.

Execute the below command to join worker node to Kubernetes Cluster.

sudo kubeadm join --token a2dc82.7e936a7ba007f01e 10.0.0.7:6443 --discovery-token-ca-cert-hash sha256:30aca9f9c04f829a13c925224b34c47df0a784e9ba94e132a983658a70ee2914

Once the node has joined the cluster, you will see output similar to the below on your console.

[preflight] Running pre-flight checks.
        [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
        [WARNING FileExisting-crictl]: crictl not found in system path
[preflight] Starting the kubelet service
[discovery] Trying to connect to API Server "10.0.0.8:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.0.0.8:6443"
[discovery] Requesting info from "https://10.0.0.8:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "10.0.0.8:6443"
[discovery] Successfully established connection with API Server "10.0.0.8:6443"

This node has joined the cluster:
* Certificate signing request was sent to master and a response
  was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

On the Kubernetes master node, execute the below command to see the node status.

kubectl get nodes

Output for the above command

[rahil@k8s-master-node ~]$  kubectl get nodes
NAME                STATUS    ROLES     AGE       VERSION
k8s-master-node     Ready     master    28m       v1.9.3
k8s-worker-node-1   Ready     <none>    3m        v1.9.3

If the worker node's status shows Ready, you are ready to deploy pods to it.
