Containers Archives - TEKSpace Blog
https://blog.tekspace.io/tag/containers/
Tech tutorials for Linux, Kubernetes, PowerShell, and Azure

Deploying Kubernetes Dashboard in K3S Cluster
https://blog.tekspace.io/deploying-kubernetes-dashboard-in-k3s-cluster/
Wed, 30 Aug 2023 15:02:19 +0000

The post Deploying Kubernetes Dashboard in K3S Cluster appeared first on TEKSpace Blog.

Get the latest Kubernetes Dashboard and deploy
GITHUB_URL=https://github.com/kubernetes/dashboard/releases
VERSION_KUBE_DASHBOARD=$(curl -w '%{url_effective}' -I -L -s -S ${GITHUB_URL}/latest -o /dev/null | sed -e 's|.*/||')
sudo k3s kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/${VERSION_KUBE_DASHBOARD}/aio/deploy/recommended.yaml
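The `curl -w '%{url_effective}'` one-liner above works by following GitHub's `/releases/latest` redirect and then stripping everything up to the final slash with `sed`. A minimal sketch of that last step (the `v2.7.0` tag is just an assumed example value):

```shell
# GitHub redirects /releases/latest to /releases/tag/<version>;
# sed 's|.*/||' keeps only the final path segment, i.e. the version tag.
url="https://github.com/kubernetes/dashboard/releases/tag/v2.7.0"  # assumed redirect target
version=$(printf '%s' "$url" | sed -e 's|.*/||')
echo "$version"
# v2.7.0
```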

Create service account and role

In dashboard.admin-user.yml, enter the following values:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard

In dashboard.admin-user-role.yml, enter the following values:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard

Now apply the changes to deploy both to the K3S cluster:

sudo k3s kubectl create -f dashboard.admin-user.yml -f dashboard.admin-user-role.yml

Expose the service as NodePort to access it from a browser

sudo k3s kubectl edit svc kubernetes-dashboard -n kubernetes-dashboard

In edit mode, change type: ClusterIP to type: NodePort and save. Your file should look like the one below:

apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2020-10-27T14:32:58Z"
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
  resourceVersion: "72638"
  selfLink: /api/v1/namespaces/kubernetes-dashboard/services/kubernetes-dashboard
  uid: 8282464c-607f-4e40-ad5c-ee781e83d5f0
spec:
  clusterIP: 10.43.210.41
  externalTrafficPolicy: Cluster
  ports:
  - nodePort: 30353
    port: 443
    protocol: TCP
    targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
  sessionAffinity: None
  type: NodePort
status:
  loadBalancer: {}

You can get the port number by executing the below command:

sudo k3s kubectl get svc kubernetes-dashboard -n kubernetes-dashboard
NAME                   TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)         AGE
kubernetes-dashboard   NodePort   10.43.210.41   <none>        443:30353/TCP   3h39m

In my Kubernetes cluster, the assigned port is 30353, as shown in the output above. In your case, it might differ. This port is exposed on every master and worker node, so browse to any node's IP address with the port appended and you will see the login page.

Get token of a service account

sudo k3s kubectl -n kubernetes-dashboard describe secret admin-user-token

It will output a token in your console. Grab that token and paste it into the token input box.
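The token secret created for the service account actually carries a random suffix (for example `admin-user-token-7f2kx`); `kubectl describe` tolerates a name prefix, which is why the command above works. If you need the exact secret name for scripting, a filter like this sketch works — the sample names below are assumptions:

```shell
# Filters `kubectl get secret` style output down to the token secret's exact name.
# On the cluster you would pipe the real output in:
#   sudo k3s kubectl -n kubernetes-dashboard get secret | awk '/^admin-user-token-/ {print $1}'
printf '%s\n' \
  'default-token-abc12      kubernetes.io/service-account-token   3   1d' \
  'admin-user-token-7f2kx   kubernetes.io/service-account-token   3   1d' |
  awk '/^admin-user-token-/ {print $1}'
# admin-user-token-7f2kx
```

Note that on Kubernetes 1.24 and later, token secrets are no longer auto-created for service accounts; there, `kubectl -n kubernetes-dashboard create token admin-user` issues a token instead.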

My Dashboard link: https://192.168.1.21:30353

All done!

Learn about CP command in Linux
https://blog.tekspace.io/learn-about-cp-command-in-linux/
Tue, 15 Aug 2023 18:49:33 +0000

The post Learn about CP command in Linux appeared first on TEKSpace Blog.

The cp command in Linux is used to copy files and directories from one location to another. It stands for “copy” and is an essential tool for managing and duplicating files. The basic syntax of the cp command is as follows:

cp [options] source destination

Here, source represents the file or directory you want to copy, and destination is where you want to copy it to. Here’s a detailed explanation of the command and its options:

Options:

  • -r or --recursive: This option is used when you want to copy directories and their contents recursively. Without this option, the cp command will not copy directories.
  • -i or --interactive: This option prompts you before overwriting an existing destination file. It’s useful to prevent accidental overwrites.
  • -u or --update: This option copies only when the source file is newer than the destination file or when the destination file is missing.
  • -v or --verbose: This option displays detailed information about the files being copied, showing each file as it’s copied.
  • -p or --preserve: This option preserves the original file attributes such as permissions, timestamps, and ownership when copying.
  • -d or --no-dereference: This option is used to avoid dereferencing symbolic links; it copies symbolic links themselves instead of their targets.
  • --parents: This option preserves the directory structure when copying files, creating any necessary parent directories in the destination.
  • --backup: This option makes a backup copy of each existing destination file before overwriting it.

Source and Destination:

  • source: This is the path to the file or directory you want to copy. It can be either an absolute path or a relative path.
  • destination: This is the path to the location where you want to copy the source. It can also be either an absolute path or a relative path. If the destination is a directory, the source file or directory will be copied into it.

Examples:

Copy a file to a different location:

cp file.txt /path/to/destination/

Copy a directory and its contents recursively:

cp -r directory/ /path/to/destination/

Copy multiple files to a directory:

cp file1.txt file2.txt /path/to/destination/

Copy with preserving attributes and prompting for overwrite:

cp -i -p file.txt /path/to/destination/

Copy a file, preserving directory structure:

cp --parents file.txt /path/to/destination/

Copy and create backup copies of existing files:

cp --backup=numbered file.txt /path/to/destination/

Remember that improper use of the cp command can lead to data loss, so be cautious when overwriting files; options like -i and --backup help prevent accidental loss. Always double-check your commands before pressing Enter.
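To see `--backup=numbered` in action, here is a small self-contained sketch (GNU coreutils cp; the temporary paths and file names are arbitrary):

```shell
# --backup=numbered renames the existing destination to file.txt.~1~ (then ~2~, ...)
# before the new copy is written, so nothing is lost on overwrite.
tmp=$(mktemp -d)
echo "v1" > "$tmp/file.txt"
echo "v2" > "$tmp/new.txt"
cp --backup=numbered "$tmp/new.txt" "$tmp/file.txt"
ls "$tmp"
# file.txt  file.txt.~1~  new.txt
cat "$tmp/file.txt.~1~"
# v1
```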

Additional Examples:

Copy a File to a Different Location:

cp file.txt /path/to/destination/

Copy Multiple Files to a Directory:

cp file1.txt file2.txt /path/to/destination/

Copy Files and Preserve Attributes:

cp -p file.txt /path/to/destination/

Copy a File and Prompt Before Overwriting:

cp -i file.txt /path/to/destination/

Copy Only Newer Files:

cp -u newer.txt /path/to/destination/

Copy Files Verbosely (Display Detailed Information):

cp -v file1.txt file2.txt /path/to/destination/

Copy Files and Create Backup Copies:

cp --backup=numbered file.txt /path/to/destination/

Copy Symbolic Links (Not Dereferencing):

cp -d symlink.txt /path/to/destination/

Copy a File and Preserve Parent Directories:

cp --parents dir/file.txt /path/to/destination/

Copy a Directory and Its Contents to a New Directory:

cp -r directory/ new_directory/

Copy Files Using Wildcards:

cp *.txt /path/to/destination/

Copy Hidden Files:

cp -r source_directory/. destination_directory/
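A quick sketch showing why the trailing `/.` matters — it copies the directory's contents, dotfiles included, rather than the directory itself (temporary paths are used for illustration):

```shell
src=$(mktemp -d); dst=$(mktemp -d)
echo "secret" > "$src/.hidden"
echo "plain"  > "$src/visible.txt"
cp -r "$src/." "$dst/"   # '/.': copy the contents of src, including hidden files
ls -A "$dst"
# .hidden  visible.txt
```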

Copy Files Between Remote Servers (using SCP):

scp source_file remote_username@remote_host:/path/to/destination/

Copy Files Between Remote Servers (using SSH and tar):

tar cf - source_directory/ | ssh remote_host "cd /path/to/destination/ && tar xf -"
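The ssh/tar pipeline above streams a tar archive over the connection: the left tar writes to stdout and the remote tar unpacks from stdin. The same pattern can be tried locally without ssh (the paths here are temporary stand-ins):

```shell
src=$(mktemp -d); dst=$(mktemp -d)
mkdir "$src/source_directory"
echo "data" > "$src/source_directory/a.txt"
# Left side archives to stdout; right side unpacks from stdin in the destination.
( cd "$src" && tar cf - source_directory/ ) | ( cd "$dst" && tar xf - )
ls "$dst/source_directory"
# a.txt
```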

Copy Files Using Absolute Paths:

cp /absolute/path/to/source/file.txt /absolute/path/to/destination/

Copy Files with Progress Indicator (using rsync):

rsync -av --progress source/ /path/to/destination/

These examples should provide you with a variety of scenarios where the cp command can be used to copy files and directories in Linux. Remember to adjust the paths and options according to your specific needs.

Learn about LS command in Linux
https://blog.tekspace.io/learn-about-ls-command-in-linux/
Tue, 15 Aug 2023 18:33:54 +0000

The post Learn about LS command in Linux appeared first on TEKSpace Blog.

The ls command in Linux is used to list the files and directories in a specified directory. It provides various options and arguments to customize the output and the information displayed. Here’s a detailed explanation of the ls command:

Basic Usage:

ls [options] [file/directory]

Options:

  1. -a or --all: Shows hidden files (files starting with a dot .).
  2. -l: Displays a detailed listing with additional information, such as permissions, owner, group, size, modification date, and name.
  3. -h: When used with -l, shows file sizes in a human-readable format (e.g., KB, MB, GB).
  4. -R or --recursive: Recursively lists files and subdirectories.
  5. -S: Sorts files by size.
  6. -t: Sorts files by modification time, with the most recently modified files listed first.
  7. -r or --reverse: Reverses the order of sorting.
  8. -G or --no-group: Suppresses the display of group information in the long format.
  9. -o: Displays the long format listing without group information.
  10. -i or --inode: Prints the index number (inode) of each file.
  11. --color: Enables colorized output for different types of files.
  12. --help: Displays the help message for the ls command.

Examples:

  1. List all files and directories in the current directory:
ls
  2. List all files (including hidden files) in a directory:
ls -a
  3. List files with detailed information (long format):
ls -l
  4. List files with human-readable file sizes and in long format:
ls -lh
  5. List files recursively in all subdirectories:
ls -R
  6. List files sorted by size:
ls -S
  7. List files sorted by modification time (newest first):
ls -t
  8. List files in reverse order of modification time (oldest first):
ls -tr
  9. List only directories (excluding files):
ls -d */
  10. List files with colored output:
ls --color
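A small self-contained sketch of the `-t`/`-tr` sort order (GNU `touch -d` is used here to fabricate the timestamps; the file names are arbitrary):

```shell
tmp=$(mktemp -d)
touch -d '2023-01-01' "$tmp/old.txt"
touch -d '2023-06-01' "$tmp/new.txt"
ls -t "$tmp"     # lists new.txt before old.txt (newest first)
ls -tr "$tmp"    # lists old.txt before new.txt (reversed)
```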

Note:

The above examples cover some commonly used options of the ls command. You can combine multiple options to customize the output further. Additionally, you can specify a file or directory as an argument to ls to list its contents. The ls command is versatile and provides various ways to view and organize file information in a directory. For a complete list of options and details, you can refer to the ls command’s manual page by running:

man ls

Setup Kubernetes Cluster using K3S, MetalLB, LetsEncrypt on Bare Metal
https://blog.tekspace.io/setup-kubernetes-cluster-using-k3s-metallb-letsencrypt-on-bare-metal/
Mon, 26 Oct 2020 21:35:27 +0000

The post Setup Kubernetes Cluster using K3S, MetalLB, LetsEncrypt on Bare Metal appeared first on TEKSpace Blog.

Setup K3S Cluster

By default, Rancher K3S comes with Traefik 1.7. We will set up K3S without the Traefik ingress in this tutorial.

  1. Execute the below command on master node 1:
curl -sfL https://get.k3s.io | sh -s - server   --datastore-endpoint="mysql://user:pass@tcp(ip_address:3306)/databasename" --disable traefik --node-taint CriticalAddonsOnly=true:NoExecute --tls-san 192.168.1.2 --tls-san k3s.home.lab

Execute the above command on master node 2 to set up HA.
Validate the cluster setup:

$ sudo kubectl get node
NAME           STATUS   ROLES    AGE    VERSION
k3s-master-1   Ready    master   3m9s   v1.18.9+k3s1

Make sure you have HAProxy set up:

##########################################################
#               Kubernetes API LB
##########################################################
frontend kubernetes-frontend
    bind 192.168.1.2:6443
    mode tcp
    option tcplog
    default_backend kubernetes-backend

backend kubernetes-backend
    mode tcp
    option tcp-check
    balance roundrobin
    server k3s-master-1 192.168.1.10:6443 check fall 3 rise 2
    server k3s-master-2 192.168.1.20:6443 check fall 3 rise 2
  2. Join worker nodes to the K3S cluster.
    Get the node token from one of the master nodes by executing the below command:
sudo cat /var/lib/rancher/k3s/server/node-token
K105c8c5de8deac516ebgd454r45547481d70625ee3e5200acdbe8ea071191debd4::server:gd5de354807077fde4259fd9632ea045454

We will use the output value above to join the worker nodes:

curl -sfL https://get.k3s.io | K3S_URL=https://192.168.1.2:6443 K3S_TOKEN={{USE_TOKEN_FROM_ABOVE}} sh -
  3. Validate the K3S cluster state:
sudo kubectl get nodes
NAME                STATUS   ROLES    AGE     VERSION
k3s-master-1        Ready    master   15m     v1.18.9+k3s1
k3s-worker-node-1   Ready    <none>   3m44s   v1.18.9+k3s1
k3s-worker-node-2   Ready    <none>   2m52s   v1.18.9+k3s1
k3s-master-2        Ready    master   11m     v1.18.9+k3s1

MetalLB Setup

kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.4/manifests/namespace.yaml
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.4/manifests/metallb.yaml
kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"
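The `openssl rand -base64 128` in the secret-creation command above just generates an unpredictable key for MetalLB's memberlist gossip protocol: 128 random bytes, base64-encoded (which always comes out to 172 characters, wrapped across lines):

```shell
# Generate the same kind of key the memberlist secret uses.
SECRET_KEY=$(openssl rand -base64 128)
printf '%s' "$SECRET_KEY" | tr -d '\n' | wc -c
# 172
```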

Create a file called metallb-config.yaml and enter the following values:

apiVersion: v1
kind: ConfigMap
metadata:
    namespace: metallb-system
    name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250

Apply changes:

sudo kubectl apply -f metallb-config.yaml

Deploy sample application with service

kubectl create deploy nginx --image nginx
kubectl expose deploy nginx --port 80

Check status:

$ kubectl get svc,pods
NAME                                              TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)                      AGE
service/kubernetes                                ClusterIP      10.43.0.1       <none>          443/TCP                      44m
service/nginx                                     ClusterIP      10.43.14.116    <none>          80/TCP                       31s

NAME                                                 READY   STATUS    RESTARTS   AGE
pod/nginx-f89759699-25lpb                            1/1     Running   0          59s
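The nginx service above is a plain ClusterIP service, so MetalLB has nothing to assign yet. A minimal sketch of a LoadBalancer service that MetalLB would satisfy from the 192.168.1.240-192.168.1.250 pool configured earlier (the `nginx-lb` name is an assumption; `app: nginx` is the label `kubectl create deploy nginx` applies):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-lb
spec:
  type: LoadBalancer   # MetalLB assigns an EXTERNAL-IP from the configured pool
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
```

After applying it, `kubectl get svc nginx-lb` should show an EXTERNAL-IP from the pool instead of `<pending>`.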

Nginx Ingress setup

In this tutorial, I will be using helm to set up the Nginx ingress controller.

  1. Execute the following commands to set up Nginx ingress from a client machine with helm and kubectl configured:
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install home ingress-nginx/ingress-nginx

Check Ingress controller status:

kubectl --namespace default get services -o wide -w home-ingress-nginx-controller
  2. Set up the Ingress by creating home-ingress.yaml and adding the below values. Replace example.io with your own domain:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
  name: home-ingress
  namespace: default
spec:
  rules:
    - host: example.io
      http:
        paths:
          - backend:
              serviceName: nginx
              servicePort: 80
            path: /

Execute command to apply:

 kubectl apply -f home-ingress.yaml

Check the Ingress status with the `kubectl get ing` command:

$ kubectl get ing
NAME           CLASS    HOSTS           ADDRESS         PORTS   AGE
home-ingress   <none>   example.io   192.168.1.240   80      8m26s

Letsencrypt setup

  1. Execute below command to create namespaces, pods, and other related configurations:
kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.0.3/cert-manager.yaml

Once the above completes, let's validate the pod status.
2. Validate setup:

$ kubectl get pods --namespace cert-manager
NAME                                       READY   STATUS    RESTARTS   AGE
cert-manager-cainjector-76c9d55b6f-cp2jf   1/1     Running   0          39s
cert-manager-79c5f9946-qkfzv               1/1     Running   0          38s
cert-manager-webhook-6d4c5c44bb-4mdgc      1/1     Running   0          38s
  3. Set up the staging environment by applying the changes below. Update the email:
vi staging_issure.yaml

and paste the below values and save the file:

apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
 name: letsencrypt-staging
spec:
 acme:
   # The ACME server URL
   server: https://acme-staging-v02.api.letsencrypt.org/directory
   # Email address used for ACME registration
   email: john@example.com
   # Name of a secret used to store the ACME account private key
   privateKeySecretRef:
     name: letsencrypt-staging
   # Enable the HTTP-01 challenge provider
   solvers:
   - http01:
       ingress:
         class:  nginx

Apply changes:

kubectl apply -f staging_issure.yaml

We will apply the production issuer later in this tutorial; we should first test the SSL settings before switching to production certificates.

SSL setup with LetsEncrypt and Nginx Ingress

Before proceeding, please make sure your DNS is set up correctly with your cloud provider or in your home lab to allow traffic from the internet. LetsEncrypt uses HTTP validation to issue certificates, and it needs to reach the DNS name from which the certificate request was initiated.

Create new ingress file as shown below:

vi home-ingress-ssl.yaml

Copy and paste in above file:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/issuer: letsencrypt-staging
  name: home-ingress
  namespace: default
spec:
  tls:
  - hosts:
    - example.io
    secretName: home-example-io-tls
  rules:
  - host: example.io
    http:
      paths:
      - backend:
          serviceName: nginx
          servicePort: 80
        path: /

Apply changes:

kubectl apply -f home-ingress-ssl.yaml

Validate certificate creation:

kubectl describe certificate
Spec:
  Dns Names:
    example.io
  Issuer Ref:
    Group:      cert-manager.io
    Kind:       Issuer
    Name:       letsencrypt-staging
  Secret Name:  home-example-io-tls
Status:
  Conditions:
    Last Transition Time:        2020-10-26T20:19:15Z
    Message:                     Issuing certificate as Secret does not exist
    Reason:                      DoesNotExist
    Status:                      False
    Type:                        Ready
    Last Transition Time:        2020-10-26T20:19:18Z
    Message:                     Issuing certificate as Secret does not exist
    Reason:                      DoesNotExist
    Status:                      True
    Type:                        Issuing
  Next Private Key Secret Name:  home-example-io-tls-76dqg
Events:
  Type    Reason     Age   From          Message
  ----    ------     ----  ----          -------
  Normal  Issuing    10s   cert-manager  Issuing certificate as Secret does not exist
  Normal  Generated  8s    cert-manager  Stored new private key in temporary Secret resource "home-example-io-tls-76dqg"
  Normal  Requested  4s    cert-manager  Created new CertificateRequest resource "home-example-io-tls-h98zf"

Now browse your DNS URL and inspect the certificate. If the browser shows a certificate issued by the LetsEncrypt staging CA (untrusted, as expected at this stage), your LetsEncrypt certificate management has been set up successfully.

Set the production issuer to get a valid certificate

Create the production issuer:

vi production-issure.yaml

Copy and paste the below values into the above file. Update email:

apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
 name: letsencrypt-prod
spec:
 acme:
   # The ACME server URL
   server: https://acme-v02.api.letsencrypt.org/directory
   # Email address used for ACME registration
   email: user@example.com
   # Name of a secret used to store the ACME account private key
   privateKeySecretRef:
     name: letsencrypt-prod
   # Enable the HTTP-01 challenge provider
   solvers:
   - http01:
       ingress:
         class:  nginx

Apply changes:

kubectl apply -f production-issure.yaml

Update the home-ingress-ssl.yaml file you created earlier with the below values:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/issuer: letsencrypt-prod
  name: home-ingress
  namespace: default
spec:
  tls:
  - hosts:
    - example.io
    secretName: home-example-io-tls
  rules:
  - host: example.io
    http:
      paths:
      - backend:
          serviceName: nginx
          servicePort: 80
        path: /

Apply changes:

kubectl apply -f home-ingress-ssl.yaml

Validate changes:

NOTE: Give it some time as it may take 2-5 mins to get the cert request to complete.

kubectl describe certificate

Your output should look like the below once a valid certificate has been issued.

Spec:
  Dns Names:
    example.io
  Issuer Ref:
    Group:      cert-manager.io
    Kind:       Issuer
    Name:       letsencrypt-prod
  Secret Name:  home-example-io-tls
Status:
  Conditions:
    Last Transition Time:  2020-10-26T20:43:35Z
    Message:               Certificate is up to date and has not expired
    Reason:                Ready
    Status:                True
    Type:                  Ready
  Not After:               2021-01-24T19:43:25Z
  Not Before:              2020-10-26T19:43:25Z
  Renewal Time:            2020-12-25T19:43:25Z
  Revision:                2
Events:
  Type    Reason     Age                From          Message
  ----    ------     ----               ----          -------
  Normal  Issuing    24m                cert-manager  Issuing certificate as Secret does not exist
  Normal  Generated  24m                cert-manager  Stored new private key in temporary Secret resource "home-example-io-tls-76dqg"
  Normal  Requested  24m                cert-manager  Created new CertificateRequest resource "home-example-io-tls-h98zf"
  Normal  Issuing    105s               cert-manager  Issuing certificate as Secret was previously issued by Issuer.cert-manager.io/letsencrypt-staging
  Normal  Reused     103s               cert-manager  Reusing private key stored in existing Secret resource "home-example-io-tls"
  Normal  Requested  100s               cert-manager  Created new CertificateRequest resource "home-example-io-tls-ccxgf"
  Normal  Issuing    30s (x2 over 23m)  cert-manager  The certificate has been successfully issued

Browse your application and check the certificate. If the browser now shows a trusted certificate issued by the LetsEncrypt certificate authority, you have successfully requested a valid certificate.

Managing MySQL database using PHPMyAdmin in Kubernetes
https://blog.tekspace.io/managing-mysql-database-using-php/
Tue, 15 Sep 2020 01:35:49 +0000

The post Managing MySQL database using PHPMyAdmin in Kubernetes appeared first on TEKSpace Blog.


Make sure you are familiar with connecting to a Kubernetes cluster, have the Nginx ingress controller configured with a certificate manager, and have a MySQL database pod deployed; only then can you proceed. Follow this guide if you do not have MySQL deployed, and follow this guide to set up Nginx ingress and the cert manager.

PhpMyAdmin is a popular open source tool to manage MySQL database server. Learn how to create a deployment and expose it as a service to access PhpMyAdmin from the internet using Nginx ingress controller.

  1. Create a deployment file called phpmyadmin-deployment.yaml and paste the following values:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: phpmyadmin-deployment
  labels:
    app: phpmyadmin
spec:
  replicas: 1
  selector:
    matchLabels:
      app: phpmyadmin
  template:
    metadata:
      labels:
        app: phpmyadmin
    spec:
      containers:
        - name: phpmyadmin
          image: phpmyadmin/phpmyadmin
          ports:
            - containerPort: 80
          env:
            - name: PMA_HOST
              value: mysql-service
            - name: PMA_PORT
              value: "3306"
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysqldb-secrets
                  key: ROOT_PASSWORD

NOTE: The ROOT_PASSWORD value is consumed from Kubernetes secrets. If you want to learn more about Kubernetes secrets, follow this guide.

  2. Execute the below command to create the new deployment:
kubectl apply -f phpmyadmin-deployment.yaml

Output:

deployment.apps/phpmyadmin-deployment created

Exposing PhpMyAdmin via Services

  1. Create a new file called phpmyadmin-service.yaml and paste the following values:
apiVersion: v1
kind: Service
metadata:
  name: phpmyadmin-service
spec:
  type: NodePort
  selector:
    app: phpmyadmin
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  2. Execute the below command to create the service:
kubectl apply -f phpmyadmin-service.yaml

Output:

service/phpmyadmin-service created

Once you are done with the above configuration, it’s time to expose the PhpMyAdmin service via the internet.

I use DigitalOcean-managed Kubernetes. I manage my own DNS, and DigitalOcean automatically creates a load balancer for my Nginx ingress controller. Once again, this guide will be very helpful if you want to follow along.

Nginx Ingress configuration

  1. Create a new YAML file called phpmyadmin-ingress.yaml and paste the following values:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: phpmyadmin-ingress
  annotations:
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
spec:
  tls:
  - hosts:
    - phpmyadmin.example.com
    secretName: echo-tls
  rules:
  - host: phpmyadmin.example.com
    http:
      paths:
      - backend:
          serviceName: phpmyadmin-service
          servicePort: 80
  2. Apply the changes:
kubectl apply -f phpmyadmin-ingress.yaml

How to Deploy MySQL database on Digital Ocean Managed Kubernetes Cluster
https://blog.tekspace.io/how-to-deploy-mysql-database-on-digital-ocean-managed-kubernetes-cluster/
Mon, 14 Sep 2020 18:40:34 +0000

The post How to Deploy MySQL database on Digital Ocean Managed Kubernetes Cluster appeared first on TEKSpace Blog.


NOTE: This tutorial assumes you know how to connect to a Kubernetes cluster.

Create secrets to securely store MySQL credentials

  1. Create base64-encoded values by following one of these guides:

     Windows 10 guide | Linux guide

  2. Create a new file called: mysql-secret.yaml and paste the value below.
    NOTE: You must first capture the value in base 64 by following the guide in step 1.
---
apiVersion: v1
kind: Secret
metadata:
  name: mysqldb-secrets
type: Opaque
data:
  ROOT_PASSWORD: c3VwZXItc2VjcmV0LXBhc3N3b3JkLWZvci1zcWw= 
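The ROOT_PASSWORD value above is plain base64 (it decodes to super-secret-password-for-sql, matching the 29-byte size shown later). To produce such a value yourself:

```shell
# -n is important: without it, echo appends a newline that would
# become part of the decoded password.
echo -n 'super-secret-password-for-sql' | base64
# c3VwZXItc2VjcmV0LXBhc3N3b3JkLWZvci1zcWw=

# Round-trip to verify:
echo 'c3VwZXItc2VjcmV0LXBhc3N3b3JkLWZvci1zcWw=' | base64 --decode
# super-secret-password-for-sql
```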
  3. Execute the below command to create the secret:
kubectl apply -f mysql-secret.yaml

Output:

secret/mysqldb-secrets created
  4. To see if the secret is created, execute the below command:
kubectl get secret
NAME                  TYPE                                  DATA   AGE
default-token-jqq69   kubernetes.io/service-account-token   3      6h20m
echo-tls              kubernetes.io/tls                     2      5h19m
mysqldb-secrets       Opaque                                1      42s
  5. To see the description of the secret, execute the below command:
kubectl describe secret mysqldb-secrets
Name:         mysqldb-secrets
Namespace:    default
Labels:       <none>
Annotations:  <none>

Type:  Opaque

Data
====
ROOT_PASSWORD:  29 bytes

Persistent volume and MySQL deployment

  1. Create a persistent volume YAML file called: mysql-pvc.yaml and paste the following values:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pvc
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/mysql-data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pvc-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: do-block-storage
  2. Create a new deployment YAML file called: mysql-deployment.yaml and paste the following values:
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  ports:
  - port: 3306
  selector:
    app: mysql
  clusterIP: None
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - image: mysql:5.6
        name: mysql
        env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysqldb-secrets
              key: ROOT_PASSWORD
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-pvc-claim

Execute the below command to create persistent volume:

kubectl apply -f mysql-pvc.yaml

Output:

persistentvolume/mysql-pvc created
persistentvolumeclaim/mysql-pvc-claim created

Execute the below command to deploy MySQL pod:

kubectl apply -f mysql-deployment.yaml

Output:

service/mysql created
deployment.apps/mysql created
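kubectl apply returns as soon as the objects are created, not when the pod is actually running. A small polling helper can make follow-up steps, such as loading a schema, more reliable (a sketch; it assumes kubectl is configured for the cluster and a Deployment name is passed in):

```shell
# wait_ready DEPLOYMENT [TRIES]: poll until readyReplicas equals the
# desired replica count, retrying every 2 seconds up to TRIES attempts.
wait_ready() {
  name=$1; tries=${2:-60}; i=0
  while [ "$i" -lt "$tries" ]; do
    ready=$(kubectl get deployment "$name" -o jsonpath='{.status.readyReplicas}')
    want=$(kubectl get deployment "$name" -o jsonpath='{.spec.replicas}')
    [ -n "$ready" ] && [ "$ready" = "$want" ] && return 0
    i=$((i+1)); sleep 2
  done
  return 1
}
# Usage: wait_ready mysql && echo "mysql is up"
```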

Exposing MySQL as a Service

  1. Create a file called mysql-service.yaml and paste the following values:
apiVersion: v1
kind: Service
metadata:
  name: mysql-service
spec:
  selector:
    app: mysql
  ports:
  - protocol: TCP
    port: 3306
    targetPort: 3306
  2. Execute the below command to create a service for MySQL:
kubectl apply -f mysql-service.yaml

Output:

service/mysql-service created
  3. To confirm if the service is created successfully, execute the below command:
kubectl get svc

Output:

NAME            TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
echo1           ClusterIP   10.245.179.199   <none>        80/TCP     6h4m
echo2           ClusterIP   10.245.58.44     <none>        80/TCP     6h2m
kubernetes      ClusterIP   10.245.0.1       <none>        443/TCP    6h33m
mysql           ClusterIP   None             <none>        3306/TCP   4m57s
mysql-service   ClusterIP   10.245.159.76    <none>        3306/TCP   36s
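Note that two entries point at the same pods: mysql is the headless service (CLUSTER-IP None) created alongside the Deployment, and mysql-service exposes a ClusterIP. Inside the cluster, either is reachable by DNS following the usual <service>.<namespace>.svc.<cluster-domain> convention, sketched here as a tiny helper (the namespace and cluster domain defaults are assumptions):

```shell
# svc_fqdn SERVICE [NAMESPACE] [CLUSTER_DOMAIN]: build the in-cluster DNS
# name for a Service, defaulting to namespace "default" / "cluster.local".
svc_fqdn() {
  printf '%s.%s.svc.%s\n' "$1" "${2:-default}" "${3:-cluster.local}"
}
svc_fqdn mysql-service   # -> mysql-service.default.svc.cluster.local
```

A client pod in the default namespace can also use the short name mysql-service; the full form is needed across namespaces.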

The post How to Deploy MySQL database on Digital Ocean Managed Kubernetes Cluster appeared first on TEKSpace Blog.

Setup Windows Node with Kubernetes 1.14 https://blog.tekspace.io/setup-windows-node-with-kubernetes-1-14/ Thu, 04 Apr 2019 21:40:30 +0000

Kubernetes 1.14 now provides out-of-the-box support for Windows worker nodes, allowing Windows containers to run within a Kubernetes cluster. This feature was in preview for a long time, and it is now production-ready. This is a great opportunity for the major cloud providers to bring Kubernetes 1.14 into their offerings, so their customers can start migrating applications from the Windows virtualization platform to Windows containers sooner.

NOTE: Azure, Google, IBM, and AWS now offer managed Kubernetes services with free cluster management, so you no longer have to run the control plane yourself. Windows containers are not offered in these services yet but should be available soon; check their official websites for more information.

In this tutorial, I will go over how to set up a Windows node and join it to an existing Kubernetes 1.14 cluster. If you do not have a Kubernetes cluster and would like to learn how to set one up, check out the prerequisites below.

Prerequisites

NOTE: Before we move forward, ensure you have successfully setup a Kubernetes 1.14 cluster on your Linux machine. If not, check out the prerequisites.

Enable mixed OS scheduling

The below guide was referenced from Microsoft documentation.

Login to your master node and execute the below commands.

cd ~ && mkdir -p kube/yaml && cd kube/yaml

Confirm kube-proxy DaemonSet is set to Rolling Update:

kubectl get ds/kube-proxy -o go-template='{{.spec.updateStrategy.type}}{{"\n"}}' --namespace=kube-system

Download node-selector-patch from GitHub:

wget https://raw.githubusercontent.com/Microsoft/SDN/master/Kubernetes/flannel/l2bridge/manifests/node-selector-patch.yml

Patch your kube-proxy

kubectl patch ds/kube-proxy --patch "$(cat node-selector-patch.yml)" -n=kube-system

Check the status of kube-proxy

kubectl get ds -n kube-system
[rahil@k8s-master-node yaml]$ kubectl get ds -n kube-system
NAME                      DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR                                               AGE
kube-flannel-ds-amd64     2         2         2       0            2           beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux   106m
kube-flannel-ds-arm       0         0         0       0            0           beta.kubernetes.io/arch=arm                                 106m
kube-flannel-ds-arm64     0         0         0       0            0           beta.kubernetes.io/arch=arm64                               106m
kube-flannel-ds-ppc64le   0         0         0       0            0           beta.kubernetes.io/arch=ppc64le                             106m
kube-flannel-ds-s390x     0         0         0       0            0           beta.kubernetes.io/arch=s390x                               106m
kube-proxy                2         2         2       2            2           beta.kubernetes.io/os=linux                                 21h

Your kube-proxy DaemonSet's NODE SELECTOR column should now show beta.kubernetes.io/os=linux applied.
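For reference, the node-selector-patch.yml downloaded above is tiny: it pins the DaemonSet's pod template to Linux nodes. The sketch below writes out the assumed patch content (matching the beta.kubernetes.io/os=linux selector shown in the output) and checks it locally:

```shell
# Write the (assumed) patch content and confirm the selector it applies.
cat > /tmp/node-selector-patch.yml <<'EOF'
spec:
  template:
    spec:
      nodeSelector:
        beta.kubernetes.io/os: linux
EOF
grep -q 'beta.kubernetes.io/os: linux' /tmp/node-selector-patch.yml && echo "patch targets Linux nodes"
```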

Setting up flannel networking

Below guide was referenced from Microsoft documentation.

Since I already have kube-flannel set up from the previous tutorial, I will go ahead and edit it following the guide below and update the values accordingly.

On your master node, edit kube-flannel and apply changes that are needed to configure windows worker node.

kubectl edit cm -n kube-system kube-flannel-cfg

If you already know how to use the vi editor, you should be able to navigate within edit mode. Go ahead and find the below block of code and update it as shown:

cni-conf.json: |
    {
      "name": "vxlan0",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }

And your net-conf.json should look as shown below:

net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan",
        "VNI" : 4096,
        "Port": 4789
      }
    }

Once you have updated your kube-flannel configmap, go ahead and save it to apply those changes.

Target your kube-flannel to only Linux by executing the below command:

kubectl patch ds/kube-flannel-ds-amd64 --patch "$(cat node-selector-patch.yml)" -n=kube-system

Install Docker on your Windows node

Install-Module -Name DockerMsftProvider -Repository PSGallery -Force
Install-Package -Name Docker -ProviderName DockerMsftProvider
Restart-Computer -Force

Download and stage Kubernetes packages

Step 1: Open PowerShell as an administrator and execute the below command to create a directory called k.

mkdir c:\k; cd c:\k

Step 2: From the Kubernetes 1.14.0 release page on GitHub, download kubernetes-node-windows-amd64.tar.gz.

Step 3: Extract the package to c:\k path on your Windows node.

NOTE: You may have to use a third-party tool to extract tar and gz files. I recommend using portable 7-Zip from here, so that you don't have to install it.

Find kubeadm, kubectl, kubelet, and kube-proxy and copy them to the Windows node under c:\k\.

Copy Kubernetes certificate file from master node

Copy the Kubernetes config file from ~/.kube/config in your user home directory on the master node to c:\k\config on the Windows node.

You can use scp or WinSCP to download the config file from the master node to the Windows node.

Add paths to environment variables

Open PowerShell as an administrator and execute the following commands:

$env:Path += ";C:\k"; $env:KUBECONFIG="C:\k\config"; [Environment]::SetEnvironmentVariable("Path", $env:Path + ";C:\k", [EnvironmentVariableTarget]::Machine); [Environment]::SetEnvironmentVariable("KUBECONFIG", "C:\k\config", [EnvironmentVariableTarget]::User)

Reboot your system before moving forward.

Joining Windows Server node to Master node

To join the Flannel network, you first need to download the start script.

Step 1: Open PowerShell as Administrator:

[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12
wget https://raw.githubusercontent.com/Microsoft/SDN/master/Kubernetes/flannel/start.ps1 -OutFile c:\k\start.ps1

Step 2: Navigate to c:\k\

cd c:\k\

Step 3: Execute the below command to join Flannel cluster

.\start.ps1 -ManagementIP 192.168.0.123 -NetworkMode overlay -InterfaceName Ethernet -Verbose

Replace ManagementIP with your Windows node IP address. You can execute ipconfig to get these details. To understand the above command, please refer to this guide from Microsoft.

PS C:\k> .\kubectl.exe get nodes
NAME                STATUS     ROLES    AGE   VERSION
k8s-master-node     Ready      master   35h   v1.14.0
k8s-worker-node-1   Ready      <none>   35h   v1.14.0
win-uq3cdgb5r7g     Ready      <none>   11m   v1.14.0

Testing windows containers

If everything went well and your Windows node joined the cluster successfully, you can deploy a Windows container to test that everything works as expected. Execute the below commands to deploy a Windows container.

Download YAML file:

wget https://raw.githubusercontent.com/Microsoft/SDN/master/Kubernetes/flannel/l2bridge/manifests/simpleweb.yml -O win-webserver.yaml

Create new deployment:

kubectl apply -f .\win-webserver.yaml

Check the status of container:

kubectl get pods -o wide -w

Output

PS C:\k> .\kubectl.exe get pods -o wide -w
NAME                            READY   STATUS              RESTARTS   AGE   IP       NODE              NOMINATED NODE   READINESS GATES
win-webserver-cfcdfb59b-fkqxg   0/1     ContainerCreating   0          40s   <none>   win-uq3cdgb5r7g   <none>           <none>
win-webserver-cfcdfb59b-jbm7s   0/1     ContainerCreating   0          40s   <none>   win-uq3cdgb5r7g   <none>           <none>

Troubleshooting

If you are receiving something like the error below, it means your kubeletwin/pause image wasn't built correctly. After spending several hours, I dug through everything the start.ps1 script does and found that when the Docker image was built, it didn't use the correct version of the base container image.

Issue

Error response from daemon: CreateComputeSystem 229d5b8cf2ca94c698153f3ffed826f4ff69bff98d12137529333a1f947423e2: The container operating system does not match the host operating system.
(extra info: {"SystemType":"Container","Name":"229d5b8cf2ca94c698153f3ffed826f4ff69bff98d12137529333a1f947423e2","Owner":"docker","VolumePath":"\\\\?\\Volume{d03ade10-14ef-4486-aa63-406f2a7e5048}","IgnoreFlushesDuringBoot":true,"LayerFolderPath":"C:\\ProgramData\\docker\\windowsfilter\\229d5b8cf2ca94c698153f3ffed826f4ff69bff98d12137529333a1f947423e2","Layers":[{"ID":"7cf9a822-5cb5-5380-98c3-99885c3639f8","Path":"C:\\ProgramData\\docker\\windowsfilter\\83e740543f7683c25c7880388dbe2885f32250e927ab0f2119efae9f68da5178"},{"ID":"600d6d6b-8810-5bf3-ad01-06d0ba1f97a4","Path":"C:\\ProgramData\\docker\\windowsfilter\\529e04c75d56f948819cd62e4886d865d8faac7470be295e7116ddf47ca15251"},{"ID":"f185c0c0-eccf-5ff9-b9fb-2939562b75c3","Path":"C:\\ProgramData\\docker\\windowsfilter\\7640b81c6fff930a838e97c6c793b4fa9360b6505718aa84573999aa41223e80"}],"HostName":"229d5b8cf2ca","HvPartition":false,"EndpointList":["CE799786-A781-41ED-8B1F-C91DFEDB75A9"],"AllowUnqualifiedDNSQuery":true}).

Solution

  1. Go to c:\k\ and open Dockerfile.
  2. Update the first line to FROM mcr.microsoft.com/windows/nanoserver:1809 and save the file.
  3. Execute the below command to build an image as administrator from a PowerShell console.
cd c:\k\; docker build -t kubeletwin/pause .
  4. Open win-webserver.yaml and update the image tag to image: mcr.microsoft.com/windows/servercore:1809.
  5. Delete and re-apply your deployment by executing the below commands.
kubectl delete -f .\win-webserver.yaml
kubectl apply -f .\win-webserver.yaml

Now all your pods should show in running state.

PS C:\k> kubectl get pods
NAME                            READY   STATUS    RESTARTS   AGE
win-webserver-cfcdfb59b-gk6g9   1/1     Running   0          6m44s
win-webserver-cfcdfb59b-q4zxz   1/1     Running   0          6m44s

Setup Kubernetes 1.14 Cluster on CentOS 7.6 https://blog.tekspace.io/setup-kubernetes-1-14-cluster-on-centos-7-6/ Wed, 03 Apr 2019 01:56:15 +0000

This tutorial will showcase the step-by-step process of setting up a Kubernetes 1.14 cluster, allowing organizations and communities to leverage the eagerly anticipated new features, especially when it comes to Windows containers. Kubernetes now offers Windows containers out of the box and allows you to add Windows nodes to a Kubernetes cluster.

NOTE: Please make sure swap is disabled on master and worker nodes for Kubernetes setup to be successful. You can follow the guide from here.

Prerequisites

  • Swap is disabled.
  • You should know how to install CentOS 7 and knowledge of sudo user.

Master node Tutorial

System updates

Let’s go ahead and first update our Linux system with all security patches or any other upgrades that will ensure our system is up-to-date.

sudo yum update -y

After your system has been updated, we are now ready to set up a Kubernetes cluster. We will first set up Docker and then set up Kubernetes.

Install and setup Master and Worker Nodes

Please ensure you have applied the following steps to both master and worker nodes before moving on to steps specific to each node. The below steps are common for both master and worker nodes.

Install and Setup Docker

Execute the below command to install Docker.

sudo yum install -y docker

Now we need to enable and start Docker as a service.

sudo systemctl enable docker && sudo systemctl start docker

To verify if you have Docker version 1.13 and higher, execute the below command.

sudo docker version
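If you want to script that version check rather than eyeball it, dotted versions can be compared with sort -V. A sketch; the commented docker version call assumes the Docker CLI is installed:

```shell
# version_ge A B: succeed when dotted version A >= version B.
version_ge() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}
# Example: version_ge "$(sudo docker version --format '{{.Server.Version}}')" 1.13
version_ge 18.09.1 1.13 && echo "Docker version is new enough"
```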

Install Kubernetes packages

In order to grab the latest packages for Kubernetes, we need to configure our yum repository. Copy and paste the below block to create a new config file for the Kubernetes yum repo.

sudo bash -c 'cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kube*
EOF'

Disable SELinux to prevent any communication issues on all the nodes.

sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

Install the Kubernetes packages:

sudo yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes

After the installation is completed, let’s enable kubelet as a service.

sudo systemctl enable kubelet && sudo systemctl start kubelet

Master node setup

Allow 6443 and 10250 from firewalld on master node

sudo firewall-cmd --permanent --add-port=6443/tcp && sudo firewall-cmd --permanent --add-port=10250/tcp && sudo firewall-cmd --reload

NOTE: If you do not execute the above commands, you will see the below warning during Kubernetes initialization.
[WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open, or your cluster may not function correctly.
error execution phase preflight: [preflight] Some fatal errors occurred:

Set IPTables settings
Copy and paste the below lines on your master and worker nodes.

sudo bash -c 'cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF'

Apply the changes by executing the below command.

sudo sysctl --system
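To confirm the two settings actually took effect (both keys should read 1), the check can be scripted. A sketch; the keys only exist once the br_netfilter module is loaded:

```shell
# check_bridge_sysctls: succeed only when both bridge-nf-call sysctls read 1.
check_bridge_sysctls() {
  for key in net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables; do
    [ "$(sysctl -n "$key")" = "1" ] || return 1
  done
}
# Usage: check_bridge_sysctls && echo "bridge sysctls OK"
```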

Load br_netfilter module

Load the module and confirm it is present:

sudo modprobe br_netfilter
sudo lsmod | grep br_netfilter

Configure Kubernetes Master node

Now that we are done installing the required packages and configuring the system, let's go ahead and start the cluster configuration.

First, we need to get all the images that are going to be used during Kubernetes initialization. This step is optional, since kubeadm pulls them automatically during initialization, but I recommend pulling the images first so initialization doesn't stall on downloads.

sudo kubeadm config images pull

After all the images are pulled, let's get started with the cluster setup.

sudo kubeadm init --pod-network-cidr=10.244.0.0/16

During initialization, if you receive the following error, it means you didn't disable swap. Disable swap, reboot your system, and try again.

[ERROR Swap]: running with swap on is not supported. Please disable swap
[preflight] If you know what you are doing, you can make a check non-fatal with --ignore-preflight-errors=...

Or, if you received the below error, ensure you have applied the correct IPTable settings as provided above.

[ERROR FileContent–proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables contents are not set to 1
[preflight] If you know what you are doing, you can make a check non-fatal with --ignore-preflight-errors=...

After you have successfully set up your Kubernetes cluster, your output should look similar to the below. Make a note of the kubeadm join command; you will need it to join worker nodes to the cluster.

[init] Using Kubernetes version: v1.14.0
[preflight] Running pre-flight checks
        [WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master-node kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.0.120]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master-node localhost] and IPs [192.168.0.120 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master-node localhost] and IPs [192.168.0.120 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 16.501860 seconds
[upload-config] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.14" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --experimental-upload-certs
[mark-control-plane] Marking the node k8s-master-node as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-master-node as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 3j2pkk.xk7tnltycyz2xh5n
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.0.120:6443 --token khm95w.mo0wwenu2o9hglls \
    --discovery-token-ca-cert-hash sha256:aeb0ca593b63c8d674719858fd2397825825cebc552e3c165f00edb9671d6e32

Add the cluster settings to your regular user account to be able to access the Kubernetes cluster locally:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Apply network settings for pods

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml

That's it! You have your master node set up.

Worker node setup

If you have more than one node, apply the below steps to each worker node.

Use the information received from your master node to join the cluster. The below information may be different for you.

sudo kubeadm join 192.168.0.120:6443 --token khm95w.mo0wwenu2o9hglls \
    --discovery-token-ca-cert-hash sha256:aeb0ca593b63c8d674719858fd2397825825cebc552e3c165f00edb9671d6e32
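If you lose the kubeadm init output, the --discovery-token-ca-cert-hash value can be recomputed from the cluster's CA certificate (the kubeadm default path is /etc/kubernetes/pki/ca.crt on the master). A sketch, wrapped as a function so it can be pointed at any certificate file; it assumes the openssl CLI is available:

```shell
# ca_cert_hash CERT_FILE: print the discovery-token-ca-cert-hash value,
# i.e. sha256 of the certificate's DER-encoded public key.
ca_cert_hash() {
  openssl x509 -pubkey -noout -in "$1" \
    | openssl pkey -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 \
    | awk '{print "sha256:" $NF}'
}
# Usage: ca_cert_hash /etc/kubernetes/pki/ca.crt
```

A fresh token itself can be created on the master with kubeadm token create.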

If you receive the following output, it means you have successfully connected to the Kubernetes master node.

[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.14" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

To check if your node joined the cluster, execute the below command from master node.

kubectl get nodes

Your output should look something like below.

NAME                STATUS     ROLES    AGE     VERSION
k8s-master-node     NotReady   master   37m     v1.14.0
k8s-worker-node-1   NotReady   <none>   8m19s   v1.14.0

Troubleshooting Master node issues

If you are receiving CrashLoopBackOff for coredns as shown below, it is likely your firewalld on worker node is blocking connectivity with the master node.

[rahil@k8s-master-node ~]$ kubectl get pods -A -o wide
NAMESPACE     NAME                                      READY   STATUS             RESTARTS   AGE   IP              NODE                NOMINATED NODE   READINESS GATES
kube-system   coredns-fb8b8dccf-9jd8r                   0/1     CrashLoopBackOff   15         19h   10.244.1.7      k8s-worker-node-1   <none>           <none>
kube-system   coredns-fb8b8dccf-kfjsz                   0/1     CrashLoopBackOff   15         19h   10.244.1.6      k8s-worker-node-1   <none>           <none>

The recommended solution is to stop firewalld completely to resolve this issue. Execute the below command to stop firewalld on your worker nodes.

sudo systemctl disable firewalld && sudo systemctl stop firewalld && sudo systemctl status firewalld

Another error you may run into during kubeadm init is a timeout waiting for the control plane:

[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

Unfortunately, an error has occurred:
	timed out waiting for the condition

This error is likely caused by:
	- The kubelet is not running
	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	- 'systemctl status kubelet'
	- 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
Here is one example how you may list all Kubernetes containers running in docker:
	- 'docker ps -a | grep kube | grep -v pause'
	Once you have found the failing container, you can inspect its logs with:
	- 'docker logs CONTAINERID'

If your master node setup wasn’t successful, and you are seeing the above errors, I recommend starting all over again. I spent several hours and ended up re-imaging my VM and reconfiguring all of it step by step. Good luck!

Optional but recommended setting for Kubernetes dashboard

From the master node, execute the below command to create a new deployment for the Kubernetes dashboard. You can also reference their GitHub site for more information.

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml

Manage Docker Images locally and in remote Container Registry https://blog.tekspace.io/manage-docker-images-locally-and-in-remote-container-registry/ Sun, 31 Mar 2019 21:02:32 +0000

Managing Docker images is very important, much like managing application source code in a version-controlled repository such as Git. Docker provides similar capabilities: images can be managed locally on your development machine and in a remote container registry, also known as Docker Hub.

In this tutorial, I will demonstrate a set of commands on how to manage Docker images both locally and remotely.

Prerequisite

Managing images locally

1. List Docker Images

To view Docker images locally, you can run docker images, and it will list all the images in the console. Execute the below command from an elevated PowerShell or command-line tool to see the output.

docker images

Output

REPOSITORY                             TAG                 IMAGE ID            CREATED             SIZE
demo/webappcore                        2.2.0               9729270fe1ac        38 hours ago        401MB
<none>                                 <none>              2822fcdec81d        38 hours ago        403MB
<none>                                 <none>              807f7b4b42c1        38 hours ago        398MB
<none>                                 <none>              659fbabfde96        38 hours ago        398MB
<none>                                 <none>              ad0df2c81cf1        38 hours ago        397MB
<none>                                 <none>              97a33d1a133d        38 hours ago        395MB
mcr.microsoft.com/dotnet/core/aspnet   2.2                 36e5a01ef28f        3 days ago          395MB
hello-world                            nanoserver          7dddd19ddc59        2 months ago        333MB
nanoserver/iis                         latest              7eac2eab1a5c        9 months ago        1.29GB

NOTE: I am using Windows 10 to demonstrate Docker image management.

Next, we will limit the list and only print images for a given repository name. The command is docker images [repository name].

docker images hello-world

Output

REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
hello-world         nanoserver          7dddd19ddc59        2 months ago        333MB

You can also add an additional filter to the above command by providing a tag. The command is docker images [repository name]:[tag].

docker images hello-world:nanoserver

2. Deleting Docker images

Sometimes you may want to delete an image that you no longer need. Deleting an image permanently removes it from your local machine. The command is docker rmi [image name]:[tag].

NOTE: There are many ways to delete an image. You can also reference Docker official website.

docker rmi hello-world:nanoserver

NOTE: If you see an error like this "Error response from daemon: conflict: unable to remove repository reference "hello-world:nanoserver" (must force) - container 3db17e64fadb is using its referenced image 7dddd19ddc59", it means a container on your machine is still referencing the image.

To remove it forcefully, you can append -f to the above command. It will remove the image even if a container references it.

docker rmi hello-world:nanoserver -f

You should see output similar to the one shown below.

PS C:\> docker rmi hello-world:nanoserver -f
Untagged: hello-world:nanoserver
Untagged: hello-world@sha256:ea56d430e69850b80cd4969b2cbb891db83890c7bb79f29ae81f3d0b47a58dd9
Deleted: sha256:7dddd19ddc595d0cbdfb0ae0a61e1a4dcf8f35eb4801957a116ff460378850da

Now, let’s go ahead and execute docker images. You will see your image was successfully deleted.

PS C:\> docker images
REPOSITORY                             TAG                 IMAGE ID            CREATED             SIZE
demo/webappcore                        2.2.0               9729270fe1ac        38 hours ago        401MB
<none>                                 <none>              2822fcdec81d        38 hours ago        403MB
<none>                                 <none>              807f7b4b42c1        39 hours ago        398MB
<none>                                 <none>              659fbabfde96        39 hours ago        398MB
<none>                                 <none>              ad0df2c81cf1        39 hours ago        397MB
<none>                                 <none>              97a33d1a133d        39 hours ago        395MB
mcr.microsoft.com/dotnet/core/aspnet   2.2                 36e5a01ef28f        3 days ago          395MB
nanoserver/iis                         latest              7eac2eab1a5c        9 months ago        1.29GB
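Notice the remaining rows where REPOSITORY and TAG show as <none>: these are dangling images. In practice you would remove them with docker image prune, or list their IDs with docker images -f dangling=true -q. As a self-contained sketch of what that filter is doing, here is how you could pull the dangling image IDs out of a saved listing with awk (the sample listing below is abridged from the output above):

```shell
# Dangling images show up with <none> as the repository name.
# In practice: docker images -f dangling=true -q   (or: docker image prune)
# Here we extract their IDs from a saved sample listing with awk.
ids=$(awk '$1 == "<none>" {print $3}' <<'EOF'
REPOSITORY                             TAG                 IMAGE ID            CREATED             SIZE
demo/webappcore                        2.2.0               9729270fe1ac        38 hours ago        401MB
<none>                                 <none>              807f7b4b42c1        39 hours ago        398MB
<none>                                 <none>              659fbabfde96        39 hours ago        398MB
EOF
)
echo "$ids"
```

Each ID printed could then be passed to docker rmi, which is exactly what docker image prune automates.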

Managing Docker images on remote container registry

In this tutorial, we will use Docker Hub, which gives you one free private repository and unlimited public repositories. I will demonstrate how to use a private container registry that is only accessible to you or your organization.

Step 1: First, register for a Docker Hub account. Once you have successfully registered and signed in, you are ready to create your first private repository for Docker images.

Step 2: In the example below, I will show you how to push the demo/webappcore image to a remote repository. Execute the command below to push a Docker image to Docker Hub.

Log in to Docker Hub.

docker login --username tekspacedemo

It will then prompt you for your password. Go ahead and type it.

NOTE: Replace tekspacedemo with your registered Docker ID.

Before we push the local image to the remote registry, we need to tag it with the remote path so that it can be pushed to Docker Hub.

docker tag demo/webappcore:2.2.0 tekspacedemo/demo:2.2.0

The above command tags the local Docker image demo/webappcore:2.2.0 with the matching remote repository path, which in my case is tekspacedemo/demo:2.2.0.
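The remote path follows the convention [docker-id]/[repository]:[tag]. As a small sketch using the names from this tutorial, the remote tag can be composed like this:

```shell
# Compose the remote tag <docker-id>/<repo>:<tag> used by docker tag and docker push.
DOCKER_ID="tekspacedemo"   # your registered Docker ID
REPO="demo"                # the remote repository name
TAG="2.2.0"
REMOTE_TAG="${DOCKER_ID}/${REPO}:${TAG}"
echo "$REMOTE_TAG"
# It would then be used as: docker tag demo/webappcore:2.2.0 "$REMOTE_TAG"
```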

Now we will push the local image to the tekspacedemo repository.

docker push tekspacedemo/demo:2.2.0

NOTE: For demo purposes, I created a remote repository named demo. You can choose any repository name.

After your image is pushed to the remote container registry, your output should look like the one below:

PS C:\> docker push tekspacedemo/demo:2.2.0
The push refers to repository [docker.io/tekspacedemo/demo]
a034775f3ab9: Pushed
7e886042ad70: Pushed
6c8276f92903: Pushed
596811bf044f: Pushed
84ff941997ac: Pushed
673fa658bebd: Pushed
75932c99c074: Pushed
3d57d631c3a7: Pushed
63077ec902e9: Pushed
f762a63f047a: Skipped foreign layer
2.2.0: digest: sha256:161bdf178437534bda10551406944e1292b71fa40075f00a29851f6fd7d7d020 size: 2505

That’s it! You have successfully pushed your first local image to a remote container repository. Thank you for following this tutorial. Please comment below and feel free to share your feedback.

The post Manage Docker Images locally and in remote Container Registry appeared first on TEKSpace Blog.

Container 101 https://blog.tekspace.io/container-101/ Sat, 30 Mar 2019 01:55:09 +0000
What is a Container?

A container is a stripped-down, lightweight image of an operating system, bundled with the application package and other dependencies, that runs as an isolated process. A container shares core components such as the kernel, network drivers, system tools, libraries, and many other settings with the host operating system. The purpose of a container is to have a standard image that is packaged, published, and deployed to a cluster such as Docker Swarm or Kubernetes. To learn more about the history of containers, please visit the official Docker website.

What is a Docker image?

A Docker image contains a stripped-down operating system image, your application code, and the configuration required to run in any container cluster such as Kubernetes or Docker Swarm. An image is built from a Dockerfile, a file containing the set of commands that assembles the image with all its dependencies. To learn more about how to write a Dockerfile, please visit this website.
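As an illustrative sketch (not part of the original post), a minimal Dockerfile for an ASP.NET Core application like the demo/webappcore image used earlier in this archive might look like the following; the publish folder and DLL name are assumptions, while the base image matches the mcr.microsoft.com/dotnet/core/aspnet:2.2 image shown above:

```dockerfile
# Hypothetical sketch: base image is from this tutorial, app name is assumed.
FROM mcr.microsoft.com/dotnet/core/aspnet:2.2
WORKDIR /app
COPY ./publish .
ENTRYPOINT ["dotnet", "webappcore.dll"]
```

Building it with docker build -t demo/webappcore:2.2.0 . would produce an image like the one tagged and pushed in the previous post.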

Follow the guides below to get started with containers.

Beginner's guide

Follow the guides in order to get the most out of them.

Intermediate guide

Advanced guide

Thank you for following this tutorial. Please comment and subscribe to receive new tutorials in email.

The post Container 101 appeared first on TEKSpace Blog.
