TEKSpace Blog - Docker Archives - Tech tutorials for Linux, Kubernetes, PowerShell, and Azure

How to create docker registry credentials using kubectl (Tue, 19 Mar 2024)

Updating a Docker registry secret (often named regcred in Kubernetes environments) with new credentials can be essential for workflows that need access to private registries for pulling images. This process involves creating a new secret with the updated credentials and then patching or updating the deployments or pods that use this secret.

Here’s a step-by-step guide to do it:

Step 1: Create a New Secret with Updated Credentials

  1. Log in to Docker Registry: Before updating the secret, ensure you’re logged into the Docker registry from your command line interface so that Kubernetes can access it.
  2. Create or Update the Secret: Use the kubectl create secret command to create a new secret or update an existing one with your Docker credentials. To create a new secret (or replace an existing one), run:
kubectl create secret docker-registry regcred \
  --docker-server=<YOUR_REGISTRY_SERVER> \
  --docker-username=<YOUR_USERNAME> \
  --docker-password=<YOUR_PASSWORD> \
  --docker-email=<YOUR_EMAIL> \
  --namespace=<NAMESPACE> \
  --dry-run=client -o yaml | kubectl apply -f -

Replace <YOUR_REGISTRY_SERVER>, <YOUR_USERNAME>, <YOUR_PASSWORD>, <YOUR_EMAIL>, and <NAMESPACE> with your Docker registry details and the appropriate namespace. The --dry-run=client -o yaml | kubectl apply -f - part generates the secret definition and applies it to your cluster, effectively updating the secret if it already exists.
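For reference, the command above generates a Secret manifest shaped roughly like the sketch below; the .dockerconfigjson value is your registry credentials encoded as base64 (shown as a placeholder here):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: regcred
  namespace: <NAMESPACE>
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: <BASE64_ENCODED_DOCKER_CONFIG>
```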

Step 2: Update Deployments or Pods to Use the New Secret

If you’ve created a new secret with a different name, you’ll need to update your deployment or pod specifications to reference the new secret name. This step is unnecessary if you’ve updated an existing secret.

  1. Edit Deployment or Pod Specification: Locate your deployment or pod definition files (YAML files) and update the imagePullSecrets section to reference the new secret name if it has changed.
  2. Apply the Changes: Use kubectl apply -f <deployment-or-pod-file>.yaml to apply the changes to your cluster.
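As an illustration, the imagePullSecrets reference inside a Deployment's pod template looks like the following sketch (the container name and image are placeholders, not values from this tutorial):

```yaml
spec:
  template:
    spec:
      imagePullSecrets:
        - name: regcred        # must match the secret name created above
      containers:
        - name: app
          image: <YOUR_REGISTRY_SERVER>/<IMAGE>:<TAG>
```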

Step 3: Verify the Update

Ensure that your deployments or pods can successfully pull images using the updated credentials.

  1. Check Pod Status: Use kubectl get pods to check the status of your pods. Ensure they are running and not stuck in an ImagePullBackOff or similar error status due to authentication issues.
  2. Check Logs: For further verification, check the logs of your pods or deployments to ensure there are no errors related to pulling images from the Docker registry. You can use kubectl logs <pod-name> to view logs.

This method ensures that your Kubernetes deployments can continue to pull images from private registries without interruption, using the updated credentials.

Learn about CP command in Linux (Tue, 15 Aug 2023)

The cp command in Linux is used to copy files and directories from one location to another. It stands for “copy” and is an essential tool for managing and duplicating files. The basic syntax of the cp command is as follows:

cp [options] source destination

Here, source represents the file or directory you want to copy, and destination is where you want to copy it to. Here’s a detailed explanation of the command and its options:
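The basic form can be tried safely before touching real files; this quick sketch uses a throwaway scratch directory created with mktemp (all paths are hypothetical):

```shell
# Create a scratch area, a source file, and a destination directory.
syn_dir=$(mktemp -d)
echo "hello" > "$syn_dir/file.txt"
mkdir "$syn_dir/backup"

# Basic form: cp [options] source destination
cp "$syn_dir/file.txt" "$syn_dir/backup/"

# The destination directory now holds an identical copy.
cat "$syn_dir/backup/file.txt"   # prints: hello
```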

Options:

  • -r or --recursive: This option is used when you want to copy directories and their contents recursively. Without this option, the cp command will not copy directories.
  • -i or --interactive: This option prompts you before overwriting an existing destination file. It’s useful to prevent accidental overwrites.
  • -u or --update: This option copies only when the source file is newer than the destination file or when the destination file is missing.
  • -v or --verbose: This option displays detailed information about the files being copied, showing each file as it’s copied.
  • -p or --preserve: This option preserves the original file attributes such as permissions, timestamps, and ownership when copying.
  • -d or --no-dereference: This option is used to avoid dereferencing symbolic links; it copies symbolic links themselves instead of their targets.
  • --parents: This option preserves the directory structure when copying files, creating any necessary parent directories in the destination.
  • --backup: This option makes a backup copy of each existing destination file before overwriting it.
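To see how -u behaves, the following sketch (scratch paths, GNU coreutils assumed) backdates a source file so the copy is skipped because the destination is newer:

```shell
upd_dir=$(mktemp -d)
echo "old contents" > "$upd_dir/src.txt"
echo "new contents" > "$upd_dir/dst.txt"

# Backdate the source so the destination's timestamp is newer.
touch -d "2001-01-01" "$upd_dir/src.txt"

# -u copies only when the source is newer (or the destination is missing),
# so this copy is skipped and the destination keeps its contents.
cp -u "$upd_dir/src.txt" "$upd_dir/dst.txt"
cat "$upd_dir/dst.txt"   # prints: new contents
```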

Source and Destination:

  • source: This is the path to the file or directory you want to copy. It can be either an absolute path or a relative path.
  • destination: This is the path to the location where you want to copy the source. It can also be either an absolute path or a relative path. If the destination is a directory, the source file or directory will be copied into it.

Examples:

Copy a file to a different location:

cp file.txt /path/to/destination/

Copy a directory and its contents recursively:

cp -r directory/ /path/to/destination/

Copy multiple files to a directory:

cp file1.txt file2.txt /path/to/destination/

Copy with preserving attributes and prompting for overwrite:

cp -i -p file.txt /path/to/destination/

Copy a file, preserving directory structure:

cp --parents file.txt /path/to/destination/

Copy and create backup copies of existing files:

cp --backup=numbered file.txt /path/to/destination/

Remember that improper use of the cp command can lead to data loss, so be cautious when overwriting files. Options such as -i (prompt before overwrite) and --backup help guard against accidental overwrites. Always double-check your commands before pressing Enter.
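As a concrete safety net, --backup=numbered keeps the pre-overwrite copy around; this sketch (scratch directory, GNU cp assumed) shows the backup file it creates:

```shell
bak_dir=$(mktemp -d)
echo "v1" > "$bak_dir/file.txt"       # existing destination file
echo "v2" > "$bak_dir/incoming.txt"   # new file that will overwrite it

# The old file.txt is saved as file.txt.~1~ before being overwritten.
cp --backup=numbered "$bak_dir/incoming.txt" "$bak_dir/file.txt"

cat "$bak_dir/file.txt"       # prints: v2
cat "$bak_dir/file.txt.~1~"   # prints: v1 (the pre-overwrite copy)
```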

Additional Examples:

Copy a File to a Different Location:

cp file.txt /path/to/destination/

Copy Multiple Files to a Directory:

cp file1.txt file2.txt /path/to/destination/

Copy Files and Preserve Attributes:

cp -p file.txt /path/to/destination/

Copy a File and Prompt Before Overwriting:

cp -i file.txt /path/to/destination/

Copy Only Newer Files:

cp -u newer.txt /path/to/destination/

Copy Files Verbosely (Display Detailed Information):

cp -v file1.txt file2.txt /path/to/destination/

Copy Files and Create Backup Copies:

cp --backup=numbered file.txt /path/to/destination/

Copy Symbolic Links (Not Dereferencing):

cp -d symlink.txt /path/to/destination/

Copy a File and Preserve Parent Directories:

cp --parents dir/file.txt /path/to/destination/

Copy a Directory and Its Contents to a New Directory:

cp -r directory/ new_directory/

Copy Files Using Wildcards:

cp *.txt /path/to/destination/

Copy Hidden Files:

cp -r source_directory/. destination_directory/
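The trailing /. is what makes the difference here: it copies the directory's contents, dotfiles included. A quick sketch with scratch directories:

```shell
hid_dir=$(mktemp -d)
mkdir "$hid_dir/source_directory" "$hid_dir/destination_directory"
echo "token" > "$hid_dir/source_directory/.hidden"

# source_directory/. copies the contents (including hidden files),
# not the directory itself.
cp -r "$hid_dir/source_directory/." "$hid_dir/destination_directory/"

ls -A "$hid_dir/destination_directory"   # shows: .hidden
```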

Copy Files Between Remote Servers (using SCP):

scp source_file remote_username@remote_host:/path/to/destination/

Copy Files Between Remote Servers (using SSH and tar):

tar cf - source_directory/ | ssh remote_host "cd /path/to/destination/ && tar xf -"
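The same tar pipe can be exercised locally by replacing the ssh hop with a subshell, which is a handy way to test it before pointing it at a remote host (paths below are scratch directories):

```shell
pipe_dir=$(mktemp -d)
mkdir -p "$pipe_dir/src/nested" "$pipe_dir/dst"
echo "payload" > "$pipe_dir/src/nested/a.txt"

# Archive to stdout, pipe into a subshell that extracts in the destination;
# with ssh, the subshell would run on the remote host instead.
tar cf - -C "$pipe_dir" src | (cd "$pipe_dir/dst" && tar xf -)

cat "$pipe_dir/dst/src/nested/a.txt"   # prints: payload
```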

Copy Files Using Absolute Paths:

cp /absolute/path/to/source/file.txt /absolute/path/to/destination/

Copy Files with Progress Indicator (using rsync):

rsync -av --progress source/ /path/to/destination/

These examples should provide you with a variety of scenarios where the cp command can be used to copy files and directories in Linux. Remember to adjust the paths and options according to your specific needs.

Managing MySQL database using PHPMyAdmin in Kubernetes (Tue, 15 Sep 2020)

Make sure you are familiar with connecting to a Kubernetes cluster, have the Nginx ingress controller configured with a certificate manager, and have a MySQL database pod deployed before you proceed. Follow this guide if you do not have MySQL deployed, and this guide to set up Nginx ingress and cert manager.

PhpMyAdmin is a popular open-source tool for managing MySQL database servers. Learn how to create a deployment and expose it as a service to access PhpMyAdmin from the internet using the Nginx ingress controller.

  1. Create a deployment file called phpmyadmin-deployment.yaml and paste the following values:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: phpmyadmin-deployment
  labels:
    app: phpmyadmin
spec:
  replicas: 1
  selector:
    matchLabels:
      app: phpmyadmin
  template:
    metadata:
      labels:
        app: phpmyadmin
    spec:
      containers:
        - name: phpmyadmin
          image: phpmyadmin/phpmyadmin
          ports:
            - containerPort: 80
          env:
            - name: PMA_HOST
              value: mysql-service
            - name: PMA_PORT
              value: "3306"
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-secrets
                  key: ROOT_PASSWORD

NOTE: The ROOT_PASSWORD value is consumed from Kubernetes secrets. If you want to learn more about Kubernetes secrets, follow this guide.
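If you do not already have the mysql-secrets secret from the MySQL guide, a minimal sketch looks like this; the name and key must match the secretKeyRef in the deployment, and the value must be base64-encoded (shown as a placeholder):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: mysql-secrets
type: Opaque
data:
  ROOT_PASSWORD: <BASE64_ENCODED_PASSWORD>
```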

  2. Execute the below command to create a new deployment:
kubectl apply -f phpmyadmin-deployment.yaml

Output:

deployment.apps/phpmyadmin-deployment created

Exposing PhpMyAdmin via Services

  1. Create new file called phpmyadmin-service.yaml and paste the following values:
apiVersion: v1
kind: Service
metadata:
  name: phpmyadmin-service
spec:
  type: NodePort
  selector:
    app: phpmyadmin
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  2. Execute the below command to create the service:
kubectl apply -f phpmyadmin-service.yaml

Output:

service/phpmyadmin-service created

Once you are done with the above configurations, it’s time to expose the PhpMyAdmin service via the internet.

I use DigitalOcean-managed Kubernetes. I manage my own DNS, and DigitalOcean automatically creates a load balancer for my Nginx ingress controller. If you want to follow along, this guide will be very helpful.

Nginx Ingress configuration

  1. Create a new YAML file called phpmyadmin-ingress.yaml and paste the following values:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: phpmyadmin-ingress
  annotations:
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
spec:
  tls:
  - hosts:
    - phpmyadmin.example.com
    secretName: phpmyadmin-tls
  rules:
  - host: phpmyadmin.example.com
    http:
      paths:
      - backend:
          serviceName: phpmyadmin-service
          servicePort: 80
  2. Apply the changes:
kubectl apply -f phpmyadmin-ingress.yaml

Setup Windows Node with Kubernetes 1.14 (Thu, 04 Apr 2019)

Kubernetes 1.14 now provides out-of-the-box support for Windows worker nodes, allowing Windows containers to run within a Kubernetes cluster. This feature was in preview for a long time, and it is now production ready. This is a wonderful opportunity for the major cloud providers to adopt Kubernetes 1.14 in their offerings, so their customers can start migrating applications from Windows virtualization platforms to Windows containers sooner.

NOTE: Azure, Google, IBM, and AWS now offer Kubernetes services with free cluster management, so you do not have to worry about that any longer. Windows containers are not yet offered on these services but should be available soon. Check their official websites for more information on when Windows containers will be supported.

In this tutorial, I will go over how to set up a Windows node and join it to an existing Kubernetes 1.14 cluster. If you do not have a Kubernetes cluster and would like to learn how to set one up, check out the prerequisites below.

Prerequisites

NOTE: Before moving forward, ensure you have successfully set up a Kubernetes 1.14 cluster on your Linux machine. If not, check out the prerequisites.

Enable mixed OS scheduling

The below guide was referenced from Microsoft documentation.

Login to your master node and execute the below commands.

cd ~ && mkdir -p kube/yaml && cd kube/yaml

Confirm kube-proxy DaemonSet is set to Rolling Update:

kubectl get ds/kube-proxy -o go-template='{{.spec.updateStrategy.type}}{{"\n"}}' --namespace=kube-system

Download node-selector-patch from GitHub:

wget https://raw.githubusercontent.com/Microsoft/SDN/master/Kubernetes/flannel/l2bridge/manifests/node-selector-patch.yml

Patch your kube-proxy

kubectl patch ds/kube-proxy --patch "$(cat node-selector-patch.yml)" -n=kube-system

Check the status of kube-proxy

kubectl get ds -n kube-system
[rahil@k8s-master-node yaml]$ kubectl get ds -n kube-system
NAME                      DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR                                               AGE
kube-flannel-ds-amd64     2         2         2       0            2           beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux   106m
kube-flannel-ds-arm       0         0         0       0            0           beta.kubernetes.io/arch=arm                                 106m
kube-flannel-ds-arm64     0         0         0       0            0           beta.kubernetes.io/arch=arm64                               106m
kube-flannel-ds-ppc64le   0         0         0       0            0           beta.kubernetes.io/arch=ppc64le                             106m
kube-flannel-ds-s390x     0         0         0       0            0           beta.kubernetes.io/arch=s390x                               106m
kube-proxy                2         2         2       2            2           beta.kubernetes.io/os=linux                                 21h

Your kube-proxy DaemonSet should now show the beta.kubernetes.io/os=linux node selector applied.

Setting up flannel networking

Below guide was referenced from Microsoft documentation.

Since I already have kube-flannel set up from the previous tutorial, I will go ahead and edit it, following the guide below, and update the values accordingly.

On your master node, edit kube-flannel and apply changes that are needed to configure windows worker node.

kubectl edit cm -n kube-system kube-flannel-cfg

If you already know how to use the vi editor, you should be able to navigate within edit mode. Go ahead and find the below block of code and update it as shown:

cni-conf.json: |
    {
      "name": "vxlan0",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }

And, your net-conf.json should look like this shown below:

net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan",
        "VNI" : 4096,
        "Port": 4789
      }
    }

Once you have updated your kube-flannel configmap, go ahead and save it to apply those changes.

Target your kube-flannel to only Linux by executing the below command:

kubectl patch ds/kube-flannel-ds-amd64 --patch "$(cat node-selector-patch.yml)" -n=kube-system

Install Docker on your Windows node

Install-Module -Name DockerMsftProvider -Repository PSGallery -Force
Install-Package -Name Docker -ProviderName DockerMsftProvider
Restart-Computer -Force

Download and stage Kubernetes packages

Step 1: Open PowerShell as an administrator and execute the below command to create a directory called k.

mkdir c:\k; cd c:\k

Step 2: Download Kubernetes 1.14.0 from github and download kubernetes-node-windows-amd64.tar.gz.

Step 3: Extract the package to c:\k path on your Windows node.

NOTE: You may have to use a third-party tool to extract .tar and .gz files. I recommend the portable 7-Zip from here, so you don’t have to install anything.

Find kubeadm, kubectl, kubelet, and kube-proxy, and copy them to the Windows node under c:\k\.

Copy Kubernetes certificate file from master node

On your master node, copy the file ~/.kube/config from your user home directory, and place it at c:\k\config on the Windows node.

You can use xcopy or WinSCP to download the config file from the master node to the Windows node.

Add paths to environment variables

Open PowerShell as an administrator and execute the following commands:

$env:Path += ";C:\k"
$env:KUBECONFIG = "C:\k\config"
[Environment]::SetEnvironmentVariable("Path", $env:Path + ";C:\k", [EnvironmentVariableTarget]::Machine)
[Environment]::SetEnvironmentVariable("KUBECONFIG", "C:\k\config", [EnvironmentVariableTarget]::User)

Reboot your system before moving forward.

Joining Windows Server node to Master node

To join the Flannel network, execute the below command to download the script.

Step 1: Open PowerShell as Administrator:

[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12
wget https://raw.githubusercontent.com/Microsoft/SDN/master/Kubernetes/flannel/start.ps1 -OutFile c:\k\start.ps1

Step 2: Navigate to c:\k\

cd c:\k\

Step 3: Execute the below command to join Flannel cluster

.\start.ps1 -ManagementIP 192.168.0.123 -NetworkMode overlay -InterfaceName Ethernet -Verbose

Replace ManagementIP with your Windows node IP address. You can execute ipconfig to get these details. To understand the above command, please refer to this guide from Microsoft.

Step 4: Verify that the Windows node has joined the cluster:

PS C:\k> .\kubectl.exe get nodes
NAME                STATUS     ROLES    AGE   VERSION
k8s-master-node     Ready      master   35h   v1.14.0
k8s-worker-node-1   Ready      <none>   35h   v1.14.0
win-uq3cdgb5r7g     Ready      <none>   11m   v1.14.0

Testing windows containers

If everything went well and your Windows node joined the cluster successfully, you can deploy a Windows container to test that everything is working as expected. Execute the below commands to deploy a Windows container.

Download YAML file:

wget https://raw.githubusercontent.com/Microsoft/SDN/master/Kubernetes/flannel/l2bridge/manifests/simpleweb.yml -O win-webserver.yaml

Create new deployment:

kubectl apply -f .\win-webserver.yaml

Check the status of container:

kubectl get pods -o wide -w

Output

PS C:\k> .\kubectl.exe get pods -o wide -w
NAME                            READY   STATUS              RESTARTS   AGE   IP       NODE              NOMINATED NODE   READINESS GATES
win-webserver-cfcdfb59b-fkqxg   0/1     ContainerCreating   0          40s   <none>   win-uq3cdgb5r7g   <none>           <none>
win-webserver-cfcdfb59b-jbm7s   0/1     ContainerCreating   0          40s   <none>   win-uq3cdgb5r7g   <none>           <none>

Troubleshooting

If you are receiving something like the error below, your kubeletwin/pause image wasn’t built correctly. After spending several hours, I dug through everything the start.ps1 script does and found that when the Docker image was built, it didn’t use the correct version of the container base image.

Issue

Error response from daemon: CreateComputeSystem 229d5b8cf2ca94c698153f3ffed826f4ff69bff98d12137529333a1f947423e2: The container operating system does not match the host operating system.
(extra info: {"SystemType":"Container","Name":"229d5b8cf2ca94c698153f3ffed826f4ff69bff98d12137529333a1f947423e2","Owner":"docker","VolumePath":"\\\\?\\Volume{d03ade10-14ef-4486-aa63-406f2a7e5048}","IgnoreFlushesDuringBoot":true,"LayerFolderPath":"C:\\ProgramData\\docker\\windowsfilter\\229d5b8cf2ca94c698153f3ffed826f4ff69bff98d12137529333a1f947423e2","Layers":[{"ID":"7cf9a822-5cb5-5380-98c3-99885c3639f8","Path":"C:\\ProgramData\\docker\\windowsfilter\\83e740543f7683c25c7880388dbe2885f32250e927ab0f2119efae9f68da5178"},{"ID":"600d6d6b-8810-5bf3-ad01-06d0ba1f97a4","Path":"C:\\ProgramData\\docker\\windowsfilter\\529e04c75d56f948819cd62e4886d865d8faac7470be295e7116ddf47ca15251"},{"ID":"f185c0c0-eccf-5ff9-b9fb-2939562b75c3","Path":"C:\\ProgramData\\docker\\windowsfilter\\7640b81c6fff930a838e97c6c793b4fa9360b6505718aa84573999aa41223e80"}],"HostName":"229d5b8cf2ca","HvPartition":false,"EndpointList":["CE799786-A781-41ED-8B1F-C91DFEDB75A9"],"AllowUnqualifiedDNSQuery":true}).

Solution

  1. Go to c:\k\ and open Dockerfile.
  2. Update the first line to FROM mcr.microsoft.com/windows/nanoserver:1809 and save the file.
  3. Execute the below command as administrator from a PowerShell console to build the image.
cd c:\k\; docker build -t kubeletwin/pause .
  4. Open win-webserver.yaml and update the image tag to image: mcr.microsoft.com/windows/servercore:1809.
  5. Delete and re-apply your deployment by executing the below commands.
kubectl delete -f .\win-webserver.yaml
kubectl apply -f .\win-webserver.yaml

Now all your pods should be in the Running state.

PS C:\k> kubectl get pods
NAME                            READY   STATUS    RESTARTS   AGE
win-webserver-cfcdfb59b-gk6g9   1/1     Running   0          6m44s
win-webserver-cfcdfb59b-q4zxz   1/1     Running   0          6m44s

Manage Docker Images locally and in remote Container Registry (Sun, 31 Mar 2019)

Managing Docker images is very important, much like managing application source code in a version-controlled repository such as Git. Docker provides similar capabilities: images can be managed locally on your development machine and in a remote container registry such as Docker Hub.

In this tutorial, I will demonstrate a set of commands on how to manage Docker images both locally and remotely.

Prerequisite

Managing images locally

1. List Docker Images

To view Docker images locally, run docker images; it will list all the images in the console. Execute the below command from an elevated PowerShell or command-line tool to see the output.

docker images

Output

REPOSITORY                             TAG                 IMAGE ID            CREATED             SIZE
demo/webappcore                        2.2.0               9729270fe1ac        38 hours ago        401MB
<none>                                 <none>              2822fcdec81d        38 hours ago        403MB
<none>                                 <none>              807f7b4b42c1        38 hours ago        398MB
<none>                                 <none>              659fbabfde96        38 hours ago        398MB
<none>                                 <none>              ad0df2c81cf1        38 hours ago        397MB
<none>                                 <none>              97a33d1a133d        38 hours ago        395MB
mcr.microsoft.com/dotnet/core/aspnet   2.2                 36e5a01ef28f        3 days ago          395MB
hello-world                            nanoserver          7dddd19ddc59        2 months ago        333MB
nanoserver/iis                         latest              7eac2eab1a5c        9 months ago        1.29GB

NOTE: I am using Windows 10 to demonstrate Docker image management.

Next, we will limit the list and only print images for a given repository name. The command is docker images [repository name].

docker images hello-world

Output

REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
hello-world         nanoserver          7dddd19ddc59        2 months ago        333MB

You can also add an additional filter to the above command by providing a tag. The command is docker images [repository name]:[tag].

docker images hello-world:nanoserver

2. Deleting Docker images

Sometimes you may want to delete an image that you no longer need. Deleting an image permanently removes it from your local machine. The command is docker rmi [image name]:[tag].

NOTE: There are many ways to delete an image. You can also reference Docker official website.

docker rmi hello-world:nanoserver

NOTE: If you see an error like this “Error response from daemon: conflict: unable to remove repository reference "hello-world:nanoserver" (must force) - container 3db17e64fadb is using its referenced image 7dddd19ddc59“. It means there is a container running locally on your machine, and it has been referenced.

To remove it forcefully, you can append -f to the above command. This removes the image even when a stopped container still references it.

docker rmi hello-world:nanoserver -f

You should see almost similar output shown below.

PS C:\> docker rmi hello-world:nanoserver -f
Untagged: hello-world:nanoserver
Untagged: hello-world@sha256:ea56d430e69850b80cd4969b2cbb891db83890c7bb79f29ae81f3d0b47a58dd9
Deleted: sha256:7dddd19ddc595d0cbdfb0ae0a61e1a4dcf8f35eb4801957a116ff460378850da

Now, let’s go ahead and execute docker images. You will see your image was successfully deleted.

PS C:\> docker images
REPOSITORY                             TAG                 IMAGE ID            CREATED             SIZE
demo/webappcore                        2.2.0               9729270fe1ac        38 hours ago        401MB
<none>                                 <none>              2822fcdec81d        38 hours ago        403MB
<none>                                 <none>              807f7b4b42c1        39 hours ago        398MB
<none>                                 <none>              659fbabfde96        39 hours ago        398MB
<none>                                 <none>              ad0df2c81cf1        39 hours ago        397MB
<none>                                 <none>              97a33d1a133d        39 hours ago        395MB
mcr.microsoft.com/dotnet/core/aspnet   2.2                 36e5a01ef28f        3 days ago          395MB
nanoserver/iis                         latest              7eac2eab1a5c        9 months ago        1.29GB

Managing Docker images on remote container registry

In this tutorial, we will use Docker Hub, which gives you one free private repository and unlimited public repositories. I will demonstrate how to use a private container registry that is accessible only to you or your organization.

Step 1: First, we need to register for access to Docker Hub. Once you have successfully registered and signed in, we are ready to create our first private repository for Docker images.

Step 2: In the below example, I will show you how to push demo/webappcore image to remote repository. Execute the below command to push Docker image to Docker hub.

Login to Docker hub.

docker login --username tekspacedemo

It will then prompt you for your password.

NOTE: Replace tekspacedemo with your registered docker id.

Before we push the local image to the remote registry, we need to tag it with the remote path so that we can push it to Docker Hub.

docker tag demo/webappcore:2.2.0 tekspacedemo/demo:2.2.0

The above command will tag local Docker image path demo/webappcore:2.2.0 to the path that matches for remote repository path, which in my case is tekspacedemo/demo:2.2.0.

Now we will push a local image from tekspacedemo repository.

docker push tekspacedemo/demo:2.2.0

NOTE: For demo purposes, I created a remote repository named demo. You can use any name for your repository.

After your image is pushed to remote container registry. Your output should look like below

PS C:\> docker push tekspacedemo/demo:2.2.0
The push refers to repository [docker.io/tekspacedemo/demo]
a034775f3ab9: Pushed
7e886042ad70: Pushed
6c8276f92903: Pushed
596811bf044f: Pushed
84ff941997ac: Pushed
673fa658bebd: Pushed
75932c99c074: Pushed
3d57d631c3a7: Pushed
63077ec902e9: Pushed
f762a63f047a: Skipped foreign layer
2.2.0: digest: sha256:161bdf178437534bda10551406944e1292b71fa40075f00a29851f6fd7d7d020 size: 2505

That’s it! You have successfully pushed your first local image to a remote container repository. Thank you for following this tutorial. Please comment below and feel free to share your feedback.

The post Manage Docker Images locally and in remote Container Registry appeared first on TEKSpace Blog.

]]>
https://blog.tekspace.io/manage-docker-images-locally-and-in-remote-container-registry/feed/ 0
Container 101 https://blog.tekspace.io/container-101/ https://blog.tekspace.io/container-101/#respond Sat, 30 Mar 2019 01:55:09 +0000 https://blog.tekspace.io/index.php/2019/03/30/container-101/ What is a Container? A Container is a stripped-down lightweight image of an operating system, which is then bundled with the application package and other dependencies to run a Container in an isolated process. A container shares the core components such as kernel, network drivers, system tools, libraries, and many other settings from the host

The post Container 101 appeared first on TEKSpace Blog.

]]>
What is a Container?

A Container is a stripped-down, lightweight image of an operating system, bundled with an application package and other dependencies, that runs as an isolated process. A container shares core components such as the kernel, network drivers, system tools, libraries, and many other settings with the host operating system. The purpose of a Container is to have a standard image that is packaged, published, and deployed to a cluster such as Docker Swarm or Kubernetes. To learn more about container history, please visit the official Docker website.

What is a Docker image?

A Docker image contains a stripped-down operating system image, your application code, and the configuration required to run in any Container cluster such as Kubernetes or Docker Swarm. An image is built from a Dockerfile, a set of instructions that assembles the image with all its dependencies. To learn more about how to write a Dockerfile, please visit this website.
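To make this concrete, here is a minimal illustrative Dockerfile; the base image tag, paths, and start script are hypothetical examples, not from any specific project:

```dockerfile
# Minimal illustrative Dockerfile; the base image, paths, and start
# script are hypothetical examples.

# Base operating system image pulled from Docker Hub.
FROM ubuntu:18.04

# Copy your application code into the image.
COPY ./app /app

# Set the working directory for subsequent instructions.
WORKDIR /app

# Command to run when a container starts from this image.
ENTRYPOINT ["./start.sh"]
```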

Follow the below guides to get started with Containers and to get the most out of them.

Beginner's guide

Intermediate guide

Advanced guide

Thank you for following this tutorial. Please comment and subscribe to receive new tutorials in email.

The post Container 101 appeared first on TEKSpace Blog.

]]>
https://blog.tekspace.io/container-101/feed/ 0
Building your first docker image for Windows https://blog.tekspace.io/how-to-create-docker-image-for-windows-containers/ https://blog.tekspace.io/how-to-create-docker-image-for-windows-containers/#respond Sat, 30 Mar 2019 01:37:45 +0000 https://blog.tekspace.io/index.php/2019/03/30/how-to-create-docker-image-for-windows-containers/ In this tutorial, I will demonstrate how to host an ASP.NET Core 2.2 application on Windows Containers by using a Docker image. A Docker image will be packaged with an ASP.NET Core application that will be run when a container is spun up.Before we get started with creating a Docker image. Let’s make sure we

The post Building your first docker image for Windows appeared first on TEKSpace Blog.

]]>
In this tutorial, I will demonstrate how to host an ASP.NET Core 2.2 application on Windows Containers by using a Docker image. A Docker image will be packaged with an ASP.NET Core application that will be run when a container is spun up.
Before we get started with creating a Docker image, let’s make sure we have the prerequisites in place.

Prerequisites

Once you have the prerequisites, we will use a publicly available ASP.NET Core base image from Microsoft. Microsoft maintains its Docker images on Docker Hub. Docker Hub is a container registry for managing your Docker images, either by exposing an image publicly or keeping it private. Private image repositories cost money. Visit the Docker Hub website to learn more about image repository management.

Building your first Docker Image

Step 1: Open the PowerShell console as an administrator

Step 2: Let’s get started by pulling the ASP.NET Core 2.2 Docker image from Docker Hub by executing the below command.

docker pull mcr.microsoft.com/dotnet/core/aspnet:2.2

Your output should look similar to what is shown below:

Step 3: Create a folder with whatever name you prefer. I will use c:\docker for demonstration purposes.

mkdir c:\docker

Step 4: Download the ASP.NET Core application package from this URL.

Invoke-WebRequest -UseBasicParsing -OutFile c:\docker\WebAppCore2.2.zip https://github.com/rahilmaknojia/WebAppCore2.2/archive/master.zip

The above command downloads application code that is already built and packaged, to save the time of building the package ourselves.

Step 5: Extract WebAppCore2.2.zip by using the native PowerShell 5.0 command. If you do not have PowerShell 5.0 or above, you will have to extract the package manually.

Expand-Archive c:\docker\WebAppCore2.2.zip -DestinationPath c:\docker\ -Force 

Step 6: Now let’s create a Dockerfile in the c:\docker folder.

New-Item -Path C:\docker\Dockerfile -ItemType File

Step 7: Go ahead and open the C:\docker folder in Visual Studio Code.

Step 8: Now we will open Dockerfile by double-clicking on the file in Visual Studio Code to start writing the required steps to build an image.

Copy and paste the code below into Dockerfile.

# Pull base image from Docker hub 
FROM mcr.microsoft.com/dotnet/core/aspnet:2.2

# Create working directory
RUN mkdir C:\\app

# Set a working directory
WORKDIR c:\\app

# Copy package from your machine to the image. Also known as staging a package
COPY WebAppCore2.2-master/Package/* c:/app/

# Run the application
ENTRYPOINT ["dotnet", "WebAppCore2.2.dll"]

The Dockerfile tells Docker to pull an ASP.NET Core base image from Docker Hub. Then we run a command to create a directory called app at the path c:\app. We also set c:\app as the working directory, so that we can access the binaries directly when the container is spun up. Next, we copy all the binaries from c:\docker\WebAppCore2.2-master\Package\ to the destination path c:\app inside the container. Once the package is staged in the container, we tell it to run the application by executing dotnet WebAppCore2.2.dll, so that the app is accessible from outside the container. To learn more about Dockerfiles for Windows, check out this Microsoft documentation.

Now that you have the required steps to build an image, let’s go ahead with the below steps.

Step 9: Navigate to Dockerfile working directory from PowerShell console. If you are already in that path, you can ignore it.

cd c:\docker

Step 10: Execute the below command to build a container image.

docker build -t demo/webappcore:2.2.0 .

The above command will build a Docker image under the demo namespace, with the image name webappcore and version tag 2.2.0. The trailing dot tells Docker to use the current directory as the build context.

Your output should look like below once it is successful:

PS C:\docker> docker build -t demo/webappcore:2.2.0 .
Sending build context to Docker daemon  9.853MB
Step 1/5 : FROM mcr.microsoft.com/dotnet/core/aspnet:2.2
 ---> 36e5a01ef28f
Step 2/5 : RUN mkdir C:\\app
 ---> Using cache
 ---> 8f88e30dcdd0
Step 3/5 : WORKDIR c:\\app
 ---> Using cache
 ---> 829e48e68bda
Step 4/5 : COPY WebAppCore2.2-master/Package/* c:/app/
 ---> Using cache
 ---> 6bfd9ae4b731
Step 5/5 : ENTRYPOINT ["dotnet", "WebAppCore2.2.dll"]
 ---> Running in 4b5488d5ea5f
Removing intermediate container 4b5488d5ea5f
 ---> 9729270fe1ac
Successfully built 9729270fe1ac
Successfully tagged demo/webappcore:2.2.0

Step 11: Once the image has been built, you are now ready to run the container. Execute the below command.

docker run --name webappcore --rm -it -p 8000:80 demo/webappcore:2.2.0

The above command will create a new container called webappcore with the following parameters:

  • --rm is used to automatically remove the container after it is shut down.
  • -it will open a session into your container and output all the logs.
  • -p maps an external port to the internal port of a container. Port 8000 is exposed outside the container, and port 80 is used to access the app within the container.
  • demo/webappcore:2.2.0 is the path to the Docker image to run as a container.
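The order of the -p mapping is a common point of confusion. As an illustrative sketch (plain shell, not part of Docker itself), the left side of the colon is the host port and the right side is the container port:

```shell
# Illustrative only: how to read a "-p 8000:80" mapping.
MAPPING="8000:80"
HOST_PORT="${MAPPING%%:*}"       # 8000 -- the port you browse to on your machine
CONTAINER_PORT="${MAPPING##*:}"  # 80   -- the port the app listens on inside
echo "http://localhost:${HOST_PORT} forwards to container port ${CONTAINER_PORT}"
```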

Output of a running container

Step 12: Browse your application from your local machine at localhost:8000.

This is it! You ran your first Docker container in your local environment. Thank you for following the tutorial. Please comment below with any issues or feedback you would like to share.

The post Building your first docker image for Windows appeared first on TEKSpace Blog.

]]>
https://blog.tekspace.io/how-to-create-docker-image-for-windows-containers/feed/ 0
Run your first Windows Container on your Windows 10 https://blog.tekspace.io/run-your-first-windows-container-on-your-windows-10/ https://blog.tekspace.io/run-your-first-windows-container-on-your-windows-10/#respond Sun, 24 Mar 2019 15:59:32 +0000 https://blog.tekspace.io/index.php/2019/03/24/run-your-first-windows-container-on-your-windows-10/ Windows 10 now comes with container features available to pro and enterprise versions. To get started with containers on Windows 10, please make sure the below prerequisites are met. Pre-requisites Let’s ensure we have prerequisites installed before we get started with docker cli and container installation. If you already have the below items installed, you

The post Run your first Windows Container on your Windows 10 appeared first on TEKSpace Blog.

]]>
Windows 10 now comes with container features available in the Pro and Enterprise editions. To get started with containers on Windows 10, please make sure the below prerequisites are met.

Pre-requisites

Let’s ensure we have prerequisites installed before we get started with docker cli and container installation. If you already have the below items installed, you can skip them and proceed with the setup.

The containers feature allows developers and DevOps engineers to start using Docker containers in their local environment. To enable containers on Windows 10, execute the below command.

Enable-WindowsOptionalFeature -FeatureName containers -Online -all

NOTE: Upon installation, you will be prompted to reboot your system after the container feature is enabled. It is recommended that you select yes to reboot your system.

Install Docker CLI

Now we will go ahead and download the latest Docker CLI by using the Chocolatey package management tool.

Once you have choco installed, go ahead and open PowerShell as an administrator and execute the below command.

choco install docker

You will be asked to confirm with yes or no. Go ahead and continue with the interactive installation process by pressing Y. The output should look like the below if the installation was successful.

Now that you have docker cli installed, you are now ready to run your first Docker container.

Installing Docker Enterprise Edition (EE)

To install Docker EE on Windows 10, please make sure the above setup is successfully completed. To get started, execute the below commands from an elevated PowerShell console.

Go to the Downloads folder of your current user.

cd ~\Downloads

Download Docker Enterprise Edition.

Invoke-WebRequest -UseBasicParsing -OutFile docker-18.09.3.zip https://download.docker.com/components/engine/windows-server/18.09/docker-18.09.3.zip

NOTE: In this tutorial, I am using Docker version 18.09.3. This may change in the future. You can follow the updated documentation here.

Unzip the Docker package.

Expand-Archive docker-18.09.3.zip -DestinationPath $Env:ProgramFiles -Force

Execute the below script to set up and start Docker.

# Add Docker to the path for the current session.
$env:path += ";$env:ProgramFiles\docker"

# Optionally, modify PATH to persist across sessions.
$newPath = "$env:ProgramFiles\docker;" + [Environment]::GetEnvironmentVariable("PATH", [EnvironmentVariableTarget]::Machine)
[Environment]::SetEnvironmentVariable("PATH", $newPath, [EnvironmentVariableTarget]::Machine)

# Register the Docker daemon as a service.
dockerd --register-service

# Start the Docker service.
Start-Service docker

Test your Docker setup by executing the below command.

docker container run hello-world:nanoserver

Running your first Docker container

In this example, I will be using the nanoserver/iis image from Docker Hub to run an IIS application.

Step 1: Let’s first check if we have any Docker images pulled from Docker Hub. Based on the above Docker setup, you should have the hello-world Docker image pulled from Docker Hub.

docker images

Step 2: Let’s pull a new Docker image from Docker Hub to run nanoserver with IIS configured.

docker pull nanoserver/iis

Your final output should look like the below.

[Image: output of pulling the nanoserver/iis image for Windows containers]

Step 3: After we have pulled the latest image from Docker hub, let’s run our first windows container by executing the below command.

docker run --name nanoiis -d -it -p 80:80 nanoserver/iis

It will then return a container ID that you can use to check the container’s status, configuration, and more.

Step 4: Check our first container status by executing the below command.

docker ps -a -f status=running

Status output:

[Image: nanoserver/iis container status output]

Step 5: Now let’s get the IP address of our container to access it from the browser.

docker inspect -f "{{ .NetworkSettings.Networks.nat.IPAddress }}" nanoiis

Step 6: Copy the IP address returned to the PowerShell console and browse to it in Internet Explorer.

In my case, I received 172.19.231.54. Yours may be different.

This is it! You have run your first Windows container on your Windows 10 machine. Thank you for following this tutorial.

The post Run your first Windows Container on your Windows 10 appeared first on TEKSpace Blog.

]]>
https://blog.tekspace.io/run-your-first-windows-container-on-your-windows-10/feed/ 0