Saturday, May 2, 2020

Install a local Kubernetes with MicroK8s on Ubuntu 18.04

About MicroK8s:

MicroK8s is a CNCF-certified upstream Kubernetes deployment that runs entirely on a workstation or edge device. Because it is a snap, it runs all Kubernetes services natively (i.e. no virtual machines) while packing the entire set of libraries and binaries needed. Installation is limited only by how fast you can download a couple of hundred megabytes, and removing MicroK8s leaves nothing behind.


1. Requirements:

One Linux machine with Ubuntu installed.

2. Deployment of MicroK8s:


sudo snap install microk8s --classic

MicroK8s is a snap and as such it is frequently updated to each release of Kubernetes. To follow a specific upstream release series it’s possible to select a channel during installation. For example, to follow the v1.17 series:


sudo snap install microk8s --classic --channel=1.17/stable

Run snap info microk8s to see which versions are currently published:


name:      microk8s
summary:   Kubernetes for workstations and appliances
publisher: Canonical*
store-url: https://snapcraft.io/microk8s
contact:   https://github.com/ubuntu/microk8s
license:   unset
description: |
  MicroK8s is a small, fast, secure, single node Kubernetes that installs on
  just about any Linux box. Use it for offline development, prototyping,
  testing, or use it on a VM as a small, cheap, reliable k8s for CI/CD. It's
  also a great k8s for appliances - develop your IoT apps for k8s and deploy
  them to MicroK8s on your boxes.
commands:
  - microk8s.add-node
  - microk8s.cilium
  - microk8s.config
  - microk8s.ctr
  - microk8s.disable
  - microk8s.enable
  - microk8s.helm
  - microk8s.inspect
  - microk8s.istioctl
  - microk8s.join
  - microk8s.juju
  - microk8s.kubectl
  - microk8s.leave
  - microk8s.linkerd
  - microk8s
  - microk8s.remove-node
  - microk8s.reset
  - microk8s.start
  - microk8s.status
  - microk8s.stop
services:
  microk8s.daemon-apiserver:          simple, disabled, inactive
  microk8s.daemon-apiserver-kicker:   simple, disabled, inactive
  microk8s.daemon-cluster-agent:      simple, disabled, inactive
  microk8s.daemon-containerd:         simple, disabled, inactive
  microk8s.daemon-controller-manager: simple, disabled, inactive
  microk8s.daemon-etcd:               simple, disabled, inactive
  microk8s.daemon-flanneld:           simple, disabled, inactive
  microk8s.daemon-kubelet:            simple, disabled, inactive
  microk8s.daemon-proxy:              simple, disabled, inactive
  microk8s.daemon-scheduler:          simple, disabled, inactive
snap-id:      EaXqgt1lyCaxKaQCU349mlodBkDCXRcg
tracking:     1.17/stable
refresh-date: today at 09:46 IST
channels:
  latest/stable:    v1.18.2         2020-04-27 (1378) 201MB classic
  latest/candidate: v1.18.2         2020-04-30 (1383) 201MB classic
  latest/beta:      v1.18.2         2020-04-30 (1383) 201MB classic
  latest/edge:      v1.18.2         2020-05-01 (1391) 211MB classic
  dqlite/stable:    --
  dqlite/candidate: --
  dqlite/beta:      --
  dqlite/edge:      v1.16.2         2019-11-07 (1038) 189MB classic
  1.19/stable:      --
  1.19/candidate:   --
  1.19/beta:        --
  1.19/edge:        v1.19.0-alpha.1 2020-03-26 (1311) 201MB classic
  1.18/stable:      v1.18.2         2020-04-27 (1379) 201MB classic
  1.18/candidate:   v1.18.2         2020-04-27 (1379) 201MB classic
  1.18/beta:        v1.18.2         2020-04-27 (1379) 201MB classic
  1.18/edge:        v1.18.2         2020-04-29 (1387) 201MB classic
  1.17/stable:      v1.17.5         2020-05-02 (1355) 179MB classic
  1.17/candidate:   v1.17.5         2020-04-17 (1355) 179MB classic
  1.17/beta:        v1.17.5         2020-04-17 (1355) 179MB classic
  1.17/edge:        v1.17.5         2020-04-29 (1388) 179MB classic
  1.16/stable:      v1.16.8         2020-03-27 (1302) 179MB classic
  1.16/candidate:   v1.16.8         2020-03-27 (1302) 179MB classic
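
If you later decide to follow a different release series, switching channels is usually just a snap refresh; for example (the 1.18 track below is only illustrative):
sudo snap refresh microk8s --channel=1.18/stable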

3. Configure your firewall to allow pod-to-pod and pod-to-internet communication:

sudo ufw allow in on cni0 && sudo ufw allow out on cni0
sudo ufw default allow routed
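
These rules assume ufw is installed and active on the host; you can confirm they were added with:
sudo ufw status verbose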
4. Enable Add-ons:


By default we get a barebones upstream Kubernetes. Additional services, such as dashboard or kube-dns, can be enabled by running the microk8s.enable command:
microk8s.enable dashboard dns
These addons can be disabled at anytime by running the microk8s.disable command:
microk8s.disable dashboard dns
With microk8s.status you can see the list of available addons and the ones currently enabled.

List of the most important addons:

  • dns: Deploy DNS. This addon may be required by others, thus we recommend you always enable it.
  • dashboard: Deploy kubernetes dashboard as well as grafana and influxdb.
  • storage: Create a default storage class. This storage class makes use of the hostpath-provisioner pointing to a directory on the host.
  • ingress: Create an ingress controller.
  • gpu: Expose GPU(s) to MicroK8s by enabling the nvidia-docker runtime and nvidia-device-plugin-daemonset. Requires NVIDIA drivers to be already installed on the host system.
  • istio: Deploy the core Istio services. You can use the microk8s.istioctl command to manage your deployments.
  • registry: Deploy a docker private registry and expose it on localhost:32000. The storage addon will be enabled as part of this addon.
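
For example, to try a couple of the addons from the list above and then wait until the cluster reports ready (storage and ingress are only illustrative choices here):
microk8s.enable storage ingress
microk8s.status --wait-ready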
5. Access the Kubernetes and Grafana dashboards:

List everything running in the cluster first:
microk8s.kubectl get all --all-namespaces


In the output above, note the CLUSTER-IP of the kubernetes-dashboard service. Point your browser at it, e.g. https://10.152.183.46:443, and you will see the Kubernetes dashboard UI. To log in, use the default token retrieved with:


token=$(microk8s.kubectl -n kube-system get secret | grep default-token | cut -d " " -f1)
microk8s.kubectl -n kube-system describe secret $token
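
Note that the dashboard ClusterIP above (10.152.183.46) is specific to this cluster; on your own installation you can look it up with something like the following (in this MicroK8s release the dashboard service lives in the kube-system namespace):
microk8s.kubectl -n kube-system get service kubernetes-dashboard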
To access the Grafana dashboard, first find its proxy URL:

microk8s.kubectl cluster-info

We need to point our browser to https://127.0.0.1:16443/api/v1/namespaces/kube-system/services/monitoring-grafana/proxy and use the username and password shown by microk8s.config.
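
Since microk8s.config prints the whole kubeconfig, one quick (purely illustrative) way to pull out just those credentials is:
microk8s.config | grep -E 'username|password'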



6. Host your first service in Kubernetes



We start by creating a microbot deployment with two pods via the kubectl cli:
microk8s.kubectl create deployment microbot --image=dontrebootme/microbot:v1
microk8s.kubectl scale deployment microbot --replicas=2
To expose our deployment we need to create a service:
microk8s.kubectl expose deployment microbot --type=NodePort --port=80 --name=microbot-service

microk8s.kubectl get all --all-namespaces
In the output you will see the service "microbot-service" of type NodePort: port 80 of the microbot pods is mapped to a node port in the 30000-32767 range, and the service is reachable on that port at the node's IP.
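
To find the assigned node port and test the service (replace the placeholders with your node's IP and the port reported by kubectl):
microk8s.kubectl get service microbot-service
curl http://<node-ip>:<node-port>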


7. Useful additional commands:



There are many commands that ship with MicroK8s. We’ve only seen the essential ones in this tutorial. Explore the others at your own convenience:
  • microk8s.status: Provides an overview of the MicroK8s state (running / not running) as well as the set of enabled addons
  • microk8s.enable: Enables an addon
  • microk8s.disable: Disables an addon
  • microk8s.kubectl: Interact with kubernetes
  • microk8s.config: Shows the kubernetes config file
  • microk8s.istioctl: Interact with the istio services; needs the istio addon to be enabled
  • microk8s.inspect: Performs a quick inspection of the MicroK8s installation
  • microk8s.reset: Resets the infrastructure to a clean state
  • microk8s.stop: Stops all kubernetes services
  • microk8s.start: Starts MicroK8s after it has been stopped
8. Finally, once you have had enough hands-on time with MicroK8s, don't forget to stop the cluster with "microk8s.stop"

9. Built-in inspection tool:

sudo microk8s inspect
10. Checking pod logs:

microk8s kubectl get pods
microk8s kubectl logs <pod-name>
# All images running in a cluster
kubectl get pods -A -o=custom-columns='DATA:spec.containers[*].image'

# All images excluding "k8s.gcr.io/coredns:1.6.2"
kubectl get pods -A -o=custom-columns='DATA:spec.containers[?(@.image!="k8s.gcr.io/coredns:1.6.2")].image'

# All fields under metadata regardless of name
kubectl get pods -A -o=custom-columns='DATA:metadata.*'
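
Two other log variants that often help while debugging (the pod name is a placeholder): stream the log as it grows, or read the log of a pod's previous container after a crash:
microk8s kubectl logs -f <pod-name>
microk8s kubectl logs <pod-name> --previous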

Monday, February 3, 2020

Placement Group for AWS EC2



Placement Groups Overview

  • Placement group determines how instances are placed on underlying hardware
  • AWS now provides three types of placement groups
    1. Cluster – clusters instances into a low-latency group in a single AZ
    2. Partition – spreads instances across logical partitions, ensuring that instances in one partition do not share underlying hardware with instances in other partitions
    3. Spread – spreads instances across underlying hardware
Cluster Placement Group
  • is a logical grouping of instances within a single Availability Zone
  • don’t span across Availability Zones
  • recommended for applications that benefit from low network latency, high network throughput, or both.
  • To provide the lowest latency, and the highest packet-per-second network performance for the placement group, choose an instance type that supports enhanced networking
  • recommended to launch all group instances at the same time to ensure enough capacity
  • instances can be added later, but there are chances of encountering an insufficient capacity error
  • for moving an instance into the placement group,
    • create an AMI from the existing instance,
    • and then launch a new instance from the AMI into a placement group.
  • if you stop and start an instance within the placement group, it still runs in the same placement group
  • in case of a capacity error, stop and start all of the instances in the placement group, then try the launch again; restarting the instances may migrate them to hardware that has capacity for all requested instances
  • is only available within a single AZ either in the same VPC or peered VPCs
  • is more of a hint to AWS that the instances need to be launched physically close to each other
  • enables applications to participate in a low-latency, 10 Gbps network.
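
As a rough sketch (the group name and AMI ID below are placeholders, not values from this post), a cluster placement group can be created and instances launched into it together via the AWS CLI:

aws ec2 create-placement-group --group-name my-cluster-pg --strategy cluster
aws ec2 run-instances --image-id <ami-id> --count 2 --instance-type c4.8xlarge --placement "GroupName=my-cluster-pg"
aws ec2 describe-placement-groups --group-names my-cluster-pg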

Partition Placement Groups

  • is a group of instances spread across partitions. Partitions are logical groupings of instances, where contained instances do not share the same underlying hardware across different partitions.
  • can be used to spread deployment of large distributed and replicated workloads, such as HDFS, HBase, and Cassandra, across distinct hardware to reduce the likelihood of correlated failures
  • can have a maximum of seven partitions per Availability Zone
  • can span multiple Availability Zones in the same Region.
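
A minimal CLI sketch (the group name is a placeholder) for creating a partition placement group with the maximum of seven partitions:

aws ec2 create-placement-group --group-name my-partition-pg --strategy partition --partition-count 7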

Spread Placement Groups

  • is a group of instances that are each placed on distinct underlying hardware
  • recommended for applications that have a small number of critical instances that should be kept separate from each other.
  • reduces the risk of simultaneous failures that might occur when instances share the same underlying hardware.
  • provide access to distinct hardware, and are therefore suitable for mixing instance types or launching instances over time.
  • can span multiple AZs, and can have a maximum of seven running instances per AZ per group.
  • If starting or launching an instance in a spread placement group fails because of insufficient unique hardware to fulfill the request, the request can be retried later, as EC2 makes more distinct hardware available over time
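
A minimal CLI sketch (names are placeholders) for creating a spread placement group and launching an instance into it:

aws ec2 create-placement-group --group-name my-spread-pg --strategy spread
aws ec2 run-instances --image-id <ami-id> --count 1 --instance-type <instance-type> --placement "GroupName=my-spread-pg"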

Placement Group Rules and Limitations

  • Placement group names must be unique within your AWS account for the region
  • Placement groups cannot be merged
  • An instance can be launched in one placement group at a time; it cannot span multiple placement groups.
  • Instances with a tenancy of host cannot be launched in placement groups.
  • Cluster Placement group
    • can’t span multiple Availability Zones.
    • supported by specific instance types (General Purpose, GPU, Compute, Memory, Storage Optimized – c4.8xlarge, c3.8xlarge, g2.8xlarge, i2.8xlarge, r3.8xlarge, m4.10xlarge, d2.8xlarge) which support 10 Gigabit networking
    • maximum network throughput speed of traffic between two instances in a cluster placement group is limited by the slower of the two instances, so choose the instance type properly.
    • can use up to 10 Gbps for single-flow traffic.
    • Traffic to and from S3 buckets within the same region over the public IP address space or through a VPC endpoint can use all available instance aggregate bandwidth.
    • recommended to use the same (homogeneous) instance type; multiple instance types can be launched into a cluster placement group, but this reduces the likelihood that the required capacity will be available for the launch to succeed
    • Network traffic to the internet and over an AWS Direct Connect connection to on-premises resources is limited to 5 Gbps.
  • Partition placement groups
    • supports a maximum of seven partitions per Availability Zone
    • with Dedicated Instances can have a maximum of two partitions
    • are not supported for Dedicated Hosts
    • are currently only available through the API or AWS CLI.
  • Spread placement groups
    • supports a maximum of seven running instances per Availability Zone; e.g., in a region with three AZs the group can have a total of 21 running instances (seven per zone).
    • are not supported for Dedicated Instances or Dedicated Hosts.