K8s

[K8s] Command cheatsheet

  • View config
kubectl config view
  • Show cluster info
kubectl cluster-info

# Example output
Kubernetes control plane is running at https://10.0.0.9:8443
KubeDNS is running at https://10.0.0.9:8443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
  • Get component info
kubectl get xxx -o wide -A
kubectl describe xxx

# Use -l $LABEL to filter using labels
# Use --show-labels to show labels

# xxx can be nodes, pods, deployments,
# events, services, replicaset, etc.
# To target a specific object, use xxx/name. For example: deploy/my-deploy

# For the full list and short names, run
kubectl api-resources
  • Create/Edit/Delete any object
# Create
kubectl create -f config.yaml

# Edit. Can be used to roll out a new version when editing deployments
kubectl edit xxx/$NAME

# Delete
kubectl delete xxx/$NAME
kubectl delete -f config.yaml
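As a sketch, a minimal config.yaml the create/edit/delete commands above could operate on (names and image are illustrative):

```yaml
# Hypothetical minimal Deployment manifest for `kubectl create -f config.yaml`
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deploy
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: web
          image: nginx:1.25
          ports:
            - containerPort: 80
```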
  • Create a deployment which manages a Pod
kubectl create deployment hello-node --image=k8s.gcr.io/echoserver:1.4
  • Expose a deployment as a service
# 8080 here is the container port
# NodePort exposes the service on a port on every Node
kubectl expose deployment/hello-node --type="NodePort" --port 8080
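The expose command above roughly corresponds to this Service manifest (a sketch; the nodePort value is illustrative and is normally auto-assigned from the 30000-32767 range):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: hello-node
spec:
  type: NodePort
  selector:
    app: hello-node       # label set by `kubectl create deployment`
  ports:
    - port: 8080          # intra-cluster port
      targetPort: 8080    # container port
      nodePort: 30080     # exposed on every Node (normally auto-assigned)
```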
  • Scale a deployment
kubectl scale deployments/kubernetes-bootcamp --replicas=4
  • Update a deployment
kubectl set image deployments/kubernetes-bootcamp kubernetes-bootcamp=jocatalin/kubernetes-bootcamp:v2 --record=true
# Note: --record is deprecated in newer kubectl versions

# Check the rollout status
kubectl rollout status deployments/kubernetes-bootcamp

# Undo the rollout
# This rolls back to the previous ReplicaSet
kubectl rollout undo deployments/kubernetes-bootcamp
  • Undo a rollout
# Find the ReplicaSet you want to restore
# (by default, the 10 most recent ReplicaSets are kept in history)
kubectl get rs -o wide
kubectl rollout history $DEPLOY

# Get the revision number of the desired replicaset
kubectl describe rs kubernetes-bootcamp-fb5c67579

# Rollback to the specific replicaset
kubectl rollout undo deployments/kubernetes-bootcamp --to-revision=2
  • Logging
kubectl logs $POD_NAME
  • Execute a command on a container in a Pod
kubectl exec $POD_NAME -- $COMMAND


# Start a bash session. If no container is specified, the one named by the kubectl.kubernetes.io/default-container annotation is used
kubectl exec -it $POD_NAME -c $CONTAINER_NAME -- bash
  • Access a Pod’s port locally w/o services (for debugging)
# Method 1. Port-forwarding
kubectl port-forward $POD_NAME $HOST_PORT:$POD_PORT
# Then you can access the endpoint at
# localhost:$HOST_PORT

# Method 2. Proxy
kubectl proxy
# Then you can access the endpoint at
# http://localhost:8001/api/v1/namespaces/default/pods/$POD_NAME:$POD_PORT/proxy/
  • Labeling
# Attach a new label
kubectl label pods $POD_NAME version=v1

# View labels of a Pod
kubectl describe pods $POD_NAME
  • Secrets
    Ways to use secrets:
    • As container envs (secretKeyRef)
    • As volumes mounted to containers
    • As image pull credentials used by the kubelet when pulling from a private registry (imagePullSecrets)
# Create a secret
kubectl create secret generic $SECRET [--from-file=path] [--from-literal=key=value]

# Show a secret
kubectl get secret/xxx -o json
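A Pod spec sketch showing the first two consumption styles listed above (secret and key names are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: secret-demo
spec:
  containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]
      env:
        - name: DB_PASSWORD          # as a container env (secretKeyRef)
          valueFrom:
            secretKeyRef:
              name: my-secret
              key: password
      volumeMounts:
        - name: secret-vol           # as a volume mounted into the container
          mountPath: /etc/secrets
          readOnly: true
  volumes:
    - name: secret-vol
      secret:
        secretName: my-secret
```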
  • ConfigMap
kubectl create configmap $CONFIGMAP [--from-file] [--from-literal]
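ConfigMaps are consumed the same ways as Secrets; a sketch (names hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: config-demo
spec:
  containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]
      envFrom:
        - configMapRef:
            name: my-config      # every key becomes an env var
      volumeMounts:
        - name: config-vol       # keys become files under /etc/config
          mountPath: /etc/config
  volumes:
    - name: config-vol
      configMap:
        name: my-config
```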
  • Drain a Node
# Drain a Node to move all Pods to other Nodes
kubectl drain $NODE

# Undo draining
kubectl uncordon $NODE
  • Clean up
kubectl delete service hello-node
kubectl delete deployment hello-node
minikube stop
minikube delete

[K8s] Basic concept notes

Main components

  • Each Node (machine) includes a kubelet and many Pods
  • Kubelets communicate with the Control Plane (itself a set of special nodes) and manage the Pods on their Node
  • Control Plane can be split into multiple machines
  • Each Pod runs an application and can hold multiple tightly coupled containers
  • Containers inside the same Pod share the same networking resource
  • kube-apiserver: Exposes the Kubernetes API
  • etcd: Key value store
  • kube-scheduler: Schedule Nodes to run Pods
  • kube-controller-manager: Runs controllers that handle node monitoring, task running, and account/access control

History of container runtime

  • Approaches range from the oldest (kubelet driving Docker directly) to the newest (pluggable CRI runtimes)
  • CRI is an interface designed for kubelet to manipulate containers
  • Any CRI-compliant container runtime can do the job: Docker Engine, containerd, CRI-O, …
  • OCI runtime: runC, runhcs, …

K8s entity concepts

  • A Deployment checks on the health of your Pod and restarts the Pod’s container if it terminates. Deployments are the recommended way to manage the creation and scaling of Pods
  • Deployment configuration file works very similar to docker-compose.yml
  • The Deployment controller automatically searches for a suitable Node where an instance of the application can run
  • If the Node hosting an instance goes down or is deleted, the Deployment controller replaces the instance with an instance on another Node in the cluster
  • A Service exposes a set of Pods, typically the ReplicaSet managed by a Deployment, behind a stable endpoint
  • Secrets and ConfigMap are extra resources we can mount into containers
  • Volumes types:
    • emptyDir: Storage lives with a Pod
    • hostPath: Storage lives with a Node
    • NFS and other cloud provider type
  • Storage
    • Storage Classes: Templates to create volumes
    • Persistent Volume Claims: Create (or use existing) PV by specifying SC, size, read/write type etc. We can then attach a PVC into a Pod config
    • Persistent Volumes: Either created manually (static) or automatically (dynamic) when no existing PV matches the claim
  • Resource
    • Request: minimum requirement
    • Limit: maximum a container may use (a namespace-level ResourceQuota caps totals)
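The Storage and Resource bullets above can be sketched as a PVC plus a Pod that claims it and declares requests/limits (storage class name and sizes are illustrative):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-pvc
spec:
  storageClassName: standard       # which StorageClass template to use
  accessModes: ["ReadWriteOnce"]   # read/write type
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: data-pod
spec:
  containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]
      resources:
        requests:                  # minimum requirement
          cpu: 100m
          memory: 128Mi
        limits:                    # maximum limit
          cpu: 500m
          memory: 256Mi
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: data-pvc
```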

Common tools

  • CRI-O: A container runtime
  • kubelet: Starts pods and containers
  • kubectl: The command line util to talk to your cluster.
  • kubeadm: The command to bootstrap the cluster (control planes etc.)
  • minikube: A tool to quickly set up a single-node cluster inside a VM
  • kops: Sets up AWS clusters easily

Networking

  • Each Node has its own node IP (as normal machines) which is mainly accessed by the control plane node
  • Each Pod has its own IP, to which its Node forwards requests
  • Pods within a Node, or across Nodes, can reach each other via Pod IPs, but this is not recommended; exposing the endpoint as a Service is preferred
  • Containers inside the same Pod can refer to each other using localhost:port. This is like running docker-compose inside a “pod machine” with the container network set to host mode
  • Services have Cluster IPs and domain name, for intra-cluster access
  • Port definitions:
    • port: For intra-cluster access
    • nodePort: Exposed port on the Node
    • targetPort: Container port
  • An Ingress, together with an ingress controller, acts like an Nginx load balancer. The DNS record should point to this load balancer’s IP so it can serve Services across multiple Nodes
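A minimal Ingress sketch routing a hostname to a Service (hostname and service name are illustrative; an ingress controller such as ingress-nginx must be installed for this to take effect):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-ingress
spec:
  rules:
    - host: hello.example.com    # DNS record points at the load balancer IP
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: hello-node
                port:
                  number: 8080
```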