Kubernetes (K8s) case study
In this blog, I am going to discuss Kubernetes and how companies are using it in production. So, let's start.
2,253 companies reportedly use Kubernetes in their tech stacks, including Google, Shopify, and Slack.
What is Kubernetes?
Kubernetes is an open source container orchestration platform that helps manage distributed, containerized applications at massive scale. You tell Kubernetes where you want your software to run, and the platform takes care of almost everything else.
This post looks at how Kubernetes helps organizations build applications and manage containers on-site and across hybrid cloud environments. It covers:
· The origin, functions, and benefits of Kubernetes.
· Basics of modern application development and container management and orchestration.
· A look at basic Kubernetes architecture.
· Factors for you to consider when adopting Kubernetes.
· Information about how Red Hat OpenShift can help you simplify and scale Kubernetes applications.
What is container orchestration?
Container orchestration automates the deployment, management, scaling, and networking of containers. Enterprises that need to deploy and manage hundreds or thousands of Linux containers and hosts can benefit from container orchestration.
Container orchestration can be used in any environment where you use containers. It can help you to deploy the same application across different environments without needing to redesign it. And microservices in containers make it easier to orchestrate services, including storage, networking, and security.
Managing the lifecycle of containers with orchestration also supports DevOps teams who integrate it into CI/CD workflows. Along with application programming interfaces (APIs) and DevOps teams, containerized microservices are the foundation for cloud-native applications.
Use container orchestration to automate and manage tasks such as:
· Provisioning and deployment
· Configuration and scheduling
· Resource allocation
· Container availability
· Scaling or removing containers based on balancing workloads across your infrastructure
· Load balancing and traffic routing
· Monitoring container health
· Configuring applications based on the container in which they will run
· Keeping interactions between containers secure
Main components of Kubernetes:
Cluster: A control plane and one or more compute machines, or nodes.
Control plane: The collection of processes that control Kubernetes nodes. This is where all task assignments originate.
Kubelet: This service runs on each node, reads the container manifests, and ensures the defined containers are started and running.
Pod: A group of one or more containers deployed to a single node. All containers in a pod share an IP address, IPC, hostname, and other resources.
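To make the pod concept concrete, here is a minimal, hypothetical Pod manifest (all names and images are illustrative) with two containers that share the same network namespace:

```yaml
# Sketch of a Pod with two containers. Because they live in one pod,
# both containers share an IP address and can reach each other over localhost.
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar        # illustrative name
spec:
  containers:
    - name: web
      image: nginx:1.25         # example image
      ports:
        - containerPort: 80
    - name: log-sidecar
      image: busybox:1.36       # example image
      command: ["sh", "-c", "tail -f /dev/null"]
```

In practice you rarely create pods directly; higher-level objects such as Deployments create and replace them for you.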
How does container orchestration work?
When you use a container orchestration tool such as Kubernetes, you describe the configuration of an application in a YAML or JSON file. The configuration file tells the orchestration tool where to find the container images, how to establish networking, and where to store logs.
When deploying a new container, the orchestration tool automatically schedules the deployment to a cluster and finds the right host, taking into account any defined requirements or restrictions. The orchestration tool then manages the container's lifecycle based on the specifications in the configuration file.
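As a sketch of what such a configuration file looks like, here is a minimal, hypothetical Deployment manifest: it names the container image to run, the desired number of replicas, and the labels Kubernetes uses to match pods to the deployment (all names are illustrative):

```yaml
# Hypothetical Deployment manifest: Kubernetes schedules three replicas
# of this pod template across the cluster and keeps them running.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-web              # illustrative name
spec:
  replicas: 3                  # desired number of pods
  selector:
    matchLabels:
      app: hello-web           # must match the pod template labels below
  template:
    metadata:
      labels:
        app: hello-web
    spec:
      containers:
        - name: hello-web
          image: nginx:1.25    # where to find the container image
          ports:
            - containerPort: 80
```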
You can use Kubernetes patterns to manage the configuration, lifecycle, and scale of container-based applications and services. These repeatable patterns are the tools a Kubernetes developer needs to build complete systems.
Container orchestration can be used in any environment that runs containers, including on-premise servers and public cloud or private cloud environments.
The Kubernetes deployment object lets you:
· Deploy a replica set or pod
· Update pods and replica sets
· Roll back to previous deployment versions
· Scale a deployment
· Pause or continue a deployment
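Assuming a deployment named `hello-web` (an illustrative name), each of the operations above maps roughly to a `kubectl` command:

```shell
# Deploy (create or update) from a manifest file
kubectl apply -f deployment.yaml

# Update the container image of the running deployment
kubectl set image deployment/hello-web hello-web=nginx:1.26

# Roll back to the previous deployment revision
kubectl rollout undo deployment/hello-web

# Scale the deployment to five replicas
kubectl scale deployment/hello-web --replicas=5

# Pause and then resume a rollout
kubectl rollout pause deployment/hello-web
kubectl rollout resume deployment/hello-web
```

These commands require a running cluster and a configured `kubectl`; the deployment and image names here are placeholders for your own.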
Docker, Microsoft Azure, Ansible, Vagrant, and Google Compute Engine are some of the popular tools that integrate with Kubernetes.
Real World Use Case:
Presto & Kubernetes at Pinterest:
Presto is a high performance, distributed SQL query engine for big data. Its architecture allows users to query a variety of data sources such as Hadoop, AWS S3, Alluxio, MySQL, Cassandra, Kafka, and MongoDB. One can even query data from multiple data sources within a single query.
Challenges: Pinterest has operated Presto, an open-source distributed SQL query engine, at scale for years. Doing so has involved resolving quite a few challenges, such as supporting deeply nested and huge Thrift schemas, detecting and remediating slow or bad workers, auto-scaling clusters, shutting down clusters gracefully, and adding impersonation support for the LDAP authenticator.
Solution: Pinterest's infrastructure is built on top of Amazon EC2, and it leverages Amazon S3 for storing data. This separates the compute and storage layers and allows multiple compute clusters to share the S3 data.
Each Presto cluster at Pinterest has workers on a mix of dedicated AWS EC2 instances and Kubernetes pods. The Kubernetes platform makes it possible to add and remove workers from a Presto cluster very quickly: the best-case latency for bringing up a new worker on Kubernetes is less than a minute. However, when the Kubernetes cluster itself is out of resources and needs to scale up, it can take up to ten minutes. Another advantage of deploying on Kubernetes is that the Presto deployment becomes agnostic of cloud vendor, instance types, OS, and so on.
Netflix chose to build their own container orchestration system.
Take web-scale Netflix, for example. Netflix remains a prime example of an organization that leverages public cloud for extensive operations. Most of Netflix's applications have mainly run within virtual machines, but the firm recently went on a journey toward providing containers as an option within their infrastructure. Here are four lessons to draw from their experience.
1. Governance
Netflix is a bottom-up organization, and that culture drove many of their container orchestration design decisions. Operations didn't dictate which applications must go in containers; it remained up to the individual application teams to determine which of their services went into containers and which remained in virtual machines.
Enterprises should always start with governance when considering a container strategy. I’ve seen many organizations deploy cloud-native technology only to see it go unused. The primary challenge is culture. Either there’s no incentive to adopt the technology, or no sponsorship to force adoption. In Netflix’s case, the container team motivation began with providing value to their application community.
2. Kubernetes vs. Titus
Netflix chronicled their container journey in a white paper. Running containers at scale requires orchestration, and Netflix started their journey near the beginning of the Kubernetes open source project. Netflix had to decide if it would build its own orchestration platform or adopt an existing platform.
Netflix chose to build a dedicated container orchestration platform called Titus. While Netflix claims most organizations look to write greenfield applications on new container platforms such as Kubernetes, its team wanted to consider existing applications as well. Therefore, Netflix chose to build the Titus container management system on top of Apache Mesos.
Today, Kubernetes has broad support for brownfield applications. For example, Docker Swarm now integrates Kubernetes into Swarm clusters. Also, operations teams can deploy legacy apps into Docker containers and deploy the containers to Kubernetes clusters.
3. Container networking
Organizations have to give considerable thought to container networking. Networking is especially important as organizations design application interactions between legacy applications. Netflix’s Titus enabled container-to-container networking to conserve IP address space. The solution also allows placing containers directly on the routable network address space of existing applications.
A common approach within enterprise deployments of containers is to adopt a network overlay. Every major network vendor, including VMware, Cisco, Juniper, Extreme Networks, and Big Switch, offers Kubernetes container support. Each solution plugs into Kubernetes to enable both overlay support and security support. And applications can use native Kubernetes network APIs to control security policies.
4. Public cloud
As noted in an earlier TechRepublic post, Netflix is an extremely large consumer of Amazon Web Services (AWS). However, integration with AWS Identity and Access Management (IAM) proved an operational challenge. In Titus, Netflix created a proxy service that enables legacy applications to remain unchanged. Titus leverages IAM roles to enable a single Titus node to adopt an IAM role for the containers running on the node. As part of workload placement, Titus must take IAM security into consideration.
Another consideration is leveraging EC2 instances as container hosts. Prior to container adoption, Netflix was challenged with the inefficiency of EC2. Containers allow Netflix to slice EC2 instances into smaller units by placing multiple workloads in a single EC2 instance. Netflix has seen a higher level of efficiency as a result.
Thank you for reading!