
Comparing Docker vs Mesos vs Kubernetes

DevOps introduced a new and progressive way of working, and it has kept evolving. Deployments are now built on container technology, which bundles an application together with its dependencies so that the package can run on any platform, regardless of the underlying infrastructure. Handling a few containers running simultaneously is easy enough, but what do you do when you have to manage thousands of containers at the same time without disrupting them?

All of those containerized deployments need to be handled seamlessly, which makes managing them crucial. This is where container orchestrators come in: container orchestration engines let us manage containers across any platform. This article focuses on Docker Swarm, Mesos, and Kubernetes, how they differ, and how to choose between them.

What are Container Orchestration Engines?

Kubernetes, Swarm, and Mesos all belong to a class of DevOps infrastructure management tools known as Container Orchestration Engines (COEs). These engines act as an abstraction layer between the underlying resources and the containerized applications that run on them.

COEs bind all of the resources in a data center into a single pool. That pool can then be used to deploy all kinds of applications, from a single service to large-scale data ingestion and processing.

Each tool comes with its own feature set, but they share the high-level features listed below.

  • Container scheduling covers several functions: starting and stopping containers, distributing containers across the pooled resources, recovering failed containers, rebalancing containers from failed hosts onto healthy ones, and scaling applications, either manually or automatically.
  • High availability ensures that both the application containers and the orchestration tooling itself keep running when individual components fail.
  • Health checks verify that a container or application is functioning correctly.
  • Service discovery lets services and applications find each other quickly, which is essential in a distributed environment where workloads may run anywhere in the cluster.
  • Load balancing distributes incoming requests, whether they are generated internally within the cluster or by external clients.
  • Storage support attaches various storage types (network or local) to containers in the cluster.

Each orchestration engine also provides quite a bit of functionality beyond the features listed above, and interest in the individual tools has shifted considerably over the years.

Docker vs Mesos vs Kubernetes

1. Docker Swarm

In 2015, Docker released Swarm, its native container orchestration engine, written in Go. Swarm mode has been built into the Docker Engine since version 1.12, which is the recommended way to use it. Because Swarm is tightly integrated with the Docker API, it works with Docker seamlessly: the same primitives you use on a single Docker host also work across a Swarm cluster, so there is no separate orchestration engine to install and configure.

Swarm uses a YAML-based deployment model built on Docker Compose. Beyond that, Swarm supports auto-healing of clusters, overlay networks with DNS-based discovery, high availability through multiple managers, and more.

However, Swarm does not provide native auto-scaling or external load balancing. If you want to scale, you have to do it manually or with third-party solutions. Swarm does support ingress load balancing, but external load balancing still requires third-party tools.

Docker Swarm components

  • Managers: the control layer, which should be made redundant in your architecture. Each manager is deployed on its own node.
  • Discovery: the state and service discovery layer. You can run the discovery services on the manager nodes or on a separate set of nodes; either way, keep them redundant.
  • Workers: the layer where your end services actually run. You can add as many worker nodes as you need; this is where the cluster grows horizontally.
  • Services: the layer in which tasks and services are deployed.
  • Workloads: the Docker containers and commands contained in a service.

Container support

Docker Swarm has long supported running Dockerized containers on Linux. In February 2017, it added support for running Dockerized containers on Windows systems as well.

Service composition

Docker Swarm services are defined with Docker Compose files. These YAML files are the same ones used to bring up containers on a single machine, and in Swarm the same definition can be run across several machines.
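
As an illustration, here is a minimal sketch of a Compose-style stack file that could be deployed to a Swarm with the docker stack deploy command. The service name "web" and the nginx image are placeholders; adjust them to your own application.

    # stack.yml: a minimal stack file (hypothetical service name and image)
    version: "3.8"
    services:
      web:
        image: nginx:1.25          # any image your application uses
        ports:
          - "8080:80"              # published through Swarm's ingress routing mesh
        deploy:
          replicas: 3              # Swarm spreads three tasks across the nodes
          restart_policy:
            condition: on-failure  # failed tasks are rescheduled automatically

Deploying it with "docker stack deploy -c stack.yml mystack" creates the service on the cluster. The same file also works with Docker Compose on a single machine, where the deploy section is simply ignored.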

Service discovery

Swarm uses a built-in DNS service so that services can be discovered by name. Services can be exposed in ingress mode, in which every host in the Swarm maps the same published port to the running service.

2. Mesos

Mesos version 1.0 was released in July 2016, but the project was originally created by PhD students at UC Berkeley. Mesos is written in C++, which sets it apart from the other container orchestration engines, and it takes a distributed approach to managing data center and cloud resources. You can run multiple masters on Mesos, using ZooKeeper to track cluster state across them.

You can run other container management frameworks on top of Mesos, including Kubernetes and Marathon. Mesosphere DC/OS, a well-known distributed datacenter operating system, is also based on Apache Mesos. Mesos takes a modular approach to managing containers, giving users more control over scalability and over the types of applications they can run.

Mesos can scale to thousands of nodes and is used by large companies such as Twitter and eBay. Apple also has a proprietary Mesos-based framework, called Jarvis, that powers Siri.

Its most important features include support for several container engines, the ability to run multiple operating systems, and an interactive web UI. It does, however, have a steeper learning curve than the other COEs.

Mesos components

  • Master: the control layer that manages every container task.
  • Slaves (agents): the nodes that run the workload and handle the deployment of every service.
  • Service discovery: provided through Mesos-DNS or Marathon-lb.
  • Load balancing: Marathon-lb, an HAProxy-based load balancer, distributes the workload and routes around failures.
  • Constraints: restrictions that give you fine-grained control over where applications are deployed.
  • Metrics: monitoring information exposed to third-party components through the REST API.
  • Applications: the deployed services and pods.
  • REST API: the functions exposed via Mesos/Marathon REST calls.

Container support

Mesos can run different frameworks, including Kubernetes itself. You can run containers directly on Mesos, but if you want a smoother workflow for deploying applications inside containers, you will want a container-centric framework such as Kubernetes on top of it.

Service discovery

Mesos relies on additional components for service discovery. It does not provide service discovery on its own, but tools in its ecosystem, such as Mesos-DNS, and frameworks running on top of it, such as Kubernetes, can.

3. Kubernetes

Google launched Kubernetes in June 2014. Written in Go, it is an open-source project built on Google's experience running containers, and it is maintained by a large and active community. Kubernetes has extensive support for Docker as its container engine. Its deployment model is YAML-based and handles scheduling containers across hosts, along with many other features.

Major Kubernetes features include auto-scaling, load balancing, volume management, and secret management. You also get a web UI for managing and troubleshooting the cluster, so Kubernetes can run on its own without third-party support. Like Swarm, Kubernetes exposes workloads through services, but its basic scheduling unit is the pod, whereas Mesos works in terms of applications.

You can even configure the Kubernetes master as a highly available cluster. Kubernetes has a steeper learning curve and takes more effort to install and configure than some other COEs, yet it is still the most widely adopted container orchestration engine on the market.

Kubernetes components

  • Master: the base of the orchestrator, which runs and exposes the Kubernetes API. All management tasks go through this API.
  • Discovery layer: an etcd-based key/value store in which all components are registered. The etcd services typically run on the same hosts as the Kubernetes master.
  • Nodes (minions): where most of the workload runs; services and pods are scheduled onto these nodes.
  • Labels and selectors: the mechanism for organizing objects; they also determine which pods a replication controller or replica set manages.

Composition

Kubernetes has a base unit called the pod, which consists of one or more containers. The containers in a pod are scheduled onto the same host, where they can communicate with each other over the loopback interface, and the pod is updated as a group. All Kubernetes resources are defined in configuration files written in YAML or JSON, and the kubectl command sends those files to the Kubernetes cluster.
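
To make this concrete, here is a minimal sketch of a Deployment manifest; the name "web", the label app: web, and the nginx image are illustrative placeholders, not anything prescribed by Kubernetes.

    # deployment.yaml: a minimal Deployment (names and image are illustrative)
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web
    spec:
      replicas: 3                  # desired number of pod replicas
      selector:
        matchLabels:
          app: web                 # must match the pod template's labels
      template:
        metadata:
          labels:
            app: web
        spec:
          containers:
            - name: web
              image: nginx:1.25    # a single container per pod in this sketch
              ports:
                - containerPort: 80

Sending it to the cluster is a single command: kubectl apply -f deployment.yaml.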

Service Discovery

Kubernetes comes with a cluster DNS service that provides service discovery. Running services can be exposed in several ways: internally only, through HTTP ingress, through a node port opened on every machine, or by mapping to an external load balancer on the various cloud platforms.
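
As a sketch, the Service below exposes pods labelled app: web (a hypothetical label matching the earlier Deployment example) on a fixed port of every node; on a supported cloud platform the type could be changed to LoadBalancer instead.

    # service.yaml: exposes pods carrying the label app=web
    apiVersion: v1
    kind: Service
    metadata:
      name: web
    spec:
      type: NodePort               # or LoadBalancer on cloud platforms
      selector:
        app: web                   # traffic is routed to pods with this label
      ports:
        - port: 80                 # cluster-internal port of the Service
          targetPort: 80           # container port the traffic is forwarded to
          nodePort: 30080          # optional; must fall in the 30000-32767 range by default

Inside the cluster the Service is also reachable by name through the cluster DNS, for example as web.default.svc.cluster.local.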

Derivative works

The most important derivative works are OpenShift Origin by Red Hat and Tectonic by CoreOS. Both projects are built on Kubernetes and draw on offerings from its community.


Conclusion

There is no doubt about the importance of container technology in the DevOps process; many companies have benefited from adopting it. Running and managing large projects on complex infrastructure has become much easier for developers, thanks to container technology and the various container orchestration engines.

This article has covered the three widely used container orchestration engines and their features and functionality, giving you a clear picture of which COE to use and when.

Mesos vs. Kubernetes

Today, almost every company depends on container technology. Containers are small packages that bundle an application together with its dependencies so that it can run in any environment, irrespective of the underlying system and infrastructure. This has changed the way the industry works and is well established in the DevOps process. Managing a small number of containers by hand is feasible, but when your company grows and needs to scale its applications accordingly, handling thousands of containers becomes difficult.

This is where container orchestration comes into the picture, providing the infrastructure to manage thousands of containers by scheduling them appropriately. When we talk about orchestration, there are two main competitors on the market: Mesos and Kubernetes.

Deciding between the two is difficult, as each has its own strengths. In this article, we will look at what each platform offers so that you can choose one based on your requirements.

What is Kubernetes?

Google launched Kubernetes, also known as K8s, in 2014 as a container orchestration tool. It is a container orchestration platform well suited to cloud-native computing. Google also offers it as a managed Container-as-a-Service product, Google Kubernetes Engine (originally named Google Container Engine).

Kubernetes is an open-source system that automates the deployment, scaling, and management of containerized applications. It is supported on many other platforms, such as OpenShift and Azure. Its simple, modular API core gives users a powerful tool for container orchestration.

Architecture

The two main parts of this architecture are the Kubernetes Master and the Kubernetes Nodes. We will discuss each in detail below.

Kubernetes Master: it manages and maintains the desired state of the cluster and manages all of the cluster nodes. The Master consists of three processes.

  • kube-apiserver: this service manages the entire cluster. It serves the REST operations and validates and updates Kubernetes objects, handling authentication and authorization.
  • kube-controller-manager: the daemon that embeds Kubernetes' core control loops. It makes whatever changes are required to move the current state of the cluster toward the desired state.
  • kube-scheduler: this service watches for unscheduled pods and binds each one to a suitable node based on the requested resources and other constraints.

Kubernetes Nodes: the machines on which the containers actually run. Every node is bundled with the services necessary for running containers.

  • kubelet: the node agent that ensures the containers described in pod specs are running and healthy.
  • kube-proxy: a network proxy that runs on every node and performs simple TCP, UDP, and SCTP stream forwarding or round-robin forwarding across a set of backends.
  • container runtime: the software that actually runs and manages the containers in each pod. Kubernetes supports several container runtimes, the most widely used being the Docker runtime.

Kubernetes Objects

These are persistent Kubernetes entities that reflect the state of the cluster at any point in time. The most commonly used objects are listed below.

  • Pods: the basic execution unit of Kubernetes, containing one or more containers. Every container within a pod is hosted in the same environment.
  • Deployment: deploys pods within the Kubernetes system and continuously reconciles their current state with the desired state.
  • Services: provide an abstract way of exposing a group of pods, where the grouping is based on selectors that target the pod labels (see the sketch after this list).
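
The sketch below shows how that label-based grouping works in practice; the names api-pod and api, the label app: api, and the image are hypothetical. The Service selects every pod carrying the label, no matter how the pod was created.

    # An illustrative Pod and Service pair; the grouping is purely label-based
    apiVersion: v1
    kind: Pod
    metadata:
      name: api-pod
      labels:
        app: api                   # the label the Service selector matches on
    spec:
      containers:
        - name: api
          image: nginx:1.25
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: api
    spec:
      selector:
        app: api                   # selects all pods labelled app=api
      ports:
        - port: 80
          targetPort: 80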

What is Mesos?

Mesos is an open-source cluster manager developed at UC Berkeley. It provides APIs for resource management and scheduling across the cluster, and it can manage both containerized and non-containerized applications in a distributed way. It scales easily to very large clusters with thousands of hosts.

Mesos takes a distributed approach to cluster management and offers great flexibility when scaling applications. It can run many container management frameworks simultaneously, including Kubernetes, Apache Aurora, Mesosphere Marathon, and more. It abstracts the resources of the data center into a single pool, makes it possible to colocate diverse workloads, helps automate day-two operations, and is highly extensible for running new kinds of applications.

Architecture

The Mesos architecture comprises the Master, the Agents, and the application Frameworks. We will discuss each in detail.

Frameworks: distributed applications, such as Hadoop or Storm, that manage the execution of tasks or workloads. A Mesos framework consists of two main components:

  • Scheduler: registers with the master so that the master can start offering it resources.
  • Executor: the process launched on the agent nodes that is responsible for running the framework's tasks.

Mesos Agents: the agents actually run the tasks. Each agent reports its available system resources, including CPU, memory, and storage, to the master. Once an agent receives a task from the master, it allocates the required resources to the framework's executor.

Mesos Master: schedules tasks received from the frameworks onto one of the available agent nodes. The master offers resources to the frameworks, and each framework's scheduler then uses those resources to run its tasks.

Characteristics of Kubernetes and Mesos

  • Initial release date: Kubernetes in July 2015, with v1.16 in September 2019; Mesos/Marathon in July 2016, with a stable release in August 2019
  • Deployment format: YAML for Kubernetes; its own unique format for Mesos/Marathon
  • Stability: Kubernetes is quite mature and stable with consistent updates; Mesos/Marathon is mature
  • Design philosophy: Kubernetes is based on pods and resource groupings; Mesos/Marathon is based on Linux cgroups (control groups)
  • Images supported: Kubernetes supports Docker and rkt; Mesos/Marathon supports mostly Docker
  • Learning curve: steep for both

Summary of differences between Mesos and Kubernetes

  • Types of workloads: Kubernetes targets cloud-native applications and containerized workloads; Mesos handles big data, cloud-native applications, and both containerized and non-containerized apps
  • Application scalability constructs: in Kubernetes, each application layer is specified as pods, which can be scaled easily, manually or automatically; Mesos scales an individual application group along with its dependencies
  • High availability: Kubernetes distributes pods among the worker nodes; Mesos distributes applications among the slave (agent) nodes
  • Load balancing: in Kubernetes, a load balancer can expose the pods; in Mesos, applications are reachable via Mesos-DNS, which acts as a load balancer
  • Purpose: Kubernetes is ideal for new users and companies; Mesos is suitable for large systems
  • Service discovery: Kubernetes pods use intra-cluster DNS to look up services; Mesos uses DNS or a reverse proxy to look up services

Difference between Mesos and Kubernetes

We now have enough context on both container orchestration platforms, but they differ in many respects, and without understanding those differences we cannot make an informed choice. The major differences between Mesos and Kubernetes are outlined below.

Supported workloads

With Mesos, you can handle both containerized and non-containerized workloads, although exactly what is supported depends on the framework you use to run the applications; some frameworks, such as Marathon, support containerized applications seamlessly.

Kubernetes, on the other hand, is built specifically for containerized workloads and is typically used with Docker containers. It does not yet support other workload types, though that may change in the future.

Scalability

With Mesos, scaling is supported through the user interface, which lets you scale application groups automatically along with all of their dependencies.

Kubernetes, on the other hand, runs everything inside pods, which are easy to scale; that is why pods are usually specified as part of deployments. You can scale them manually or automatically, depending on the task.
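
Manual scaling is a one-liner (for example, kubectl scale deployment web --replicas=5). For automatic scaling, a HorizontalPodAutoscaler can adjust the replica count based on observed metrics; the sketch below assumes a hypothetical Deployment named "web" and a cluster with a metrics pipeline such as metrics-server installed.

    # hpa.yaml: automatic scaling for a (hypothetical) Deployment called web
    apiVersion: autoscaling/v2
    kind: HorizontalPodAutoscaler
    metadata:
      name: web
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: web
      minReplicas: 2
      maxReplicas: 10
      metrics:
        - type: Resource
          resource:
            name: cpu
            target:
              type: Utilization
              averageUtilization: 70   # add replicas when average CPU exceeds 70%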

Handling high availability

Marathon distributes application instances across the Mesos agents to ensure availability. In the same way, Kubernetes distributes pods across several nodes to ensure availability.

Upgrades and rollbacks

Any change to an application definition is treated as a deployment. A deployment can start, stop, run, and scale applications. With Mesos, you can roll out new versions and roll back again when needed; it only requires running a deployment with the updated definition.

Kubernetes also supports upgrading and rolling back deployments: the old pods are replaced with new pods built from the new definition. Kubernetes keeps the rollout history by default, making it easy to roll back to an older version if required.
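
As a sketch of how such an upgrade looks in practice, changing the image tag in a Deployment definition and re-applying it triggers a rolling update, and the retained history allows a rollback with kubectl rollout undo. The Deployment name, labels, and image tags below are illustrative.

    # Re-applying this file after bumping the image tag performs a rolling update
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web
    spec:
      replicas: 3
      revisionHistoryLimit: 10      # how many old ReplicaSets to keep for rollbacks
      selector:
        matchLabels:
          app: web
      strategy:
        type: RollingUpdate
        rollingUpdate:
          maxUnavailable: 1         # at most one pod down during the update
          maxSurge: 1               # at most one extra pod above the desired count
      template:
        metadata:
          labels:
            app: web
        spec:
          containers:
            - name: web
              image: nginx:1.26     # bumped from 1.25 to roll out a new version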

Logging and monitoring

With Mesos, you can easily inspect all the components of the cluster. It provides data on health and other important metrics, and various APIs can be used to query and aggregate that data. External tools can also be used to collect metrics.

Kubernetes, in turn, exposes detailed information about all of its objects but relies on external tools for gathering metrics.

Conclusion

Containerization keeps evolving and has made working with critical infrastructure that runs many containers and applications far easier. Container orchestration helps handle that volume of containers seamlessly, so it is important to choose the right tool for your environment. We have covered the two main competing orchestration tools on the market; which one you choose and implement within your company is up to you.
