Comparing Cloud deployment to Kubernetes and Docker deployment

Hi Docker Community,

I have a newbie question for you. I have been working as a MuleSoft developer on cloud deployment for a few years now and understand the basics of application deployment, workers, VPCs, load balancers, scaling and so on. But I really want to understand Kubernetes, Docker and container deployments and have been googling for a while. So my questions are:

  1. I see Ubuntu mentioned in many explanations. Why is that?
  2. Can you deploy only one application per container? I know you can only deploy one app per worker.
  3. Are Kubernetes and Docker only used on-premises, or can they also be used for cloud deployment?
  4. I know cloud deployment does not support domain applications and resource sharing concepts, so is this something that can only be achieved with Kubernetes?
  5. How do you enhance your application's performance with regard to vertical and horizontal scaling? I mean, is it the same as cloud, where you increase the worker number or vCore size?
  6. Does each environment (Dev, Test) have its own Kubernetes and Docker? I am guessing yes, but I am not sure.

I know these might be a lot of basic questions to ask, but answers would really help me out.

Thanks for your attention. I’m looking forward to your reply.

Here are some presentations to get started with k8s (link).

Probably, because it’s popular, easy to use and has a huge community?

I am not sure what your intention is with this question, but you deploy containers based on images. Typically, an image encapsulates a single application component or service, and all its dependencies. Often applications consist of multiple containers that communicate over the network.
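As an illustration of "one component per container, multiple containers per application", here is a minimal Docker Compose sketch. The service names and the `demo-web` image are made up for the example; only `postgres:16` is a real public image:

```yaml
# Hypothetical docker-compose.yml: one application component per container
services:
  web:                      # the application itself, in its own container
    image: demo-web:1.0     # placeholder image name
    ports:
      - "8080:8080"
    depends_on:
      - db
  db:                       # a second container the app talks to over the network
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: example
```

The `web` container reaches the database over the Compose network by its service name (`db`), which is how multi-container applications typically communicate.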

Please define what you mean by worker. A Kubernetes/Swarm node that runs workloads is typically called a worker. Such a node can run multiple containers, and so can plain docker-ce.

What is the motivation of this question? Technically and legally both can be operated wherever you want.

If you plan to run Kubernetes on a hyperscaler: do not set up the control plane on compute nodes yourself. All hyperscalers provide a managed Kubernetes service (e.g. EKS on AWS, or AKS on Azure), which provides the k8s control plane cheaper, more reliably/resiliently, and with far less effort.

Please be more specific about what you mean by domain application. You can set resource constraints for CPU and RAM of a container, regardless of whether it's Docker, Swarm or Kubernetes.
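For reference, this is roughly what per-container resource constraints look like in a Kubernetes Pod spec. The name, image and values are made up for the sketch:

```yaml
# Hypothetical Pod spec: CPU/RAM constraints per container
apiVersion: v1
kind: Pod
metadata:
  name: demo-app          # placeholder name
spec:
  containers:
    - name: demo-app
      image: demo-app:1.0 # placeholder image
      resources:
        requests:         # guaranteed minimum the scheduler reserves
          cpu: "250m"     # a quarter of a CPU core
          memory: "256Mi"
        limits:           # hard cap enforced at runtime
          cpu: "500m"
          memory: "512Mi"
```

Plain Docker offers the equivalent via flags, e.g. `docker run --cpus="0.5" --memory="512m" demo-app:1.0`.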

Horizontal scaling: use replicas to scale out the deployment (of course your application must be able to handle running multiple replicas at the same time!)
Vertical scaling: set larger values for the resource constraints to scale up the deployment
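As a sketch of both directions (the Deployment name and image are assumptions), scaling in Kubernetes boils down to the `replicas` count and the resource values:

```yaml
# Hypothetical Deployment fragment illustrating both scaling directions
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app
spec:
  replicas: 3              # horizontal: raise this to scale out
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
        - name: demo-app
          image: demo-app:1.0
          resources:
            limits:
              cpu: "500m"      # vertical: raise these to scale up
              memory: "512Mi"
```

Imperatively, `kubectl scale deployment demo-app --replicas=5` achieves the same horizontal scaling without editing the manifest.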

It’s up to you and your organizational constraints. Usually, compliance regulations forbid mixing non-prod and prod workloads. The recommendation is to use a separate cluster instance for each environment.
