It looks like VMware vSphere (with vCenter?) can allocate resources (vCPU, for instance) across hosts for a single VM. Is this possible with Kubernetes for containers?
Can Docker Swarm pool vCPU between multiple hosts/nodes for one container?
I am really intrigued by this statement:
“You can dynamically change resource allocation policies. For example, at year end, the workload on Accounting increases, which requires increasing the Accounting resource pool reserve from 4GHz to 6GHz. You can make the change to the resource pool dynamically without shutting down the associated virtual machines.”
Each physical host is 4GHz, but this doc says it can pull 2GHz out of the second host. Is this possible because of ESXi?
My general experience with Docker and clustering software is that it’s better to build smaller containers that perform single tasks, and then run many replicas of that container. A past project I worked on implemented a work queue, and our version of the worker tried to run as many concurrent threads as the host had CPUs; when we ported it to Docker, it became much easier to run the worker single-threaded and instead run many copies of it. In Kubernetes it is easy to change the number of replicas of a given “pod” (kubectl scale deployment).
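As a sketch of that pattern (the `worker` name and image are hypothetical, not from the original project), a Deployment of single-threaded workers might look like:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: worker                        # hypothetical name
spec:
  replicas: 1                         # start with one copy
  selector:
    matchLabels:
      app: worker
  template:
    metadata:
      labels:
        app: worker
    spec:
      containers:
      - name: worker
        image: example/worker:latest  # hypothetical image
```

Scaling out is then just `kubectl scale deployment worker --replicas=10`, which runs ten independent single-threaded copies rather than one big multi-threaded process.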
None of the Docker-based solutions I’m personally familiar with have any sort of physical resource pooling or the ability to live-migrate processes across nodes. In terms of system architecture, the ideal I’ve found is to keep all state stored somewhere “outside the cluster” and to run many truly stateless containers that depend on your external database/job queue/…
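To make the contrast with vSphere concrete: in Kubernetes, a container’s CPU request has to fit on a single node. The scheduler picks a node that can satisfy it, but it never pools CPU across nodes, and the limit is enforced per node. A minimal sketch (pod name and image are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cpu-bound-worker            # hypothetical name
spec:
  containers:
  - name: worker
    image: example/worker:latest    # hypothetical image
    resources:
      requests:
        cpu: "2"   # scheduler must find ONE node with 2 free cores
      limits:
        cpu: "2"   # throttled at 2 cores; cannot borrow from other nodes
```

If no single node has 2 free cores, the pod just stays Pending; there is no mechanism to combine 1 core from each of two hosts the way the vSphere resource pool example does.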