Just to give some additional details on what Douwe said:
(1) External Storage:
If you just want to mount a block device from your SAN, my recommendation is to perform the installation as normal using local storage. Once it’s all done, stop docker on the node, move /var/lib/docker to /var/lib/docker-old, and mount the remote block device at /var/lib/docker. Then copy the contents of docker-old into /var/lib/docker (and make sure the mount is in fstab so it persists across reboots), and restart docker.
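A rough sketch of those steps, assuming the SAN device is already attached (the device name /dev/sdb1 and ext4 filesystem are placeholders for your environment):

```shell
# Stop docker so /var/lib/docker is quiescent before we move it
systemctl stop docker

# Move the existing data aside and mount the SAN block device in its place
mv /var/lib/docker /var/lib/docker-old
mkdir /var/lib/docker
mount /dev/sdb1 /var/lib/docker        # placeholder device name

# Copy everything back onto the new device, preserving ownership/attrs
cp -a /var/lib/docker-old/. /var/lib/docker/

# Persist the mount across reboots, then bring docker back up
echo '/dev/sdb1 /var/lib/docker ext4 defaults 0 2' >> /etc/fstab
systemctl start docker
```

Once docker is back up and you’ve confirmed containers start cleanly, /var/lib/docker-old can be deleted.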
If you want shared storage for containers, we have support for NFS. With that, all docker settings and images are still stored locally, but container volumes are mounted directly via NFS. There’s nothing you need to do on the nodes directly; just make sure they can reach the NFS server. You configure it in the admin: Settings → Regions → (click manage on the availability zone) → wrench icon. Scroll down to Volume storage driver, choose NFS, then scroll down to the NFS storage options and add IPs accordingly. The NFS remote path is the path on the NFS server. Make sure the NFS exports are set up correctly there as well.
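A quick sanity check that a node can actually reach and use the export (the server IP 10.0.0.50 and path /exports/volumes below are placeholders for your environment):

```shell
# From a node: list what the NFS server is exporting
showmount -e 10.0.0.50

# Try a throwaway mount to confirm the node can read and write it
mkdir -p /mnt/nfs-test
mount -t nfs 10.0.0.50:/exports/volumes /mnt/nfs-test
touch /mnt/nfs-test/write-test && rm /mnt/nfs-test/write-test
umount /mnt/nfs-test
```

If the write test fails, check the export options and squash settings on the NFS server before pointing the platform at it.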
(2) Horizontal Scaling:
If the container you’re scaling horizontally has local volumes, then all horizontally scaled containers will be on the same node.
If the container has no volumes, or its volumes are accessible via NFS, its instances will be placed evenly throughout the availability zone.
(3) Capacity Management:
Placement is (mostly) automated by CS. We try to be somewhat smart about how we do it, but you can control some high-level policies:
Fill by QTY or Resources allocated: evenly distribute containers based on how many are present on a node, or take the package size and place containers based on the resources already allocated on that node.
Fill until a node is full, or evenly distribute across nodes (balance).
Both of those settings are configurable under Settings → Regions → wrench icon.
(4) CPU / Memory:
Under the hood it’s just docker (using cgroups), so unlike a hypervisor, you can’t really allocate a specific core, or even pin a container to a single core. What we do is kind of cheat and use a CPU share system. It’s the same thing as running docker from the CLI and passing the --cpus=1 flag: we make a best-effort estimate to limit a container to approximately X cores. This is why you can have packages with fractional cores; it’s all shares / time on the CPU in nanoseconds under the hood.
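To illustrate, docker’s --cpus flag is just sugar over the CFS quota/period pair: the fractional core count is multiplied by the scheduling period to get a time budget. A small sketch of that arithmetic (100000µs is docker’s default CFS period):

```shell
#!/bin/sh
# --cpus=1.5 is equivalent to --cpu-period=100000 --cpu-quota=150000:
# the container may consume 150000us of CPU time per 100000us window,
# i.e. ~1.5 cores' worth of time, but on no particular physical core.
CORES="1.5"
PERIOD=100000   # docker's default CFS period, in microseconds
QUOTA=$(awk -v c="$CORES" -v p="$PERIOD" 'BEGIN { printf "%d", c * p }')
echo "--cpus=$CORES  =>  --cpu-period=$PERIOD --cpu-quota=$QUOTA"
```

So a package with 0.5 cores simply gets a quota of half a period; nothing stops its threads from hopping between physical cores.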
(5) Maintenance Mode:
With local storage, no, there is no way to do that.
With NFS, you can put a node into maintenance mode, which will evacuate the node automatically.
In practice, most CS users run the platform on a highly available virtualization platform, so the only maintenance you’re really doing that would disrupt the containers is kernel and docker updates, which are pretty quick. For hardware maintenance, our users just live-migrate the VM to another host.