When using containers to deploy applications, most of the time you want those containers to be 'data stateless'. The container should not store any important data inside it, as that data could be lost if the container is accidentally killed, crashes, or is simply removed. On the other hand, if you want to run tools like Jenkins or GitLab, then you most definitely want persistent data. I have a homelab set up at home running VMware's ESXi, nothing over the top and within a student-ish budget:
- Supermicro X10SLL-F
- Intel Xeon E3-1231 v3
- 32GB RAM
- 2x 4TB WD RED (NAS and backup)
- 1x 2TB WD RED (CoreOS data)
- 1x 120GB Kingston SSD (Virtual machines)
The two main virtual machines I run at the moment are FreeNAS and CoreOS.
Setting up the storage itself was simple. I did run into permission problems later on; it works now, though I am not entirely sure my fix is 100% correct. After creating the storage, go to the Sharing tab and add a new NFS share. Here you select which disk you want to share; note the actual path of the disk, e.g. '/mnt/coreos', as you will need it to attach the share. Sharing the disk over NFS means I can access the containers' data without having to, say, SSH into CoreOS, or resort to some other workaround where ESXi attaches the disk directly to CoreOS.
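Under the hood, an NFS share on FreeNAS corresponds to an entry in the FreeBSD-style exports file. As a rough sketch of what the UI generates (the subnet and the maproot option are assumptions, not my exact settings), an export for '/mnt/coreos' might look like:

```
/mnt/coreos -maproot=root -network 192.168.1.0/24
```

The `maproot` option is worth knowing about when debugging permission problems like the ones I hit: without it, requests from root on the client are squashed to an unprivileged user on the server, which can make container-written files unreadable or unwritable.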
Setting up the NFS share in the cloud-config for CoreOS was actually really simple; I've attached a gist below of my current config.
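For illustration, a minimal cloud-config that mounts an NFS share looks roughly like the following. This is a hedged sketch, not my actual gist: the FreeNAS IP address is hypothetical, and note that systemd requires the unit name to match the mount path ('/mnt/coreos' becomes 'mnt-coreos.mount'):

```yaml
#cloud-config
coreos:
  units:
    # Unit name must mirror the Where= path: /mnt/coreos -> mnt-coreos.mount
    - name: mnt-coreos.mount
      command: start
      content: |
        [Unit]
        Description=NFS share from FreeNAS

        [Mount]
        # 192.168.1.10 is a placeholder for the FreeNAS VM's address
        What=192.168.1.10:/mnt/coreos
        Where=/mnt/coreos
        Type=nfs
```

Once the share is mounted on the host, a container can use it via a plain bind mount, e.g. `docker run -v /mnt/coreos/jenkins:/var/jenkins_home jenkins`, so the container itself stays data stateless.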
A benefit of doing persistent storage this way is that I can spin up a few more CoreOS virtual machines and move containers between hosts without losing any data. The next thing planned is getting Kubernetes on lockdown; I've been playing around with Minikube, but now I want to run it on my homelab.