Hi,
I'm sorry I don't post much, and when I do my comments tend to be a bit short, which causes confusion. I hope this clears things up and is helpful.
To give some context: there seems to be a lack of clarity around K8s. Let's just say, for this post's purposes, I am a _user_ of K8s. That is, I don't run the cluster at all, I don't manage its storage; I operate as a user without full admin access.
I also work at VMware, which has the whole Tanzu thing going on, so internal IT set up a K8s cluster for production hosting (maybe it was dogfooding or something; I'm not really involved with that team, user only as I said).
I got frustrated figuring out how to use it, though. A lot of the online material is about setting up the K8s infrastructure/cluster rather than about using it as a dev.
Hopefully this makes what I said about Redis being practically zero config a bit clearer.
To explain the setup: I write a Python app, and I use Docker Compose with a single docker-compose.yaml. This YAML gives the build instructions for my app and, minimally (by which I mean with absolutely minimal config options), also sets up a Postgres DB, RabbitMQ, and Redis.
I consider this minimal because most of it is boilerplate and I'm just configuring its resources.
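A sketch of what that compose file looks like (the service names, image tags, and the Kompose size label here are illustrative, not my actual file):

```yaml
version: "3"
services:
  app:
    build: .                      # build instructions for the Python app
    depends_on: [postgres, rabbitmq, redis]
  postgres:
    image: postgres:14
    environment:
      POSTGRES_PASSWORD: example  # placeholder, not a real credential
    volumes:
      - ~/.docker-conf/postgres/data/:/var/lib/postgresql/data/
  rabbitmq:
    image: rabbitmq:3-management
  redis:
    image: redis:7
    volumes:
      - ~/.docker-conf/redis/data/:/data/
    labels:
      kompose.volume.size: 1Gi    # hint for the PVC kompose will generate
```

Everything except the `app` service is stock images plus a volume mapping, which is what I mean by boilerplate.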
So for dev, when I want a local spin-up, I run docker-compose build and docker-compose up.
The Redis volume mapping, for example:

volumes:
  - ~/.docker-conf/redis/data/:/data/

gives it persistent storage across builds and deploys.
So to my mind this was pretty easy so far. Then I looked at what was needed to deploy on K8s and I nearly puked. Sorry, there was no way I was touching that mess of YAML.
So I just use kompose convert, which takes the single docker-compose.yaml and auto-generates all the little YAML babies needed for deploying to staging and prod (the K8s cluster).
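The convert step is essentially one command (the output folder name `k8s/` is my choice here, not anything special):

```shell
#!/bin/sh
# Generate K8s manifests from the single compose file.
# -f picks the compose file, -o puts every generated yaml in one folder.
kompose convert -f docker-compose.yaml -o k8s/
```

After this, `k8s/` holds one Deployment/Service (and, where volumes are declared, PersistentVolumeClaim) yaml per compose service.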
That's it. As for the persistent storage I use, I define that with Kompose labels in the main docker-compose.yaml, the goal being a single file that configures everything.
When I kill the deployment I just don't kill the persistent disks, and then I deploy all the auto-generated YAMLs at once (including the persistent disk claims, which won't overwrite the disks if they already exist).
(Note: I haven't made any edits to these YAMLs; they're all auto-generated.)
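The deploy and kill scripts are roughly this (a sketch; `k8s/` is my folder of kompose-generated yamls, and I'm assuming kompose's usual `*-persistentvolumeclaim.yaml` naming for the PVC files):

```shell
#!/bin/sh
# Deploy: apply every auto-generated yaml at once.
# Re-applying an unchanged PVC manifest leaves the existing claim
# (and its disk) alone, so data survives redeploys.
kubectl apply -f k8s/

# Kill: delete everything in the folder EXCEPT the PVCs,
# so the persistent disks and their data stay behind.
for f in k8s/*.yaml; do
  case "$f" in
    *persistentvolumeclaim*) ;;     # skip PVCs, keep the disks
    *) kubectl delete -f "$f" ;;
  esac
done
```

The whole trick is just that asymmetry: apply everything, delete everything except the claims.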
To me this is minimal config compared to integrating with a third party outside my app network (needing corporate firewall exceptions, etc.). There are internally hosted Redis offerings, I think, but even then I took a pass, since the above is just very easy and neat.
Also the persistent disks are all backed up in the background.
I guess "it's minimal config when you already have a nice K8s setup ready to use" would be a fairer statement :)
Hope this is useful to someone just trying to get a damn app running!