#k8s
Replied in thread

@rjbs YAML is really great for simple configuration: it's succinct and very readable if it fits into a viewing window about 80 chars wide, where every screenful of data (40 or so lines) contains a top-level item, i.e. no top-level element is more than 40-ish lines.

Once it gets more complicated than that (wtaf #k8s??), it's an exercise in pain and suffering.
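
A contrived illustration of that threshold (both snippets hypothetical): the first block is the pleasant case, the second is the kind of nesting being complained about, where a single leaf value sits seven levels deep.

```yaml
# Pleasant: flat, one screenful, every top-level item visible at a glance
server:
  host: 0.0.0.0
  port: 8080
log_level: info

# Painful: k8s-style depth, one setting buried under seven parents
spec:
  template:
    spec:
      containers:
        - name: app
          env:
            - name: LOG_LEVEL
              valueFrom:
                configMapKeyRef:
                  name: app-config
                  key: log_level
```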

Anybody worked out if it's possible to access AWS Certificate Manager certs in EKS Kubernetes as a TLS Secret? (I need to terminate TLS in the pod, not the LoadBalancer, to access SNI.)

It feels like it should be possible with the Secrets Store CSI driver with the AWS plugin, but it looks like it only has access to AWS Secrets Manager. I don't really want to have to export and import every time the certs need renewing.

#TLS #AWS #EKS
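
For what it's worth, a sketch of the Secrets-Manager route the AWS provider does support, assuming the cert and key have already been exported into a hypothetical Secrets Manager entry (which is exactly the manual step the post wants to avoid):

```yaml
# Hedged sketch: sync a Secrets Manager entry into a kubernetes.io/tls
# Secret via the Secrets Store CSI driver + AWS provider. All names here
# are hypothetical.
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: app-tls
spec:
  provider: aws
  parameters:
    objects: |
      - objectName: "app/tls"       # hypothetical Secrets Manager secret
        objectType: "secretsmanager"
        jmesPath:
          - path: cert              # JSON key holding the PEM cert
            objectAlias: "tls.crt"
          - path: key               # JSON key holding the PEM private key
            objectAlias: "tls.key"
  secretObjects:                    # mirror the mounted files into a real Secret
    - secretName: app-tls
      type: kubernetes.io/tls
      data:
        - objectName: "tls.crt"
          key: tls.crt
        - objectName: "tls.key"
          key: tls.key
```

Note the synced Secret only materialises once a pod actually mounts the CSI volume, and nothing here reads from ACM itself, so this sidesteps rather than answers the original question.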

My #homelab wiki is getting really complicated to organise and write for haha, but it's definitely covering more interesting topics, like more #RaspberryPi stuff, #Docker, and some cool stuff like #OpenMediaVault and #HomeAssistant. I'm taking my sweet time to update them 'properly' and hope it'll all link/piece together sensibly in the end.

This is partially thanks to me embracing the fact that I just don't (yet) have the resources for a standalone 'mega' homelab (#Proxmox & #Kubernetes based) server cluster that I could simply throw everything at, hence supplementing that setup with tinier SBC-based servers. It gives me a bit of peace of mind too that things are now more 'spread out'.

The most interesting bit will probably be when I manage to explore replicating a mini version of my #RKE2 Kubernetes cluster on a single (or at most, two) Raspberry Pi nodes - maybe based on #k3s, assuming that's better. I'm just not there yet cos I'm kinda unsure whether using something like #k8s on an RPi makes much sense; I'm expecting a lot of resources to be wasted that way, when hosting on Docker alone (i.e. via #Portainer) should be leaner.
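
On that resource worry, a hedged sketch of slimming k3s down for a single-Pi node by disabling its bundled components (option names per the k3s config file format; trim to taste):

```yaml
# /etc/rancher/k3s/config.yaml: hypothetical slimmed-down single-node setup.
# Each entry under `disable` turns off a packaged k3s component.
disable:
  - traefik          # bundled ingress controller
  - servicelb        # bundled load balancer
  - metrics-server   # resource metrics pipeline
write-kubeconfig-mode: "0644"
```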

🔗 Anyway, if y'all wanna keep an eye on it: https://github.com/irfanhakim-as/homelab-wiki


Kubernetes curly: a deployment with autoscaling, where each pod depends on and occasionally writes to an external database.
To minimise database reads, an in-memory cache is implemented in the application.

However, when a pod writes to the database it should invalidate that key in the cache for all pods.
This works fine for the local cache, but how to distribute that invalidation?

I suppose we could use a statefulset and then hit the service for each other running pod but that seems... messy.
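
One arguably less messy sketch (my assumption, not from the thread): a headless Service gives every pod a peer list via DNS, so the writing pod can fan the invalidation out without switching to a StatefulSet.

```yaml
# Hedged sketch, names hypothetical: a headless Service for peer discovery.
# A DNS A-record lookup of cache-peers.<namespace>.svc.cluster.local
# returns the IPs of all ready pods matching the selector; the writing
# pod can then POST the invalidated key to each peer.
apiVersion: v1
kind: Service
metadata:
  name: cache-peers
spec:
  clusterIP: None          # headless: no virtual IP, DNS returns pod IPs
  selector:
    app: my-app            # hypothetical app label
  ports:
    - name: invalidate
      port: 8080
```

Alternatively, a shared pub/sub channel (Redis, NATS, etc.) achieves the same fan-out without pods addressing each other directly.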

Continued thread

But what their abstraction allowed through was minimal and unprincipled. Had they let their abstraction freely leak into the underlying Linux implementation, but with Docker providing the initial "training wheels" config to get started, I think we would have had more people pushing the envelope. And that would have led Docker to incorporate more networking features at their OS-agnostic layer. A positive feedback loop.

Instead, that feedback loop is found in #k8s and CNI plugins.

12/n

I generally rather enjoy working with #k8s #helm charts to deploy complex server-side apps.

The lack of silly walks is satisfying.
Helm: "May I have my Configuration Burger so I can deploy your app please?"

Me: Here you go! 600 lines of YAML! OPEN WIDE!
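
For scale, a hypothetical bite of that burger: an excerpt of the sort of values file a chart consumes (every name here invented):

```yaml
# Hypothetical excerpt of a chart's values.yaml, fed to the templates with:
#   helm upgrade --install my-app ./chart -f values.yaml
replicaCount: 3
image:
  repository: registry.example.com/my-app
  tag: "1.2.3"
ingress:
  enabled: true
  hosts:
    - app.example.com
resources:
  requests:
    cpu: 100m
    memory: 256Mi
# ...multiply by ~600 lines and you have the burger in question
```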

Only problem is that when you're 300 lines into editing all the values to be correct, it feels like THIS is the point people used to talk about hitting while building electronics projects.

Replied in thread

Putting aside the question of *which* node should be advertising a given service via BGP - *what* would it advertise? Services /can/ have multiple IPs, but that's not usually the case. It's primarily a single ClusterIP indirecting to the backends, right?

Okay so *somehow* the IP gets advertised but what range do you put on it?

The entire service CIDR sure is convenient, but then what? All services hit the same node and get converted to in-cluster IP forwarding? Can you even advertise a range with multiple gateways? Probably. But this is also playing roulette with nodes not having a backend on them.

Even if you limited the route advertisement to only the nodes with backends for the service, it'd be quite a weighty way to do the indirection, and you're now moving that indirection *outside* the cluster - which is cool, but seems to violate the idea that services should be internal-only.
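
On the backend-less-node roulette specifically, a hedged sketch of the usual knob: with `externalTrafficPolicy: Local`, only nodes hosting a ready backend keep the traffic, and BGP speakers such as MetalLB advertise only from those nodes.

```yaml
# Hedged sketch (names hypothetical): keep externally advertised traffic
# on nodes that actually run a backend for the service.
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local   # no inter-node hop; backend-less nodes drop out
  selector:
    app: my-app
  ports:
    - port: 443
      targetPort: 8443
```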

@hugo Halp

Continued thread

So the default `kubernetes` Service has no `selector` in its spec, which, according to the v1 `Service` spec:

> If empty or not present, the service is assumed to have an external process managing its endpoints, which Kubernetes will not modify.

But fetching endpoints (or EndpointSlices, rather) yields none for the default service. That would explain the CNI not doing anything about the Service. It does not explain the lack of service routing for ones that *do* have EndpointSlices, though.

Am I missing some Cilium option to make it manage the endpoint?
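
For reference, roughly the shape in question: the selector-less default Service whose endpoints the API server manages out of band (IPs and ports vary per cluster).

```yaml
# Approximate shape of the default `kubernetes` Service. Note the absent
# spec.selector, so no controller reconciles its endpoints from pods.
apiVersion: v1
kind: Service
metadata:
  name: kubernetes
  namespace: default
spec:
  clusterIP: 10.96.0.1     # cluster-specific
  ports:
    - name: https
      port: 443
      protocol: TCP
      targetPort: 6443     # typically the API server's listen port
  type: ClusterIP
```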