Migrate ArgoCD to Cilium Ingress
Summary:
An easy way to migrate ArgoCD running on an RKE2 cluster from Ingress NGINX to Cilium Ingress.
Rancher Kubernetes Engine 2 (RKE2) is a fully conformant Kubernetes distribution focused on security and compliance, ideal for edge and government environments.
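
To give a rough idea of where the migration ends up, a minimal sketch of an Ingress for the ArgoCD server using Cilium's ingress class could look like the following. The `argocd-server` service name, namespace, hostname, and backend port are assumptions that depend on how ArgoCD was installed; the post covers the actual migration steps and TLS handling.

```yaml
# Hypothetical Ingress for the ArgoCD server, switched from the
# "nginx" ingress class to Cilium's built-in ingress controller.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: argocd-server
  namespace: argocd
spec:
  ingressClassName: cilium        # previously: nginx
  rules:
    - host: argocd.example.com    # placeholder hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: argocd-server  # assumes a default ArgoCD install
                port:
                  number: 80         # assumes argocd-server runs with --insecure behind the ingress
```
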
Summary:
The Sveltos v1.0.0 release introduced a way to manage Kubernetes clusters in restricted networks, environments behind a firewall, or edge locations. Follow along to understand how the Sveltos Pull Mode works and how it can be deployed.
Welcome to part 4 of the dual-stack series! In parts 1, 2, and 3, we walked through how to set up dual-stack networking on a Proxmox server using our Internet provider. We also showed you how to deploy RKE2 Kubernetes clusters and share both IPv4 and IPv6 services across them. Now, in the final part of the series, we are diving into some of the most commonly used features of Cilium for a home lab setup! Let’s get started!
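
As an example of the kind of feature that tends to show up in a home lab, Cilium's LoadBalancer IP address management (LB-IPAM) can hand out addresses to Services of type LoadBalancer. The sketch below is illustrative only; the pool name and CIDRs are placeholders.

```yaml
# Hypothetical LB-IPAM pool handing out IPv4 and IPv6 addresses
# to LoadBalancer Services (all CIDRs are placeholders).
apiVersion: cilium.io/v2alpha1
kind: CiliumLoadBalancerIPPool
metadata:
  name: homelab-pool
spec:
  blocks:                          # older Cilium releases call this field "cidrs"
    - cidr: 192.168.10.240/28
    - cidr: fd00:cafe:10::/112
```
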

Welcome to part 3 of the dual-stack series! In parts 1 and 2, we covered how to enable dual-stack on a Proxmox server using our Internet provider and how to deploy RKE2 clusters. In today's post, we continue our journey and enable a Cilium Cluster Mesh between two RKE2 clusters. The goal is to share IPv4 and IPv6 services between the different clusters effortlessly. Let’s dive in!
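
As a preview of what sharing services means in practice, with Cluster Mesh a Service can be marked as global so that endpoints from both clusters back it. A minimal sketch might look like this; the service name, namespace, and ports are placeholders, and the same Service must exist in both clusters.

```yaml
# Hypothetical Service marked as global so that Cluster Mesh
# load-balances across matching endpoints in both RKE2 clusters.
apiVersion: v1
kind: Service
metadata:
  name: echo
  namespace: demo
  annotations:
    service.cilium.io/global: "true"   # merge endpoints from all meshed clusters
spec:
  ipFamilyPolicy: PreferDualStack      # request both IPv4 and IPv6 ClusterIPs where available
  selector:
    app: echo
  ports:
    - port: 8080
      targetPort: 8080
```
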

Welcome to part 2 of the dual-stack series! In part 1, we covered how to enable IPv6 prefix allocation using pfSense on Proxmox with a Fritz!Box as the home router. That setup allows virtual machines on a dedicated interface to receive both an IPv4 and an IPv6 address. If you have completed part 1, you can continue with the dual-stack RKE2 setup powered by Cilium.
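
To illustrate the starting point for that setup, a dual-stack RKE2 server configuration with Cilium as the CNI could look roughly like the snippet below. The CIDRs are placeholders and need to match the prefix delegated in part 1; Cilium itself also needs IPv6 enabled, which the post covers in detail.

```yaml
# /etc/rancher/rke2/config.yaml (sketch) - dual-stack pod and service
# CIDRs plus Cilium as the CNI; all prefixes below are placeholders.
cni: cilium
cluster-cidr: "10.42.0.0/16,fd00:cafe:42::/56"
service-cidr: "10.43.0.0/16,fd00:cafe:43::/112"
```
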

Welcome to the first post of the brand new Kubernetes Troubleshooting Insights section! This series of blog posts will share helpful information and troubleshooting tips for issues that can appear in a Kubernetes environment. The posts focus on real-life scenarios from test, staging, or production environments.
In today’s blog post, we’ll explore an issue with the CoreDNS setup on RKE2 clusters. The cluster was running the Cilium CNI with Hubble enabled. Let’s jump right in!

In a previous post, we covered how to create an RKE2 cluster on Azure Cloud using the free Azure Cloud credits from the Rancher UI. While that is a convenient way to get started with Rancher, in today's post we will demonstrate how to use OpenTofu to automate the deployment.
OpenTofu is a fork of Terraform. It is an open-source, community-driven project managed by the Linux Foundation. If you want to get familiar with what OpenTofu is and how to get started, check out the link here.
We will also show how easy it is to customise the Cilium configuration. Plus, we will enable kube-vip for LoadBalancer services using HCL (HashiCorp Configuration Language).
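
To sketch what customising the Cilium configuration can look like on RKE2, the values below could be applied as an override of the bundled rke2-cilium chart (in the post, the equivalent ends up embedded in the HCL cluster definition). The specific values shown here are illustrative assumptions, not the post's exact configuration.

```yaml
# Hypothetical override of the bundled rke2-cilium chart values.
apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: rke2-cilium
  namespace: kube-system
spec:
  valuesContent: |-
    kubeProxyReplacement: true   # boolean on recent charts; older versions expect "strict"
    hubble:
      enabled: true
      relay:
        enabled: true
      ui:
        enabled: true
```
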

For the last couple of days, I have been working on a new use case: installing RKE2 clusters powered by Cilium on Azure Cloud. The requirement was to use a Rancher instance and deploy RKE2 clusters from there. I found that the official Rancher documentation has outdated instructions for pre-configuring Azure Cloud.
In today's blog post, we will outline the steps to set up the Azure Cloud free credits for deploying RKE2 clusters with Cilium. Additionally, we will cover the limitations that come with the free-credit concept.

Working with on-prem RKE2 clusters, I ran into several issues when forming a Cilium Cluster Mesh between clusters in an automated way.
In this post, I will walk through a step-by-step process to get a Cilium Cluster Mesh up and running and cover the problems I hit along the way. The goal is a GitOps-friendly approach with no need for the Cilium CLI; we will use only Helm and kubectl for the setup.
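
To give an idea of what the Helm-only approach involves, enabling the Cluster Mesh API server purely through chart values might look like the sketch below for one of the two clusters. The cluster names, IDs, port, and peer address are placeholders, and the certificate handling that the Cilium CLI would otherwise take care of is covered in the post.

```yaml
# Hypothetical Helm values for one cluster (cluster-a); the peer
# cluster uses its own name/ID and points back at cluster-a.
cluster:
  name: cluster-a
  id: 1
clustermesh:
  useAPIServer: true
  apiserver:
    service:
      type: LoadBalancer        # must be reachable from the peer cluster
  config:
    enabled: true
    clusters:
      - name: cluster-b
        port: 2379              # assumed default clustermesh-apiserver port
        ips:
          - 192.0.2.10          # placeholder address of cluster-b's clustermesh-apiserver
```
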