Migrate ArgoCD to Cilium Ingress
Summary:
An easy way to migrate ArgoCD running on an RKE2 cluster from Ingress NGINX to Cilium Ingress.
Summary:
This post picks up where the Flux Operator blog left off, diving deeper to demonstrate the power of Sveltos and Flux for Kubernetes add-on deployment and management. Join the club as we explore what next-level Kubernetes deployments and management look like in action!
Summary:
Learn how to install and use the Flux Operator in a Kubernetes environment and connect it to an on-prem GitLab instance.
Summary:
Learn how to integrate Cilium, Gateway API, cert-manager, Let's Encrypt, and Cloudflare to dynamically provision TLS certificates for ArgoCD.
Are you ready to simplify how your Platform team spins development environments up and down while improving DevX? In this post, we show how Cluster API (CAPI), Sveltos, and Cyclops work together to automatically create Kubernetes environments, letting developers easily interact with and manage their applications. It is not magic, it is the power of Sveltos combined with the right tooling!

How easy is it to handle Day-2 operations with existing CI/CD tooling? Sveltos performs not only Day-1 operations but also helps platform administrators, tenant administrators, and other operators with Day-2 operations. For example, we can use the HealthCheck and ClusterHealthCheck features not only to watch the health of a cluster but also to collect information from the managed clusters and display it in the management cluster.
In today's blog post, we will cover a way of deploying Cilium as our CNI alongside Cilium Tetragon for observability. We will then deploy a simple TracingPolicy to capture socket connections and use Sveltos to surface the tracing results back in the management cluster.
The goal of the demonstration is to showcase how Sveltos can be used for different Kubernetes cluster operations based on the use case at hand.
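A TracingPolicy of the kind described above can be sketched as follows. The kprobe on `tcp_connect` follows the upstream Tetragon socket-tracing example; exact field support may differ between Tetragon versions, so treat this as a starting point rather than a drop-in manifest:

```yaml
# Hedged sketch of a Tetragon TracingPolicy that observes TCP socket
# connections by hooking the kernel's tcp_connect function.
# Verify the CRD fields against your installed Tetragon version.
apiVersion: cilium.io/v1alpha1
kind: TracingPolicy
metadata:
  name: connect
spec:
  kprobes:
  - call: "tcp_connect"   # kernel function, not a syscall
    syscall: false
    args:
    - index: 0
      type: "sock"        # extract the socket argument for the event
```

Once applied, Tetragon emits an event per traced connection, which is the data Sveltos can then relay back to the management cluster.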

In previous posts, we outlined how Sveltos allows Platform and tenant administrators to streamline Kubernetes applications and add-on deployments to a fleet of clusters. In today's blog post, we will take a step further and demonstrate how easy it is to target and update a subset of resources targeted by multiple configurations. By multiple configurations, we refer to the Sveltos ClusterProfile or Profile Custom Resource Definitions (CRDs). The demonstration focuses on day-2 operations as we provide a way to update and/or remove resources without affecting production operations.
This functionality is called tiers. Sveltos tiers manage deployment priority when resources are targeted by multiple configurations: they fit into existing ClusterProfile/Profile definitions, set the deployment order, and make it easy to override behaviour.
Today, we will cover updating the Cilium CNI in a subset of clusters labelled tier:zone2 without affecting the monitoring capabilities defined in the same ClusterProfile/Profile.
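As a sketch of how a tier appears in a profile, the override ClusterProfile might look like the following. The label value matches the scenario above; the chart version is hypothetical, and field names should be checked against your Sveltos release:

```yaml
# Hedged sketch: a ClusterProfile that targets clusters labelled
# tier: zone2 and claims ownership of the Cilium chart via a lower
# (higher-priority) tier value.
apiVersion: config.projectsveltos.io/v1beta1
kind: ClusterProfile
metadata:
  name: cilium-upgrade
spec:
  clusterSelector:
    matchLabels:
      tier: zone2
  tier: 50                  # default tier is 100; lower values win conflicts
  continueOnConflict: true  # keep deploying the non-conflicting resources
  helmCharts:
  - repositoryURL: https://helm.cilium.io/
    repositoryName: cilium
    chartName: cilium/cilium
    chartVersion: 1.16.5    # hypothetical target version
    releaseName: cilium
    releaseNamespace: kube-system
    helmChartAction: Install
```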

In a previous post, we covered how to create an RKE2 cluster on Azure Cloud using the free cloud credits from the Rancher UI. While this is a convenient way to get started with Rancher, in today's post we will demonstrate how to use OpenTofu to automate the deployment.
OpenTofu is a fork of Terraform: an open-source, community-driven project managed by the Linux Foundation. If you want to get familiar with what OpenTofu is and how to get started, check out the link here.
We will also show how easy it is to customise the Cilium configuration. Plus, we will enable kube-vip for LoadBalancer services using HCL (HashiCorp Configuration Language).

Have you ever wondered how to dynamically instantiate Kubernetes resources before deploying them to a cluster? What if I told you there is an easy way to do it? Sveltos lets you define add-ons and applications using templates. Before deploying any resource down to the managed clusters, Sveltos instantiates the templates with information gathered from the management cluster.
In a previous post, we outlined a step-by-step approach to forming a Cilium cluster mesh between two clusters. In today's post, we will demonstrate how the Sveltos templating is used to deploy a Cilium cluster mesh dynamically in one go.
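To give a flavour of Sveltos templating, the sketch below shows a templated resource carried in a ConfigMap. The annotation and the template path are taken from the Sveltos templating convention as I understand it, and the Secret content is purely illustrative, so double-check both against the Sveltos documentation for your version:

```yaml
# Hedged sketch: a ConfigMap holding a templated Secret. The
# projectsveltos.io/template annotation tells Sveltos to instantiate
# the content with data read from the management cluster before
# deploying it to the managed clusters.
apiVersion: v1
kind: ConfigMap
metadata:
  name: clustermesh-config
  namespace: default
  annotations:
    projectsveltos.io/template: "ok"   # marks the content as a template
data:
  secret.yaml: |
    apiVersion: v1
    kind: Secret
    metadata:
      name: cilium-clustermesh
      namespace: kube-system
    stringData:
      # resolved from the management cluster at deploy time
      cluster-name: "{{ .Cluster.metadata.name }}"
```

A ClusterProfile then references this ConfigMap via policyRefs, and each matching cluster receives its own instantiated copy.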

For the last couple of days, I have been working on a new use case: installing RKE2 clusters powered by Cilium on Azure Cloud. The requirement at hand was to use a Rancher instance and start deploying RKE2 clusters from there. I found that the official Rancher documentation has outdated instructions for pre-configuring Azure Cloud.
In today's blog post, we will outline the steps to set up Azure free cloud credits for deploying RKE2 clusters with Cilium. Additionally, we will cover the limitations that come with the free-credit concept.

Working with on-prem RKE2 clusters, I ran into many issues forming a Cilium cluster mesh between clusters in an automated way.
In this post, I will walk through a step-by-step process to get a Cilium cluster mesh up and running. We will cover the problems I ran into along the way. The goal is to follow a GitOps-friendly approach, with no need for the Cilium CLI. We will use Helm and kubectl for the setup.
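For orientation, the Helm side of such a setup comes down to a handful of Cilium values per cluster. The values below follow the upstream Cilium chart, with illustrative names and IDs; check them against the chart version you deploy:

```yaml
# Hedged sketch of Cilium Helm values for cluster mesh without the
# Cilium CLI. cluster.name and cluster.id must be unique across the
# mesh; the clustermesh-apiserver is exposed via a LoadBalancer
# service so peer clusters can reach it.
cluster:
  name: cluster1    # illustrative; unique mesh-wide
  id: 1             # unique ID in the range 1-255
clustermesh:
  useAPIServer: true
  apiserver:
    service:
      type: LoadBalancer
```

The remaining work, wiring the CA and peer secrets between clusters, is what kubectl handles in the walkthrough.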
In today's blog post, we will demonstrate an easy way of deploying and controlling Cilium on an EKS cluster with Sveltos.
Most documentation shows a step-by-step installation using Helm commands, so we chose to show a different way: a GitOps approach using the Sveltos ClusterProfile CRD (Custom Resource Definition).
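The shape of such a ClusterProfile is roughly the following. The selector label, chart version, and Helm values are illustrative assumptions, not the post's exact manifest:

```yaml
# Hedged sketch: a ClusterProfile that deploys the Cilium Helm chart
# to every cluster matching the selector. Verify apiVersion and
# fields against your Sveltos release.
apiVersion: config.projectsveltos.io/v1beta1
kind: ClusterProfile
metadata:
  name: deploy-cilium
spec:
  clusterSelector:
    matchLabels:
      env: eks              # illustrative label on the EKS cluster
  helmCharts:
  - repositoryURL: https://helm.cilium.io/
    repositoryName: cilium
    chartName: cilium/cilium
    chartVersion: 1.16.5    # hypothetical version
    releaseName: cilium
    releaseNamespace: kube-system
    helmChartAction: Install
```

With this in place, changing the Cilium configuration becomes a Git commit to the profile rather than a manual Helm upgrade.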
