
Dual-Stack: Cilium Complementary Features

Eleni Grosdouli
DevOps Consulting Engineer at Cisco Systems

Introduction

Welcome to part 4 of the dual-stack series! In parts 1, 2, and 3, we walked through how to set up dual-stack networking on a Proxmox server using our Internet Provider. We also showed you how to deploy RKE2 Kubernetes clusters and share both IPv4 and IPv6 services across them. Now, in the final part of the series, we are diving into some of the most commonly used features of Cilium for a home lab setup! Let’s get started!


note

To use the latest Cilium features, I decided to update the RKE2 cluster to version v1.29.15+rke2r1 and Cilium version v1.17.1.

To upgrade RKE2 clusters, take a look here.

Lab Setup

Infrastructure

+----------------------------+-------------------------+
| Deployment | Version |
+----------------------------+-------------------------+
| Proxmox VE | 8.2.4 |
| RKE2 | v1.29.15+rke2r1 |
| Cilium | 1.17.1 |
+----------------------------+-------------------------+

GitHub Resources

The showcase repository is available here.

Prerequisites

Go through parts 1 and 2 of the series and ensure all prerequisites are met. With the preparation taken care of, you will have two dual-stack RKE2 clusters powered by Cilium ready to go.

Scenario

As mentioned, we would like to enhance the setup from the previous posts and introduce additional powerful Cilium features! What does that even mean? Was Cilium not good enough as both a CNI and a service mesh?

You guessed it right, there is more! I would like to use the Cilium Gateway API instead of the RKE2 Nginx Ingress Controller, as it gives me far more options around L7 filtering, routing, and security. Next, I would like to use the Cilium LB IPAM feature (enabled by default) to assign IP addresses to LoadBalancer services and, ultimately, advertise IPv4 and IPv6 addresses over my local network without needing a BGP (Border Gateway Protocol) capable device.

Ready to continue? Let's head into it.


Enable Features

We will update the existing Cilium configuration to include L2 Announcements and Gateway API support. The update is super easy: we extend the existing Cilium Helm values with the additional settings. I prefer to first extract the current Cilium config values into a file, include the new ones, and afterwards perform a Helm chart upgrade. The upgrade approach of a Helm chart is completely up to you! 😊

Cilium Configuration - L2 Announcements and Gateway API

Locate and update the Cilium Helm chart.

  1. Get a copy of the kubeconfig
  2. export KUBECONFIG=<directory of the kubeconfig>
  3. helm list -n kube-system
  4. helm get values rke2-cilium -n kube-system -o yaml > values_cluster01.yaml

Open, update, and save the values_cluster01.yaml file with the details below.

externalIPs:
  enabled: true
gatewayAPI:
  enabled: true
k8sClientRateLimit:
  # Important values when many services run on a Kubernetes cluster. Check out the documentation:
  # https://docs.cilium.io/en/v1.17/network/l2-announcements/#sizing-client-rate-limit
  burst: 40
  qps: 20
kubeProxyReplacement: true
l2announcements:
  enabled: true

The final step is to reapply the Helm chart with the changes.

$ helm upgrade rke2-cilium rke2-charts/rke2-cilium --version 1.17.1 --namespace kube-system -f values_cluster01.yaml

Gateway API Specific

Apart from the Gateway API prerequisites found here, we need to apply the CRDs (Custom Resource Definitions) to the Kubernetes cluster, using the commands below.

$ kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/gateway-api/v1.2.0/config/crd/standard/gateway.networking.k8s.io_gatewayclasses.yaml
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/gateway-api/v1.2.0/config/crd/standard/gateway.networking.k8s.io_gateways.yaml
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/gateway-api/v1.2.0/config/crd/standard/gateway.networking.k8s.io_httproutes.yaml
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/gateway-api/v1.2.0/config/crd/standard/gateway.networking.k8s.io_referencegrants.yaml
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/gateway-api/v1.2.0/config/crd/standard/gateway.networking.k8s.io_grpcroutes.yaml

Validation

The Cilium DaemonSet might need to be restarted for the newly added features to take effect.

$ kubectl rollout restart ds/cilium -n kube-system
$ kubectl get crd | grep -E "gateway|http"
$ kubectl exec -it ds/cilium -n kube-system -- cilium status --verbose | grep -i "l2-announcer"

Enable IPv4 and IPv6 Pools

As the LB IPAM capability is enabled by default, creating IPv4 and IPv6 CiliumLoadBalancerIPPool resources is all that is required. For this demonstration, I decided to create two separate pools, one for IPv4 and one for IPv6 services. The first pool will be used to expose ArgoCD via the Gateway API, while the second will serve an Nginx application.

tip

For advanced use cases, we can add a serviceSelector with matchLabels to a pool definition and specify which pool can be assigned to which service, as sketched below. To assign multiple IPs to a single service, use the lbipam.cilium.io/ips annotation. For more information, check out the official documentation.
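For illustration, a pool restricted to services carrying a specific label could look like the sketch below; the pool name, label, and address range are hypothetical and not part of this setup.

---
apiVersion: "cilium.io/v2alpha1"
kind: CiliumLoadBalancerIPPool
metadata:
  name: "ipv4-pool-blue"      # hypothetical pool name
spec:
  blocks:
    - start: "10.10.20.30"    # hypothetical range, outside the pools used below
      stop: "10.10.20.40"
  serviceSelector:            # only services matching these labels get IPs from this pool
    matchLabels:
      color: blue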

IPv4 Pool

---
apiVersion: "cilium.io/v2alpha1"
kind: CiliumLoadBalancerIPPool
metadata:
  name: "ipv4-pool"
spec:
  blocks:
    - start: "10.10.20.10"
      stop: "10.10.20.20"

$ kubectl apply -f ipv4_pool.yaml
$ kubectl get ippools
NAME        DISABLED   CONFLICTING   IPS AVAILABLE   AGE
ipv4-pool   false      False         9               7d3h

IPv6 Pool

---
apiVersion: "cilium.io/v2alpha1"
kind: CiliumLoadBalancerIPPool
metadata:
  name: "ipv6-pool"
spec:
  blocks:
    - cidr: "2004::0/64"

$ kubectl apply -f ipv6_pool.yaml
$ kubectl get ippools
NAME        DISABLED   CONFLICTING   IPS AVAILABLE          AGE
ipv6-pool   false      False         18446744073709551614   7d

The huge number of available IPs is expected: a /64 subnet provides 2^64 - 2 assignable addresses.

CiliumL2AnnouncementPolicy

To announce the IPs to the local network, we need to create a CiliumL2AnnouncementPolicy resource. The IPs will be announced from the node network interface named eth0. If the interface name in your setup is different, modify the file as needed.

---
apiVersion: cilium.io/v2alpha1
kind: CiliumL2AnnouncementPolicy
metadata:
  name: l2-announcement-policy
  namespace: kube-system
spec:
  interfaces:
    - eth0
  externalIPs: true
  loadBalancerIPs: true

$ kubectl apply -f l2_announ.yaml
$ kubectl get CiliumL2AnnouncementPolicy
NAME                     AGE
l2-announcement-policy   7d
note

For this setup, we are mostly interested in LoadBalancer IPs rather than external IPs; however, both options are enabled.
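If only some nodes should announce the IPs (for example, excluding control-plane nodes), the policy also accepts a nodeSelector. A minimal sketch, assuming the standard control-plane node label:

---
apiVersion: cilium.io/v2alpha1
kind: CiliumL2AnnouncementPolicy
metadata:
  name: l2-announcement-policy-workers   # hypothetical name
spec:
  nodeSelector:
    matchExpressions:
      - key: node-role.kubernetes.io/control-plane
        operator: DoesNotExist           # announce only from worker nodes
  interfaces:
    - eth0
  externalIPs: true
  loadBalancerIPs: true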

Test L2 Announcements

IPv4

Let's deploy an Nginx application and expose it via a LoadBalancer service. Simply apply the YAML below.

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: html-message
data:
  index.html: |
    <!DOCTYPE html>
    <html>
    <head>
    <title>Welcome</title>
    </head>
    <body>
    <h1>Hello from IPv4!</h1>
    </body>
    </html>
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ipv4
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-ipv4
  template:
    metadata:
      labels:
        app: nginx-ipv4
    spec:
      containers:
        - name: nginx-ipv4
          image: nginx:latest
          ports:
            - containerPort: 80
          volumeMounts:
            - name: html-volume
              mountPath: /usr/share/nginx/html/index.html
              subPath: index.html
      volumes:
        - name: html-volume
          configMap:
            name: html-message
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service-ipv4
spec:
  type: LoadBalancer
  selector:
    app: nginx-ipv4
  ports:
    - port: 80
      targetPort: 80
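Assuming the manifest was saved as nginx_ipv4.yaml (the file name is up to you):

$ kubectl apply -f nginx_ipv4.yaml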

Validation

$ kubectl get pods,svc | grep -i nginx
pod/nginx-ipv4-54b647476f-jczlq   1/1   Running   0   111s
pod/nginx-ipv4-54b647476f-mkmsb   1/1   Running   0   111s
service/nginx-service-ipv4   LoadBalancer   10.45.48.226   10.10.20.12   80:32112/TCP   2m20s

From another device in the same local network, try to curl the Nginx LoadBalancer service.

$ ip ad | grep -i ens20
4: ens20: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
inet 10.10.20.101/24 brd 10.10.20.255 scope global dynamic noprefixroute ens20

$ curl http://10.10.20.12
<!DOCTYPE html>
<html>
<head>
<title>Welcome</title>
</head>
<body>
<h1>Hello from IPv4!</h1>
</body>
</html>

IPv6

Reuse the YAML above, update the names, and ensure the service gets an IPv6 LoadBalancer address from the pool.

---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: LoadBalancer
  selector:
    app: nginx
  ipFamilies:
    - IPv6
  ipFamilyPolicy: SingleStack
  ports:
    - port: 80
      targetPort: 80
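Assuming the manifest was saved as nginx_ipv6.yaml:

$ kubectl apply -f nginx_ipv6.yaml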

Validation

$ kubectl get pods,svc | grep -i nginx
pod/nginx-deployment-6b4bdcdb4c-l42v9   1/1   Running   3 (110m ago)   6d5h
pod/nginx-deployment-6b4bdcdb4c-z4v47   1/1   Running   3 (110m ago)   6d5h
service/nginx-service   LoadBalancer   2001:face:43::ce0e   2004::1   80:31389/TCP   6d5h

To access the service from another device on the local network, we need to add an IPv6 route for the 2004::/64 subnet pointing to the IPv6 gateway. Effectively, we are telling the device which path to take to reach the destination. Once the route is in place, we can curl the Nginx LoadBalancer service.

$ ip -6 route add 2004::/64 via 2001:9e8:xxxx:xxxx:: dev ens20 # 2001:9e8:xxxx:xxxx:: is the IPv6 Gateway
$ ip -6 route show | grep -i 2004
2004::/64 via 2001:9e8:xxxx:xxxx:: dev ens20 metric 1024 pref medium

$ curl http://[2004::1]
<!DOCTYPE html>
<html>
<head>
<title>Welcome</title>
</head>
<body>
<h1>Hello from IPv6!</h1>
</body>
</html>
note

For L2 Announcements to work, the selected nodes must answer ARP (IPv4) and NDP (IPv6) requests for the service IPs. You can use Hubble or Wireshark to capture traffic for a specific protocol.
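As an alternative, you could watch the address resolution traffic on the client with tcpdump; ens20 is the client interface from the earlier examples:

$ sudo tcpdump -ni ens20 arp                         # IPv4: ARP requests/replies for the service IP
$ sudo tcpdump -ni ens20 'icmp6 and ip6[40] == 135'  # IPv6: NDP neighbor solicitations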

Cilium, Gateway API and ArgoCD

This deployment shows how the LB IPAM setup and the Cilium Gateway API work together for the ArgoCD installation.

Install ArgoCD

$ kubectl create namespace argocd
$ kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml

ArgoCD TLS Secret

Use OpenSSL or a utility of your preference to create a valid TLS certificate for the ArgoCD deployment. Once done, create a Kubernetes TLS Secret.
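If you do not have a certificate at hand, a self-signed one is fine for a home lab. A minimal sketch with OpenSSL (the -addext flag requires OpenSSL 1.1.1 or newer), producing the file names used below:

$ openssl req -x509 -newkey rsa:4096 -sha256 -days 365 -nodes \
    -keyout argocd-key.pem -out argocd.example.com.pem \
    -subj "/CN=argocd.example.com" \
    -addext "subjectAltName=DNS:argocd.example.com"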

$ kubectl create secret tls argocd-server-tls -n argocd --key=argocd-key.pem --cert=argocd.example.com.pem

Gateway Resource

---
apiVersion: gateway.networking.k8s.io/v1beta1
kind: Gateway
metadata:
  name: argocd
  namespace: argocd
spec:
  gatewayClassName: cilium
  listeners:
    - hostname: argocd.example.com
      name: argocd-example-com-http
      port: 80
      protocol: HTTP
    - hostname: argocd.example.com
      name: argocd-example-com-https
      port: 443
      protocol: HTTPS
      tls:
        certificateRefs:
          - kind: Secret
            name: argocd-server-tls

HTTPRoute Resource

---
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: argocd
  namespace: argocd
spec:
  hostnames:
    - argocd.example.com
  parentRefs:
    - name: argocd
  rules:
    - backendRefs:
        - name: argocd-server
          port: 80
      matches:
        - path:
            type: PathPrefix
            value: /
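Apply both resources; the file names below are just an example:

$ kubectl apply -f argocd_gateway.yaml -f argocd_httproute.yaml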

Validation

$ kubectl get gateway,httproute -n argocd
NAME                                       CLASS    ADDRESS       PROGRAMMED   AGE
gateway.gateway.networking.k8s.io/argocd   cilium   10.10.20.11   True         7d2h

NAME                                         HOSTNAMES                AGE
httproute.gateway.networking.k8s.io/argocd   ["argocd.example.com"]   7d2h

Now that the resources are deployed, we can access ArgoCD via the hostname argocd.example.com. To test the setup, SSH to another machine in your local network and add an entry to the /etc/hosts file mapping the hostname to the IP address 10.10.20.11. You are ready to curl!
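One quick way to add the entry:

$ echo "10.10.20.11 argocd.example.com" | sudo tee -a /etc/hosts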

$ curl -ki https://argocd.example.com
HTTP/1.1 200 OK
accept-ranges: bytes
content-length: 788
content-security-policy: frame-ancestors 'self';
content-type: text/html; charset=utf-8
vary: Accept-Encoding
x-frame-options: sameorigin
x-xss-protection: 1
date: Mon, 12 May 2025 11:02:14 GMT
x-envoy-upstream-service-time: 0
server: envoy

<!doctype html><html lang="en"><head><meta charset="UTF-8"><title>Argo CD</title><base href="/"><meta name="viewport" content="width=device-width,initial-scale=1"><link rel="icon" type="image/png" href="assets/favicon/favicon-32x32.png" sizes="32x32"/><link rel="icon" type="image/png" href="assets/favicon/favicon-16x16.png" sizes="16x16"/><link href="assets/fonts.css" rel="stylesheet"><script defer="defer" src="main.67d3d35d60308e91d5f4.js"></script></head><body><noscript><p>Your browser does not support JavaScript. Please enable JavaScript to view the site. Alternatively, Argo CD can be used with the <a href="https://argoproj.github.io/argo-cd/cli_installation/">Argo CD CLI</a>.</p></noscript><div id="app"></div></body><script defer="defer" src="extensions.js"></script></html>
note

If a 307 Temporary Redirect HTTP status code is returned, modify the argocd-cmd-params-cm ConfigMap in the argocd namespace and set server.insecure: "true" under the data field. For more information, have a look here.
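As a sketch, the ConfigMap change can be applied with kubectl patch; the argocd-server deployment needs a restart to pick it up:

$ kubectl patch configmap argocd-cmd-params-cm -n argocd --type merge -p '{"data":{"server.insecure":"true"}}'
$ kubectl rollout restart deployment argocd-server -n argocd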

Conclusion

In today's post, we covered some of the most commonly used Cilium features suitable for a home lab setup! For advanced use cases, I encourage you to have a look at the official Cilium documentation!


✉️ Contact

If you have any questions, feel free to get in touch! You can use the Discussions option found here or reach out to me on any of the social media platforms provided. 😊 We look forward to hearing from you!