1. Live data plane — three patterns at once
Three different ways requests reach pods on this cluster, all running in parallel right now. Each lane is one ELB in Open Telekom Cloud's (OTC) Swiss region; the coloured dots are continuously animated request packets travelling along each path. Hover over the boxes to see what they do.
2. Step by step, request then response
What actually happens at each box, in milliseconds. Numbers point forward; ↩ points back.
browser · You hit the URL
curl http://whoami.wolfslight.cc/. The browser (or curl) asks Cloudflare DNS for the A-record → 138.124.232.13, opens a TCP socket to port 80, and writes GET / HTTP/1.1 with Host: whoami.wolfslight.cc.
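That first hop can be sketched in a few lines of Python. The hostname and EIP come from the text above; the actual socket calls are left commented so the sketch runs offline, and nothing here is the demo's real client code:

```python
# Sketch of the client's first hop: resolve the name, connect over TCP,
# write a raw HTTP/1.1 request. Only the hostname/EIP come from the text.
HOST = "whoami.wolfslight.cc"

def build_request(host: str, path: str = "/") -> bytes:
    """Compose the raw HTTP/1.1 request bytes the browser/curl writes."""
    return (
        f"GET {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        "Connection: close\r\n"
        "\r\n"
    ).encode()

request = build_request(HOST)

# To actually send it (needs network access):
#   import socket
#   ip = socket.gethostbyname(HOST)         # DNS A-record lookup -> 138.124.232.13
#   s = socket.create_connection((ip, 80))  # TCP handshake to port 80
#   s.sendall(request)
```

The Host header is what matters later: the L4 ELB never reads it, but Envoy's HTTPRoutes match on it.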
internet · OTC anycast edge accepts the connection
The EIP is BGP-advertised from Open Telekom Cloud's Swiss-region edge routers. Your packet enters the OTC network and is routed to the ELB instance bound to this EIP.
elb · L4 LoadBalancer picks a backend
The ELB operates at OSI layer 4 (TCP). It sees a healthy backend pool of 3 cluster nodes (NodePort :32666), picks one via simple least-connections, and forwards the connection.
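A toy sketch of that least-connections choice. The node names and connection counts are invented; the real ELB tracks this state internally:

```python
# Least-connections over the healthy NodePort backend pool: forward each
# new TCP connection to whichever node has the fewest active ones.
# Names and counts are illustrative, not read from the real ELB.
backends = {"node-1:32666": 4, "node-2:32666": 1, "node-3:32666": 7}

def pick_backend(pool: dict) -> str:
    """Return the backend with the fewest active connections."""
    return min(pool, key=pool.get)

chosen = pick_backend(backends)   # -> "node-2:32666"
backends[chosen] += 1             # the forwarded connection now counts against it
```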
node · NodePort + kube-proxy DNAT to a pod
The connection arrives on the node's NodePort. iptables rules installed by kube-proxy DNAT it to one of the 3 pod IPs at random — that's the in-cluster load-balancing step that runs after the ELB.
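The "at random" part is worth unpacking: in iptables mode, kube-proxy installs a cascade of rules using the statistic module with probabilities 1/N, 1/(N-1), …, 1, which works out to an even split. A small sketch of that arithmetic:

```python
# How kube-proxy's iptables rule cascade yields an even split: rule i fires
# with probability 1/(N - i) on the traffic that fell through rules 0..i-1.
from fractions import Fraction

def cascade_probabilities(n_pods: int) -> list:
    overall, remaining = [], Fraction(1)
    for i in range(n_pods):
        rule_p = Fraction(1, n_pods - i)    # probability set on rule i
        overall.append(remaining * rule_p)  # overall chance rule i fires
        remaining *= 1 - rule_p             # traffic falling through to rule i+1
    return overall

print(cascade_probabilities(3))  # three pods -> each gets exactly 1/3
```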
pod · nginx serves the HTML
The pod's nginx reads the prepared /usr/share/nginx/html/index.html — already templated at startup with this pod's hostname and HSL hue — and writes 200 OK back on the same socket.
pod · Response leaves the pod
nginx writes the body. Linux's network stack forwards it via the pod's veth pair through the CNI bridge back to the node.
node · SNAT reverses the addresses
Conntrack entries created when kube-proxy's DNAT rules matched now reverse the translation for reply packets. The source IP appears as the node's, then as the ELB's, then as the EIP — depending on where you inspect it.
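A toy model of that reverse lookup. The client and pod addresses are placeholders; only the EIP comes from the text:

```python
# Conntrack stores the original and translated flow tuples; a reply packet
# is matched against the translated tuple and rewritten back, so the
# client keeps seeing the EIP it connected to.
CLIENT = ("203.0.113.7", 54321)   # placeholder client addr
EIP    = ("138.124.232.13", 80)   # the EIP from the text
POD    = ("10.0.1.17", 8080)      # placeholder pod addr

conntrack = {(CLIENT, EIP): (CLIENT, POD)}  # original flow -> translated flow

def unnat_reply(src, dst):
    """Rewrite a reply's source back to what the client originally dialled."""
    for (orig_src, orig_dst), (xlat_src, xlat_dst) in conntrack.items():
        if (dst, src) == (xlat_src, xlat_dst):  # reply travels pod -> client
            return orig_dst, orig_src           # source appears as the EIP again
    return src, dst

print(unnat_reply(POD, CLIENT))  # -> (EIP, CLIENT)
```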
elb · ELB writes back on the same TCP connection
Layer-4 means the ELB doesn't terminate TLS or rewrite HTTP — it just streams bytes back on the established TCP connection.
browser · Browser renders the page
~30–80 ms after the click, depending on RTT. The page runs a tiny bit of JavaScript that derives an HSL hue from the pod's hostname, so reloads visually cycle through replicas.
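The hue trick can be sketched like this. The demo's actual hash isn't shown in the text, so a plain byte checksum stands in for it as an assumption:

```python
# Derive a stable HSL hue from a pod hostname so each replica renders in
# its own colour. The checksum is an assumption; the demo's real JS may
# hash differently, but any deterministic hostname -> [0, 360) map works.
def hue_for(hostname: str) -> int:
    return sum(hostname.encode()) % 360

for pod in ("whoami-7d4b9c-aaaaa", "whoami-7d4b9c-bbbbb"):  # invented names
    print(pod, "->", f"hsl({hue_for(pod)}, 70%, 50%)")
```

Because the hue is a pure function of the hostname, the colour is stable across reloads of the same pod but differs between replicas.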
3. Two paths through the cluster
Both end up calling the same OTC ELB API. They differ in who creates the Service that triggers it, and in what runs between the ELB and your pod.
Direct LoadBalancer Service
You write the Service. 4 hops to the pod.
- Browser → OTC Edge
- OTC Edge → ELB
- ELB → Node NodePort
- Node → Pod (kube-proxy DNAT)
No L7 routing. No path/header matching. One ELB per Service.
→ http://lb.wolfslight.cc/
Gateway API path
Envoy Gateway writes the Service for you. 5 hops to the pod.
- Browser → OTC Edge
- OTC Edge → ELB (auto-created by Envoy Gateway)
- ELB → Node NodePort
- Node → Envoy proxy pod
- Envoy → matched backend pod (HTTPRoute logic)
L7 routing: path, host, header and method matching, traffic splits, request mirroring (RequestMirror), URLRewrite, and cross-namespace ReferenceGrant. Plus GRPCRoute (gRPC), TCPRoute (raw TCP :6379), TLS termination at Envoy with 4 per-hostname Let's Encrypt certs, an HTTP→HTTPS 301 redirect, Coraza WAF, rate limiting, CORS, Basic Auth, and security headers. One ELB serves all of it.
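The core of that path matching can be sketched in a few lines. The routes and backend names below are invented for illustration, not the cluster's real HTTPRoutes:

```python
# Sketch of HTTPRoute-style PathPrefix matching: among the rules that
# match, the most specific (longest) prefix wins, and prefixes match on
# whole path segments, so "/api" does not match "/apix".
routes = {"/": "landing-svc", "/api": "api-svc", "/api/v2": "api-v2-svc"}

def match_backend(path: str) -> str:
    hits = [p for p in routes
            if p == "/" or path == p or path.startswith(p + "/")]
    return routes[max(hits, key=len)]  # longest matching prefix wins

print(match_backend("/api/v2/users"))  # -> api-v2-svc
print(match_backend("/apix"))          # -> landing-svc (no partial-segment match)
```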
→ https://gateway.wolfslight.cc/
4. Three live ELBs on this cluster, right now
Every box below is a real, OTC-provisioned ELB with its own EIP.
Each entered the cluster as a single kubectl apply — the cloud controller manager (CCM) did the rest.
Gateway API path
One Envoy data plane behind one ELB. Six listeners on the Gateway named eg: HTTP :80 (HTTPS-redirect only), 4× HTTPS :443 (SNI: gateway, v1.features, v2.features, grpc), and 1× TCP :6379 (Redis). Eight Routes total — 6× HTTPRoute + 1× GRPCRoute + 1× TCPRoute.
Direct LoadBalancer Demo
One type=LoadBalancer Service in front of 2 nginx replicas. Static landing page, no L7 routing.
whoami (3 replicas, colour-coded)
Same Service pattern as Direct LB, but with 3 replicas — each picks a unique HSL hue from its hostname. Reload to (probably) meet a different pod.
whoami.wolfslight.cc ↗