Communication diagram

From browser to pod and back

Watch a request travel through every layer of the stack in real time — from your browser to the Open Telekom Cloud edge, into the ELB, across a cluster node, and finally onto one of three nginx pods. The animation runs live; nothing is pre-rendered.

1. Live data plane — three patterns at once

Three different ways requests reach pods on this cluster, all running in parallel right now. Each lane is one ELB on Swiss OTC; the coloured dots are continuously-animated request packets travelling along each path. Hover over the boxes to see what they do.

[Animated lane diagram. All three lanes start at 🌐 Browser (curl / Chrome) → ☁️ OTC Edge (DNS + BGP), then:]

  · GATEWAY API · L7 routing: 🛡️ ELB 1 + Envoy · gateway.wolfslight.cc · L7 · HTTPRoute → 📦 ui / Kong static (HTTPRoute: /), 📦 echo (v1) · JSON (HTTPRoute: /demo, /features/*), 📦 echo-v2 · JSON (canary / header / mirror)
  · DIRECT LOADBALANCER · 1:1: 🛡️ ELB 2 · L4 · lb.wolfslight.cc → 📦 hello pod (nginx) · Service: lb-demo/hello · no L7 routing
  · SHARED LOADBALANCER · 2 listeners: 🛡️ ELB 3 · whoami.wolfslight.cc · listener :80 + listener :8080 → 📦 whoami · 3 replicas, 📦 fortune · 2 replicas
Gateway-API path · 1 ELB, L7 routing to many backends
Direct LoadBalancer · 1 ELB → 1 backend pod
Shared LoadBalancer · 1 ELB with 2 listeners → 2 apps
Live numbers — Gateway: 138.124.232.181 ·  Direct: 138.124.232.123 ·  Shared: 138.124.232.13 (:80 + :8080)

2. Step by step, request then response

What actually happens at each box, in milliseconds. Numbered steps trace the request forward; the response then retraces the same hops in reverse.

browser · You hit the URL

curl http://whoami.wolfslight.cc/. Browser asks Cloudflare DNS for the A-record → 138.124.232.13, opens a TCP socket to port 80, writes GET / HTTP/1.1 with Host: whoami.wolfslight.cc.
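Concretely, the bytes curl puts on that socket can be sketched like this (the User-Agent value is illustrative):

```python
# Minimal sketch of the HTTP/1.1 request written after DNS resolves
# whoami.wolfslight.cc to 138.124.232.13 and the TCP socket is open.
def build_request(host: str, path: str = "/") -> bytes:
    lines = [
        f"GET {path} HTTP/1.1",
        f"Host: {host}",           # tells the backend which vhost is wanted
        "User-Agent: curl/8.5.0",  # illustrative version string
        "Accept: */*",
        "",                        # blank line terminates the header block
        "",
    ]
    return "\r\n".join(lines).encode("ascii")

req = build_request("whoami.wolfslight.cc")
```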

internet · OTC anycast edge accepts the connection

The EIP is BGP-advertised from Open Telekom Cloud's Swiss-region edge routers. Your packet enters the OTC network and is routed to the ELB instance bound to this EIP.

elb · L4 LoadBalancer picks a backend

The ELB operates at OSI layer 4 (TCP). It sees a healthy backend pool of 3 cluster nodes (NodePort :32666), picks one using simple least-connections, and forwards the connection.
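A least-connections pick over that pool can be sketched in a few lines (node addresses and connection counts are made up for illustration):

```python
# Hedged sketch of a least-connections backend pick: the ELB tracks
# active connections per node and forwards to the least-loaded one.
def pick_backend(pool: dict[str, int]) -> str:
    """Return the backend address with the fewest active connections."""
    return min(pool, key=pool.get)

# Illustrative pool: three cluster nodes on NodePort :32666.
pool = {"10.0.0.11:32666": 4, "10.0.0.12:32666": 1, "10.0.0.13:32666": 7}
chosen = pick_backend(pool)
```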

node · NodePort + kube-proxy DNAT to a pod

The connection arrives on the node's NodePort. iptables rules installed by kube-proxy DNAT it to one of the 3 pod IPs at random — that's the in-cluster load-balancing step that runs after the ELB.
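The DNAT step amounts to rewriting the destination address to a randomly chosen endpoint; a minimal sketch, with illustrative pod IPs:

```python
import random

# Sketch of the kube-proxy iptables step: a connection arriving on the
# NodePort is DNAT'ed to one of the Service's endpoint pod IPs, chosen
# at random. Pod IPs and ports are illustrative.
POD_IPS = ["172.16.1.21", "172.16.2.34", "172.16.3.8"]

def dnat(dest: tuple[str, int], rng: random.Random) -> tuple[str, int]:
    """Rewrite (node_ip, node_port) to a random (pod_ip, 80) endpoint."""
    return (rng.choice(POD_IPS), 80)

new_dest = dnat(("10.0.0.12", 32666), random.Random(0))
```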

pod · nginx serves the HTML

The pod's nginx reads the prepared /usr/share/nginx/html/index.html — already templated at startup with this pod's hostname and HSL hue — and writes 200 OK back on the same socket.
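What "templated at startup" means can be sketched as a simple substitution step; the template text and hue formula here are illustrative, not the pod's actual ones:

```python
# Sketch of an init step that bakes the pod's hostname and a derived
# HSL hue into the static page nginx will serve from then on.
TEMPLATE = (
    "<body style='background:hsl({hue},70%,60%)'>"
    "<h1>Served by {hostname}</h1></body>"
)

def render(hostname: str) -> str:
    hue = sum(hostname.encode()) % 360  # stable per-hostname hue (illustrative)
    return TEMPLATE.format(hostname=hostname, hue=hue)

# Illustrative pod name; the real one comes from the pod's hostname.
page = render("whoami-7d9c-abcde")
```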

pod · Response leaves the pod

nginx writes the body. Linux's network stack forwards it via the pod's veth pair through the CNI bridge back to the node.

node · SNAT reverses the addresses

kube-proxy's conntrack entries reverse the DNAT. The source IP appears as the node's, then as the ELB's, then as the EIP — depending on where you inspect it.
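The reversal is essentially a conntrack lookup in the other direction; a sketch with illustrative addresses:

```python
# Sketch of conntrack un-DNAT on the return path: the reply's source
# address is rewritten back to the externally visible address the
# client originally dialled. All addresses are illustrative.
conntrack = {
    # (pod_ip, pod_port) -> (eip, service_port) the client connected to
    ("172.16.2.34", 80): ("138.124.232.13", 80),
}

def snat_reply(src: tuple[str, int]) -> tuple[str, int]:
    """Rewrite a reply's source back to what the client expects."""
    return conntrack.get(src, src)

visible_src = snat_reply(("172.16.2.34", 80))
```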

elb · ELB writes back on the same TCP connection

Layer-4 means the ELB doesn't terminate TLS or rewrite HTTP — it just streams bytes back on the established TCP connection.

browser · Browser renders the page

~30–80 ms after the click, depending on RTT. The page runs a tiny bit of JavaScript that derives an HSL hue from the pod hostname, so reloads visually cycle through the replicas.
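The hue trick can be sketched in Python (the page's actual JS isn't shown here; CRC32 stands in for whatever hash it uses):

```python
import zlib

# Illustrative stand-in for the page's hue-picking JS: hash the pod
# hostname to a stable hue in [0, 360), so each replica keeps its own
# colour across reloads.
def hue_for(hostname: str) -> int:
    return zlib.crc32(hostname.encode()) % 360

a = hue_for("whoami-7d9c-aaaaa")  # same hostname -> same hue, always
b = hue_for("whoami-7d9c-aaaaa")
```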

3. Two paths through the cluster

Both end up calling the same OTC ELB API. They differ in who creates the Service that triggers it, and in what runs between the ELB and your pod.

Direct LoadBalancer Service

You write the Service. 4 hops to the pod.

  1. Browser → OTC Edge
  2. OTC Edge → ELB k8s-whoami
  3. ELB → Node NodePort
  4. Node → Pod (kube-proxy DNAT)

No L7 routing. No path/header matching. One ELB per Service.
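The Service you'd write for this path can be sketched as a Python dict of the minimal fields (the real object is applied as YAML; the name and namespace match the lb-demo/hello Service from the diagram, the selector label is illustrative):

```python
# Minimal type=LoadBalancer Service for the Direct path. The
# type: LoadBalancer line is what triggers the CCM to provision an ELB.
service = {
    "apiVersion": "v1",
    "kind": "Service",
    "metadata": {"name": "hello", "namespace": "lb-demo"},
    "spec": {
        "type": "LoadBalancer",
        "selector": {"app": "hello"},  # illustrative pod label
        "ports": [{"port": 80, "targetPort": 80}],
    },
}
```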

→ http://lb.wolfslight.cc/

Gateway API path

Envoy Gateway writes the Service for you. 5 hops to the pod.

  1. Browser → OTC Edge
  2. OTC Edge → ELB (auto-created by Envoy Gateway)
  3. ELB → Node NodePort
  4. Node → Envoy proxy pod
  5. Envoy → matched backend pod (HTTPRoute logic)

L7 routing: path, host, header, method, traffic split, mirroring, URLRewrite, RequestMirror, cross-ns ReferenceGrant. Plus GRPCRoute (gRPC), TCPRoute (raw TCP :6379), TLS termination at Envoy with 4 per-hostname Let's Encrypt certs, HTTP→HTTPS 301 redirect, Coraza WAF, rate-limit, CORS, Basic Auth, security headers. One ELB serves all of it.
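One HTTPRoute covering the path-match and traffic-split features could look like this, again sketched as a Python dict (the 90/10 weights and route name are illustrative; the Gateway eg and the echo backends are named elsewhere on this page):

```python
# Sketch of an HTTPRoute: match /demo by path prefix and split traffic
# between the echo and echo-v2 backends by weight.
httproute = {
    "apiVersion": "gateway.networking.k8s.io/v1",
    "kind": "HTTPRoute",
    "metadata": {"name": "echo-split"},  # illustrative name
    "spec": {
        "parentRefs": [{"name": "eg"}],  # attach to Gateway "eg"
        "rules": [{
            "matches": [{"path": {"type": "PathPrefix", "value": "/demo"}}],
            "backendRefs": [
                {"name": "echo", "port": 80, "weight": 90},
                {"name": "echo-v2", "port": 80, "weight": 10},
            ],
        }],
    },
}
weights = [b["weight"] for b in httproute["spec"]["rules"][0]["backendRefs"]]
```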

→ https://gateway.wolfslight.cc/

4. Three live ELBs on this cluster, right now

Every box below is a real, OTC-provisioned ELB with its own EIP. Each entered the cluster as a single kubectl apply — the CCM did the rest.

Gateway API path

One Envoy data plane behind one ELB. Six listeners on Gateway eg: HTTP :80 (HTTPS-redirect only), 4× HTTPS :443 (SNI: gateway, v1.features, v2.features, grpc), 1× TCP :6379 (Redis). Eight Routes total — 6× HTTPRoute + 1× GRPCRoute + 1× TCPRoute.

gateway.wolfslight.cc ↗

Direct LoadBalancer Demo

One Service: type=LoadBalancer in front of 2 nginx replicas. Static landing page, no L7 routing.

lb.wolfslight.cc ↗

whoami (3 replicas, colour-coded)

Same Service pattern as Direct LB, but with 3 replicas — each picks a unique HSL hue from its hostname. Reload to (probably) meet a different pod.

whoami.wolfslight.cc ↗