A Kubernetes object, an automatic LoadBalancer

One type: LoadBalancer Service → one public ELB

Drop an annotated Service into the cluster and watch Swiss OTC do the rest. Two patterns — autocreate for a brand-new ELB+EIP, or elb.id to attach another listener to an ELB that already exists. Both are running live on this cluster right now.

1. What happens when you kubectl apply

Seven steps. Five actors: you, the K8s api-server, the OTC cloud-controller (hws-cloudprovider), the OTC ELB API, and the OTC network fabric.

you · Apply the manifest

kubectl apply -f lb-demo.yaml — sends a Service object with type: LoadBalancer and a kubernetes.io/elb.autocreate annotation to the API server.

api-server · Stores the Service

etcd holds the new Service. .status.loadBalancer.ingress is empty — there's no public address yet.

ccm · Sees the Service event

The OTC cloud-controller (hws-cloudprovider) watches Services cluster-wide. It spots type: LoadBalancer and reads the elb.* annotations to figure out what to provision.

ccm → otc-api · Calls the ELB and EIP APIs

One API call creates the ELB instance, another creates the EIP. The EIP is bound to the ELB. A listener on the Service's port is added, pointing to a backend pool of cluster nodes.

ccm · Writes back IDs as annotations

The CCM updates the Service object with kubernetes.io/elb.id and kubernetes.io/elb.eip-id. These become the link between the K8s object and the OTC resources.

ccm · Sets .status.loadBalancer.ingress

The public EIP appears in .status.loadBalancer.ingress[0].ip. kubectl get svc now shows it under EXTERNAL-IP. Anything else in the cluster that watches Services (kube-proxy, the Gateway controller, etc.) reacts.

otc-fabric · BGP advertises the EIP

A few seconds later the EIP becomes routable on the public internet. Traffic to the EIP hits the ELB → a healthy node → the pod via kube-proxy.
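The whole loop can be followed from the CLI. A sketch using the Pattern A demo names (Service hello in namespace lb-demo, shown in section 2); this assumes that Service has already been applied:

```shell
# Watch the Service until the CCM publishes the EIP.
kubectl -n lb-demo get svc hello -w

# Extract just the public IP once it is set...
EIP=$(kubectl -n lb-demo get svc hello \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')

# ...and the ELB ID the CCM wrote back as an annotation.
kubectl -n lb-demo get svc hello \
  -o jsonpath='{.metadata.annotations.kubernetes\.io/elb\.id}'

# End-to-end check once BGP has converged.
curl -s "http://${EIP}/"
```

These are cluster-bound commands, so run them against the demo cluster; the jsonpath escaping of the dots in the annotation key is the same trick the repo's scripts use.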

2. Two patterns: create a new ELB or reuse an existing one

The Swiss OTC CCM honours two distinct annotations for shaping the ELB; pick one per Service. autocreate provisions a brand-new ELB+EIP per Service, while elb.id attaches another listener to an ELB that already exists.

Pattern A · autocreate (new ELB) · Demo: lb-demo / hello
apiVersion: v1
kind: Service
metadata:
  name: hello
  namespace: lb-demo
  annotations:
    kubernetes.io/elb.class: "union"
    kubernetes.io/elb.autocreate: |
      { "type":      "public",
        "name":      "k8s-lb-demo",
        "bandwidth_name":       "k8s-lb-demo-bw",
        "bandwidth_chargemode": "traffic",
        "bandwidth_size":       5,
        "bandwidth_sharetype":  "PER",
        "eip_type":             "5_bgp" }
spec:
  type: LoadBalancer
  selector: { app: hello }
  ports:
    - { port: 80, targetPort: http }
# Result: EIP 138.124.232.123, brand-new ELB k8s-lb-demo
CCM action:
  • +create ELB k8s-lb-demo
  • +create EIP, attach to ELB
  • +add listener :80
  • +write back elb.id and elb.eip-id annotations on the Service

Pattern B · elb.id reference (share existing ELB) · Demo: whoami / fortune
apiVersion: v1
kind: Service
metadata:
  name: fortune
  namespace: whoami
  annotations:
    kubernetes.io/elb.class: "union"
    kubernetes.io/elb.id: "71f02bde-...d43271"
    # NO autocreate — we're pointing at an existing ELB.
    # The UUID comes from the whoami Service the CCM
    # wrote back when whoami used Pattern A.
spec:
  type: LoadBalancer
  selector: { app: fortune }
  ports:
    # Different port → no listener conflict
    - { port: 8080, targetPort: http }
# Result: SAME EIP 138.124.232.13 as whoami, new listener :8080
CCM action:
  • ~look up ELB by ID (no create)
  • ~EIP unchanged
  • +add listener :8080 to the existing ELB
  • ~backend pool gets a second member group

The elb.id UUID is normally read from the Service that already created the ELB — the repo's scripts/72-fortune-demo.sh resolves it dynamically with kubectl get svc whoami -o jsonpath='{.metadata.annotations.kubernetes\.io/elb\.id}' and substitutes it into the manifest before kubectl apply. Hardcoding the UUID would couple the manifest to one specific cluster.
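The resolve-and-substitute step can be sketched stand-alone. Here the UUID is hardcoded only for illustration; in a real run it comes from the kubectl jsonpath query above:

```shell
# Normally resolved live from the whoami Service:
#   ELB_ID=$(kubectl get svc whoami -n whoami \
#     -o jsonpath='{.metadata.annotations.kubernetes\.io/elb\.id}')
ELB_ID="71f02bde-ed6f-4d38-bc03-685b60d43271"

# Render the manifest template: replace the __ELB_ID__ placeholder.
# The quoted 'EOF' keeps the placeholder literal inside the heredoc.
cat <<'EOF' | sed "s/__ELB_ID__/${ELB_ID}/"
    kubernetes.io/elb.class: "union"
    kubernetes.io/elb.id: "__ELB_ID__"
EOF
```

The rendered fragment can then be piped straight into kubectl apply -f -.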

3. The annotations that matter

The OTC CCM honours these kubernetes.io/elb.* annotations. Without elb.autocreate (or elb.id), the CCM skips the Service with "service annotation… is not defined, skip".

Annotation · Role · Requirement

kubernetes.io/elb.class · required
  "union" = shared ELB (cheap, dev). "performance" = dedicated ELB (prod tier, needs AZ + flavour).

kubernetes.io/elb.autocreate · required for autocreate
  JSON blob describing the ELB + EIP to create. Without it the CCM skips. The schema is strict: bandwidth_name is required, and missing fields return elb autoCreate field:[X] is invalid.

kubernetes.io/elb.id · alternative
  Use an EXISTING ELB instead of creating one. Mutually exclusive with autocreate. The CCM also writes this back after autocreate so future reconciles bind to the same ELB.

kubernetes.io/elb.eip-id · written back
  EIP UUID. Written back by the CCM after EIP attachment; readable for cross-referencing with the OTC console.

kubernetes.io/elb.pass-through · optional
  When onlyLocal, ELB → pod traffic bypasses kube-proxy SNAT. Envoy Gateway sets this by default.
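The skip behaviour is easy to reproduce: a plain LoadBalancer Service with no elb.* annotations at all is ignored by the CCM and never gets an address. A minimal sketch (the Service name and selector are made up for illustration):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: no-elb-annotations   # hypothetical name
  # no kubernetes.io/elb.* annotations -> CCM skips this Service
spec:
  type: LoadBalancer
  selector: { app: demo }
  ports:
    - { port: 80, targetPort: http }
# EXTERNAL-IP stays <pending>; the CCM logs
# "service annotation… is not defined, skip".
```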

4. Run the demos

The repo ships two scripts — one for each pattern. Both apply the manifest, poll until the ELB has an IP, print the CCM events, and curl the result.

Pattern A · autocreate (new ELB)

./scripts/70-lb-demo.sh

Expected output (abridged):

### 1/4 Apply manifest
namespace/lb-demo created · service/hello created
### 3/4 Live watch of the EXTERNAL-IP
[ 1/60] EXTERNAL-IP = <pending>   (CCM creates ELB...)
✓ EXTERNAL-IP: 138.124.232.123          # brand-new EIP
### 4/4 Details
"kubernetes.io/elb.id":     "056c192d-..."   # written back by CCM
"kubernetes.io/elb.eip-id": "83cc9ef5-..."
EnsuredLoadBalancer

Tear it down (the CCM deletes the ELB and EIP too):

./scripts/70-lb-demo.sh --delete

Pattern B · elb.id (share existing ELB)

Prerequisite: kubectl apply -f manifests/whoami-demo.yaml first (creates the ELB that fortune will attach to).

./scripts/72-fortune-demo.sh

Expected output (abridged):

### 1/3 Determine the whoami ELB UUID
✓ whoami-ELB: 71f02bde-ed6f-4d38-bc03-685b60d43271
### 2/3 Render + apply the manifest
==> Replacing __ELB_ID__ → 71f02bde-...
service/fortune created
### 3/3 Waiting until the CCM has attached the second listener
✓ fortune Service has IP: 138.124.232.13    # SAME as whoami
✓ → SAME IP as whoami! Both Services share one ELB.

Tear it down (removes only the listener — the ELB stays alive for whoami):

./scripts/72-fortune-demo.sh --delete

The script's clever bit is line 5: kubectl get svc whoami -n whoami -o jsonpath='{.metadata.annotations.kubernetes\.io/elb\.id}' — it reads the UUID that the CCM wrote back on whoami's Service after Pattern A ran, then seds it into the fortune manifest before kubectl apply. The same trick scales to any number of apps sharing one ELB.
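Scaled up, the same substitution can stamp out one Service per app, all pointing at the same ELB. A sketch: the app names are made up, each listener gets its own port to avoid conflicts, and in a live run ELB_ID would come from the jsonpath lookup rather than a literal:

```shell
# Hypothetical: in a live cluster this would be the jsonpath lookup
# against the Service that originally autocreated the ELB.
ELB_ID="71f02bde-ed6f-4d38-bc03-685b60d43271"
PORT=8080

for app in fortune quotes metrics; do   # made-up app names
  cat <<EOF
---
apiVersion: v1
kind: Service
metadata:
  name: ${app}
  annotations:
    kubernetes.io/elb.class: "union"
    kubernetes.io/elb.id: "${ELB_ID}"
spec:
  type: LoadBalancer
  selector: { app: ${app} }
  ports:
    - { port: ${PORT}, targetPort: http }   # one distinct port per listener
EOF
  PORT=$((PORT + 1))
done
# Pipe the output into `kubectl apply -f -` to attach all listeners at once.
```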

5. Direct LB vs Gateway API path

Both end up with an OTC ELB + EIP. The difference is who creates the Service and what sits between the ELB and your pod.

Direct LoadBalancer Service

You write the Service yourself. Traffic path:

  • OTC ELB :80
  • kube-proxy NodePort
  • Pod

Good for: one app, one IP, no L7 routing logic.

Limits: one ELB per Service. No path-/host-/header-based routing. No traffic splitting.

Gateway API path

You write a Gateway + HTTPRoute. The Gateway controller (Envoy Gateway) internally creates a Service of type LoadBalancer to expose itself. Traffic path:

  • OTC ELB :80
  • kube-proxy NodePort
  • Envoy data-plane pod
  • HTTPRoute match → backend pod

Good for: many apps sharing one ELB, path-/host-based routing, canary, header filters, gRPC, TLS termination.

The ELB mechanism is identical — Envoy Gateway just automates the step you took manually here.
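For comparison, the Gateway API version of two apps behind one ELB looks roughly like this. A sketch only: the gatewayClassName, object names, and path are assumptions, not values from this cluster:

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: shared-gw              # hypothetical name
spec:
  gatewayClassName: eg         # assumption: Envoy Gateway's class name
  listeners:
    - name: http
      port: 80
      protocol: HTTP
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: hello-route            # hypothetical name
spec:
  parentRefs:
    - name: shared-gw
  rules:
    - matches:
        - path: { type: PathPrefix, value: /hello }
      backendRefs:
        - name: hello          # plain ClusterIP Service is enough here
          port: 80
# Envoy Gateway creates the type: LoadBalancer Service (and thus the
# OTC ELB + EIP) behind this Gateway on its own.
```

Note that with this path your app Services no longer need type: LoadBalancer or elb.* annotations; only the Gateway's own Service talks to the CCM.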