
Understanding Knative Eventing: Broker & Trigger Architecture and Implementation

This article provides a comprehensive overview of Knative Eventing's Broker and Trigger model, detailing background concepts, event routing patterns, data‑plane and control‑plane workflows, and includes practical YAML and command‑line examples for deploying and managing the components in a Kubernetes environment.

Cloud Native Technology Community

Knative consists of two main sub-projects: Serving and Eventing. This article focuses on the Eventing side, introducing several event-routing patterns, such as simple Source→Service binding, channel-based subscriptions, parallel and sequential processing, and the Broker & Trigger model, which is the primary subject of this guide.
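As a concrete illustration of the simplest pattern, a source can sink directly to a service with no broker in between. The sketch below uses a PingSource; the names `ping-source` and `event-display` are placeholders, and field names follow the `sources.knative.dev/v1` schema:

```yaml
# A PingSource emits a CloudEvent on a cron schedule and sends it
# straight to its sink -- no Broker or Trigger involved.
apiVersion: sources.knative.dev/v1
kind: PingSource
metadata:
  name: ping-source          # placeholder name
spec:
  schedule: "*/1 * * * *"    # fire once a minute
  contentType: "application/json"
  data: '{"message": "hello"}'
  sink:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: event-display    # placeholder consumer service
```

This pattern is the easiest to reason about, but every source must know its consumer; the Broker & Trigger model discussed below decouples the two.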

The Broker & Trigger model uses CloudEvents as the event format. An event source (e.g., GitHub, Heartbeats, k8s, ContainerSource) creates an event of a specific type, which is sent to a Broker. Multiple Trigger objects can bind to the same broker, each applying a filter (e.g., type=foo) to decide whether the event should be forwarded to its associated consumer service.
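For reference, a CloudEvent of type foo serialized in JSON structured mode would look roughly like this (the source, id, and payload are made up for illustration):

```json
{
  "specversion": "1.0",
  "type": "foo",
  "source": "/my/event-source",
  "id": "a1b2c3d4",
  "time": "2021-01-01T00:00:00Z",
  "datacontenttype": "application/json",
  "data": {"message": "hello"}
}
```

A Trigger with filter `type=foo` matches this event on its `type` attribute alone; the other attributes are ignored unless the filter also names them.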

In the data plane, events enter through the broker's ingress address, which is published in the broker resource's status:

# kubectl get broker
NAME      URL                                                                         AGE   READY   REASON
default   http://broker-ingress.knative-eventing.svc.cluster.local/default/default   46h   True

The broker can be created manually or automatically via namespace injection. The broker's address is exposed through broker-ingress, which forwards events to a channel (often a NatssChannel) managed by the natss-ch-dispatcher. The dispatcher publishes messages to NATS Streaming and also watches the channel for subscribers.
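Automatic creation via namespace injection is enabled by labeling the namespace; a sketch is below (the exact label key has varied across Knative releases, so check the documentation for the version you run):

```yaml
# Labeling a namespace asks the injection controller to create
# a default Broker in it automatically.
apiVersion: v1
kind: Namespace
metadata:
  name: default
  labels:
    eventing.knative.dev/injection: enabled
```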

Triggers watch the broker and, based on their filter attributes, create Subscription resources that link the channel to a broker-filter service. The filter service evaluates the event and forwards it to the subscriber's URI if the filter matches. Replies from consumers can be routed back through the broker, completing the event loop.
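The Subscription that a Trigger produces is what ties the channel to the filter service. A hand-written equivalent would look roughly like the sketch below; the names are illustrative, and in practice the broker controller generates this object for you:

```yaml
apiVersion: messaging.knative.dev/v1
kind: Subscription
metadata:
  name: trigger1-subscription   # illustrative name
spec:
  channel:
    apiVersion: messaging.knative.dev/v1
    kind: Channel               # backed by a NatssChannel here
    name: default-kne-trigger   # illustrative: the channel backing the broker
  subscriber:
    ref:
      apiVersion: v1
      kind: Service
      name: broker-filter       # evaluates the Trigger's filter
  reply:
    ref:
      apiVersion: eventing.knative.dev/v1
      kind: Broker
      name: default             # replies re-enter the broker
```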

The control plane consists of several controllers:

mt-broker-controller watches Broker creation, creates the corresponding Channel (e.g., NatssChannel), and updates the broker status with channel addresses.

natss-ch-controller watches NatssChannel, updates the NATS Streaming service status, and creates an ExternalName service pointing to natss-ch-dispatcher.

eventing-controller watches Subscription objects, resolves subscriber and reply URIs, and updates the underlying channel with subscriber URLs.

These controllers coordinate to ensure that when a Trigger is created, a corresponding subscription is generated, linking the broker’s channel to the appropriate filter and consumer services.

Finally, the article provides a set of YAML manifests for the broker, triggers, and services, allowing readers to reproduce the example environment:

apiVersion: eventing.knative.dev/v1
kind: Broker
metadata:
  name: default
  namespace: default
---
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: trigger1
  namespace: default
spec:
  broker: default
  filter:
    attributes:
      type: foo
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: service1
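The manifests above reference service1 but do not define it. A minimal consumer might look like the sketch below, assuming the stock event_display image shipped with Knative Eventing releases; any container that accepts CloudEvents over HTTP would work in its place:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: service1
  namespace: default
spec:
  template:
    spec:
      containers:
        # event_display logs every CloudEvent it receives,
        # which makes it handy for verifying delivery.
        - image: gcr.io/knative-releases/knative.dev/eventing/cmd/event_display
```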

By following these steps, users can set up a Knative Eventing pipeline that demonstrates event generation, filtering, delivery, and optional reply handling within a Kubernetes cluster.

Tags: cloud-native, serverless, Kubernetes, Broker, Trigger, Knative, Eventing
Written by

Cloud Native Technology Community

The Cloud Native Technology Community, part of the CNBPA Cloud Native Technology Practice Alliance, focuses on evangelizing cutting‑edge cloud‑native technologies and practical implementations. It shares in‑depth content, case studies, and event/meetup information on containers, Kubernetes, DevOps, Service Mesh, and other cloud‑native tech, along with updates from the CNBPA alliance.
