This model illustrates the API Management and Gateway Control Architecture used to manage, secure, and expose microservices running on a Kubernetes cluster. It shows the interaction between human roles, Kong’s management components, and the underlying runtime platform.
At the top of the model, Developer and Product Manager roles interact with the system through two different interfaces:
• Kong OSS Client (developer-friendly CLI/tooling)
• Kong Manager UI (browser-based administrative console)
Both of these interfaces communicate with the Kong Manager API, which serves as the authoritative control plane for configuring Kong Gateway.
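To make that control-plane role concrete, the sketch below pushes a configuration change through an Admin/Manager-style API using plain HTTP calls, registering a gateway service and a route for it. The endpoint URL, service name, and upstream address are assumptions for illustration only and are not part of the model.

```python
import requests

# Assumed Admin/Manager API endpoint; in the model this is the component
# that the Kong OSS Client and Kong Manager UI both talk to.
ADMIN_API = "http://localhost:8001"

def register_service(name: str, upstream_url: str, path: str) -> None:
    """Upsert a gateway service and attach a route that exposes it."""
    # Create or update the service definition the gateway will proxy to.
    svc = requests.put(f"{ADMIN_API}/services/{name}", json={"url": upstream_url})
    svc.raise_for_status()

    # Attach a route so external clients can reach the service by path.
    route = requests.put(
        f"{ADMIN_API}/services/{name}/routes/{name}-route",
        json={"paths": [path]},
    )
    route.raise_for_status()

if __name__ == "__main__":
    # Hypothetical microservice address inside the Kubernetes cluster.
    register_service("orders", "http://orders.default.svc.cluster.local:8080", "/orders")
```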
The Kong Manager component represents the active API gateway runtime: it routes all service traffic and enforces policies such as rate limiting, authentication, and observability. Downstream, it directs and governs access to the Microservices, which are deployed within a Kubernetes v1.35 cluster. Kubernetes provides orchestration, scaling, and operational control for both the gateway and the microservices.
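Building on the previous sketch, the following example attaches rate-limiting and key-authentication plugins to that hypothetical service, which is one way the policy enforcement described above could be configured. The endpoint, service name, and limit values are again assumptions.

```python
import requests

ADMIN_API = "http://localhost:8001"  # assumed Admin/Manager API endpoint
SERVICE = "orders"                   # hypothetical service from the previous sketch

def enable_policies(service: str) -> None:
    """Attach rate-limiting and key-auth plugins to a gateway service."""
    plugins = [
        # Limit each consumer to 60 requests per minute, counted locally.
        {"name": "rate-limiting", "config": {"minute": 60, "policy": "local"}},
        # Require an API key on every request routed to this service.
        {"name": "key-auth"},
    ]
    for plugin in plugins:
        resp = requests.post(f"{ADMIN_API}/services/{service}/plugins", json=plugin)
        resp.raise_for_status()

enable_policies(SERVICE)
```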
________________________________________
Purpose of the Model
The purpose of this model is to provide a high-level architectural view of how API governance, configuration management, and service exposure are achieved in a microservices ecosystem. Specifically, it illustrates:
1. How stakeholders interact with the API platform
Developers and product managers have distinct access paths to manage API configurations, policies, and service definitions.
2. How Kong’s management plane is structured
The OSS Client and Manager UI both rely on the central Manager API, which ensures consistent, validated configuration changes.
3. How Kong Gateway governs and protects microservices
Kong Manager (gateway runtime) sits between clients and backend services, enforcing security and operational policies.
4. How Kubernetes serves as the hosting and orchestration environment
Kubernetes provides the underlying compute, networking, and scaling foundation for all services in the architecture.
Overall, the model shows an end-to-end pathway from human configuration actions to secure, policy-governed service traffic within a cloud-native microservices platform.
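As a rough end-to-end check of that pathway, the sketch below calls the hypothetical orders service through the gateway's proxy port, first without and then with an API key, showing the key-auth policy rejecting anonymous traffic before it ever reaches the cluster. The proxy URL and credential are assumptions carried over from the earlier sketches.

```python
import requests

PROXY = "http://localhost:8000"  # assumed gateway proxy (data plane) endpoint
API_KEY = "demo-key"             # hypothetical consumer credential

# Without a key the gateway rejects the request at the edge (HTTP 401);
# the call never reaches the microservice inside the cluster.
anonymous = requests.get(f"{PROXY}/orders")
print("no key:", anonymous.status_code)

# With a valid key the request is authenticated, counted against the
# rate limit, and proxied to the orders service running on Kubernetes.
authorized = requests.get(f"{PROXY}/orders", headers={"apikey": API_KEY})
print("with key:", authorized.status_code)
```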