The kubriX Architecture
The kubriX Stack at a Glance
kubriX is not a monolithic platform. Instead, it's a curated collection of best-in-class open-source tools that work together as an integrated Internal Developer Platform.
Think of it as Kubernetes plus the essential platform tools you need - pre-integrated, tested together, and deployable as one stack.
Main Components
kubriX orchestrates these production-proven components:
- ArgoCD (GitOps engine)
- Kargo (GitOps promotion)
- Ingress NGINX (Ingress controller)
- Keycloak (IAM)
- Vault and the External Secrets Operator (Secrets Management)
- Kyverno (Policy Management)
- Grafana LGTM (Observability)
- Kubecost (Cost Management)
- Backstage (Developer Portal)
- Velero (Backup & Recovery)
- ... and many more
Why These Tools?
We didn't just throw tools together. Each component was selected against these criteria:
- Production-Proven - Used by thousands of companies in production
- Cloud-Native Foundation - CNCF graduated or incubating projects
- Active Community - Actively maintained with strong ecosystems
- Kubernetes-Native - Built for Kubernetes, not bolted on
- Replaceable - Don't like a component? Swap it out
How It All Fits Together
kubriX organizes platform functionality into four core pillars that address the operational requirements of running Kubernetes at scale:

ENABLE
Self-service layer for development teams:
- Developer Portal - Centralized interface for APIs, documentation, and workflows
- Service Catalog - Browse and discover available services and templates
- Golden Paths - Standardized deployment patterns that encode organizational best practices
- Template System - Pre-configured templates for common application types
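A service typically enters the catalog through a descriptor file in its repository. The sketch below shows a minimal Backstage `catalog-info.yaml`; the service and team names are hypothetical, and the exact annotations you need depend on your portal setup.

```yaml
# Illustrative catalog descriptor; names are placeholders.
apiVersion: backstage.io/v1alpha1
kind: Component
metadata:
  name: payments-api              # hypothetical service name
  description: Example service registered through the developer portal
  annotations:
    backstage.io/techdocs-ref: dir:.   # serve docs from this repo
spec:
  type: service
  lifecycle: production
  owner: team-payments            # hypothetical owning team
```

Once registered, the service shows up in the catalog with its docs, owner, and lifecycle stage attached.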
DELIVER
GitOps-based deployment and lifecycle management:
- GitOps Engine - Declarative, Git-driven continuous deployment
- Multi-Stage Promotion - Controlled progression across environments (dev → staging → production)
- Application Orchestration - Automated rollouts, rollbacks, and health checks
- Virtualization Support - Run VMs alongside containers when legacy workloads require it
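In practice, each deployed workload is described by an ArgoCD `Application` resource that points at a path in Git. A minimal sketch, with placeholder repository and namespace names:

```yaml
# Illustrative ArgoCD Application; repoURL and names are placeholders.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: payments-api-dev
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://example.com/org/deployments.git
    targetRevision: main
    path: apps/payments-api/dev   # manifests for the dev stage
  destination:
    server: https://kubernetes.default.svc
    namespace: payments
  syncPolicy:
    automated:
      prune: true      # delete resources removed from Git
      selfHeal: true   # revert manual drift back to Git state
```

Promotion across stages then amounts to updating the Git path or revision for the next environment, which Kargo can automate.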
SECURE
Identity, policy, and secrets management:
- Identity Provider - Centralized authentication and Single Sign-On (SSO)
- Authorization - Role-based access control (RBAC) and fine-grained permissions
- Policy Engine - Admission control, compliance validation, and governance rules
- Secrets Management - Centralized secret storage with automatic injection into workloads
- Security Scanning - Vulnerability detection integrated into deployment pipelines
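Governance rules are expressed as policy resources evaluated at admission time. A minimal Kyverno policy sketch (the label requirement is an illustrative example, not a kubriX default):

```yaml
# Illustrative Kyverno policy: reject Deployments without a 'team' label.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-team-label
spec:
  validationFailureAction: Enforce   # block non-compliant resources
  rules:
    - name: check-team-label
      match:
        any:
          - resources:
              kinds:
                - Deployment
      validate:
        message: "All Deployments must carry a 'team' label."
        pattern:
          metadata:
            labels:
              team: "?*"             # any non-empty value
```

Because policies are plain Kubernetes resources, they ship through the same GitOps pipeline as everything else.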
OBSERVE
Operational visibility and cost tracking:
- Log Aggregation - Centralized collection and search across all clusters
- Metrics Collection - Time-series data for performance monitoring and alerting
- Distributed Tracing - Request flow tracking across microservices
- Cost Analytics - Resource usage and cost tracking per team, namespace, or application
- Unified Dashboards - Single pane of glass for platform and application health
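How an application opts into metrics collection depends on the collector you run; assuming a Prometheus-compatible agent with the Prometheus Operator CRDs installed, a workload can expose itself via a `ServiceMonitor` like this (names are placeholders):

```yaml
# Illustrative scrape config; assumes Prometheus Operator CRDs are present.
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: payments-api
  namespace: payments
spec:
  selector:
    matchLabels:
      app: payments-api    # matches the workload's Service labels
  endpoints:
    - port: http-metrics   # named port on the Service
      path: /metrics
      interval: 30s
```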
Deployment Model
kubriX supports both single-cluster and multi-cluster deployments through a hub-and-spoke architecture:

Hub Cluster (Management)
The hub runs centralized platform services:
- GitOps controllers (ArgoCD) for managing spoke deployments
- Shared authentication and policy services
- Cross-cluster observability aggregation
- Template and configuration repository
Spoke Clusters (Workload)
Spokes run application workloads with platform services deployed locally:
- Teams deploy applications through self-service interfaces
- Platform capabilities (secure, observe) are available in each cluster
- Configurations are synchronized from the hub via GitOps
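One common way to fan platform baseline out from the hub is an ArgoCD `ApplicationSet` with a cluster generator, which stamps out one `Application` per spoke registered with ArgoCD. A hedged sketch with placeholder names:

```yaml
# Illustrative ApplicationSet on the hub; repoURL and paths are placeholders.
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: platform-baseline
  namespace: argocd
spec:
  generators:
    - clusters: {}               # one entry per cluster known to ArgoCD
  template:
    metadata:
      name: 'baseline-{{name}}'  # {{name}} = registered cluster name
    spec:
      project: platform
      source:
        repoURL: https://example.com/org/platform-config.git
        targetRevision: main
        path: baseline
      destination:
        server: '{{server}}'     # {{server}} = cluster API endpoint
        namespace: platform-system
```

Adding a new spoke is then just registering the cluster with ArgoCD; the baseline follows automatically.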
GitOps Workflow
All platform and application configurations are managed declaratively through Git:
- Platform team maintains cluster configurations, policies, and templates in Git
- Application teams deploy using approved templates through portal or Git
- ArgoCD detects changes and reconciles desired state across clusters
- Policy validation and secrets injection happen automatically during deployment
- All changes are auditable through Git history
This approach ensures consistent configuration across environments and enables reliable rollbacks when needed.
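One possible layout for such a configuration repository (illustrative only; the actual kubriX repository structure may differ):

```text
platform-config/
├── clusters/
│   ├── hub/             # hub cluster: GitOps controllers, IAM, aggregation
│   └── spoke-prod/      # per-spoke overrides
├── policies/            # admission policies applied everywhere
├── templates/           # golden-path application templates
└── apps/
    └── payments-api/    # hypothetical app, one folder per stage
        ├── dev/
        └── prod/
```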
Design Principles
Modular and replaceable
Each component serves a specific function and can be swapped for alternatives. Don't need the Grafana LGTM stack? Keep the observability solution you already run. The component interfaces remain stable.
Kubernetes-native
Everything runs as Kubernetes resources using standard APIs. Custom Resource Definitions extend functionality where needed. Standard tooling (kubectl, helm) works as expected.
GitOps-first
All configuration is declarative and versioned in Git. The cluster state is derived from Git, not manually applied. Changes are auditable and reversible.
Production-tested
Components are selected based on production usage at scale. We use CNCF graduated and incubating projects with active communities and long-term support trajectories.