
Why Platform Teams Should Not Spend Days Integrating Security Patches

· 4 min read
Johannes Kleinlercher
kubriX Dev, platform engineer, systems architect

Modern platform engineering teams rely on an increasingly complex open-source ecosystem.

A production-grade Kubernetes platform commonly includes:

  • GitOps tooling
  • ingress controllers
  • observability stacks
  • policy engines
  • secret management
  • CI/CD orchestration
  • infrastructure controllers
  • identity integrations

Every one of these components continuously ships updates, feature releases, and, most importantly, security fixes.

The challenge is no longer applying patches.

The challenge is validating that the entire platform still works afterwards.

The hidden cost of DIY platform engineering

When a critical CVE appears, internal platform teams suddenly need to:

  • identify affected components
  • evaluate compatible upstream versions
  • investigate breaking changes
  • run integration tests
  • execute end-to-end validation
  • prepare rollback procedures
  • coordinate rollout windows

This work often takes days.

Sometimes weeks.

And during that time, the platform remains exposed.

Many organizations underestimate how much engineering capacity is permanently consumed by maintaining and securing a self-assembled Kubernetes platform stack.

What kubriX changes

kubriX is designed to remove exactly this operational burden.

Instead of consuming dozens of upstream projects individually, customers receive:

  • curated platform releases
  • validated component combinations
  • integration-tested upgrades
  • end-to-end tested distributions
  • documented breaking changes
  • controlled migration paths

When a security issue appears, kubriX customers do not start from scratch.

They apply the prepared security release, run their own domain-specific validation, and can restore platform security within hours instead of turning every patch into a new platform engineering project.

10 security releases in 22 days

Between April 17 and May 9, 2026, kubriX delivered:

| KPI | Value |
| --- | --- |
| Security releases shipped | 10 |
| Average time between releases | 2.2 days |
| Critical CVEs fixed | 16+ |
| High severity CVEs fixed | 90+ |
| Platform components updated | 15+ |
| Integration and E2E tested | Yes |
| Breaking changes documented | Yes |

The fixes covered critical platform components including:

  • Argo CD
  • Traefik
  • Kyverno
  • Crossplane
  • OpenBao
  • Grafana
  • Loki
  • ExternalDNS
  • Testkube
  • Kargo
  • Vault providers
  • monitoring stacks

These are not peripheral services.

They are core operational building blocks of modern cloud platforms.

Security is an operational scaling problem

The larger the platform landscape becomes, the harder secure operations get.

The real operational risk is not Kubernetes itself.

The risk comes from:

  • dependency fragmentation
  • incompatible upstream releases
  • unvalidated integrations
  • hidden breaking changes
  • operational coordination overhead

This is where Internal Developer Platforms create measurable value.

Not by abstracting Kubernetes alone, but by industrializing platform operations.

Breaking changes matter more than version numbers

One of the biggest operational pain points during emergency patching is uncertainty.

Can we safely upgrade?

Will this break existing workloads?

Do we need migration steps?

kubriX security releases explicitly document known breaking changes and migration considerations.

For example, in kubriX 7.0.2:

> The optional second argument for `freightMetadata` that was deprecated in v1.8.0 has now been removed.

This kind of operational guidance reduces rollout risk and shortens recovery time.

From platform projects to operational routine

Without a curated platform distribution, every security patch becomes a mini platform engineering project.

With kubriX, security patching becomes a controlled operational routine.

That difference matters when critical vulnerabilities appear every week.

Final thoughts

Anyone can assemble open-source components.

The real challenge begins when critical CVEs arrive continuously and platform stability still needs to be guaranteed.

The value of an Internal Developer Platform is not measured on release day.

It is measured the day a critical vulnerability drops.

Preview Environments for Faster Pull Request Feedback

· 4 min read
Johannes Kleinlercher
kubriX Dev, platform engineer, systems architect

Every platform team knows the gap between "the pull request looks fine" and "this change actually works in a real environment."

Code review, CI checks, and Helm rendering are important, but they still leave one question open:

What happens when this pull request is deployed?

With preview environments in kubriX Prime, teams can now answer that question earlier. A labelled pull request can get its own temporary Argo CD application, its own namespace, and its own route before the change enters the regular GitOps promotion flow.

The problem with late feedback

Most delivery pipelines are good at validating syntax and unit-level behavior. They are less good at showing how an application behaves once all the platform pieces are involved:

  • ingress
  • certificates
  • DNS
  • Helm values
  • Kubernetes resources
  • platform policies
  • environment-specific configuration

Those issues often show up only after a change has already been merged and promoted into a shared test environment.

That is late feedback.

And late feedback tends to be expensive feedback.

How kubriX preview environments work

Preview environments use the Argo CD ApplicationSet pull request generator. When a pull request matches the configured labels, kubriX creates a temporary Argo CD application from the pull request's `head_sha`.

The generated preview environment is isolated from the regular stages:

| Resource | Example |
| --- | --- |
| Pull request label | `preview` |
| Argo CD application | `my-app-preview-pr-42` |
| Namespace | `team1-my-app-pr-42` |
| Deployed revision | PR `head_sha` |

When the pull request changes, the preview environment follows the new commit. When the label is removed or the pull request is closed, Argo CD prunes the preview application again.
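The lifecycle above can be sketched with an ApplicationSet using the pull request generator. This is an illustrative example, not the exact kubriX manifest; the repository owner, repo name, chart path, and label are placeholders:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: my-app-previews
spec:
  generators:
    - pullRequest:
        github:
          owner: example-org        # placeholder
          repo: my-app              # placeholder
        labels:
          - preview                 # only labelled PRs get an environment
        requeueAfterSeconds: 1800
  template:
    metadata:
      name: "my-app-preview-pr-{{number}}"
    spec:
      project: default
      source:
        repoURL: https://github.com/example-org/my-app.git
        targetRevision: "{{head_sha}}"   # deploy the PR head commit
        path: chart                      # placeholder chart path
        helm:
          values: |
            kubriXPreviewPRNumber: "{{number}}"
      destination:
        server: https://kubernetes.default.svc
        namespace: "team1-my-app-pr-{{number}}"
      syncPolicy:
        automated:
          prune: true                # removes the app when the PR closes
        syncOptions:
          - CreateNamespace=true
```

Because `prune: true` is set, removing the label or closing the pull request causes Argo CD to delete the generated application again, which matches the behavior described above.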

Designed for platform defaults and app-level control

Preview environments are configured in two layers.

Platform teams can define defaults during team onboarding:

```yaml
previewEnvironmentsDefaults:
  enabled: true
  labels:
    - preview
  requeueAfterSeconds: 1800
  valuesFiles:
    - values.yaml
    - values-preview.yaml
```

Application teams can override those defaults in their `app-stages.yaml`:

```yaml
previewEnvironments:
  labels:
    - qa
  requeueAfterSeconds: 600
  valuesFiles:
    - values.yaml
    - values-qa.yaml
```

This keeps the platform in control of the standard behavior while still giving application teams enough flexibility for their own workflows.

Preview URLs that follow the pull request

The application chart receives the pull request number as a Helm value:

```yaml
kubriXPreviewPRNumber: "42"
```

That makes it easy to create deterministic preview URLs:

```yaml
ingress:
  hosts:
    - host: team1-my-app-pr-{{ .Values.kubriXPreviewPRNumber }}.apps.example.com
```

In hub-and-spoke setups, kubriX can also inject the spoke ingress domain so the preview environment is created on the selected target cluster.

Where this fits with Kargo

Preview environments do not replace Kargo.

They sit before the promotion flow.

Kargo remains responsible for controlled promotion between stages such as test, QA, and production. Preview environments are for pull request validation before the change becomes part of the normal staged delivery path.

That gives teams a cleaner workflow:

  1. Open a pull request.
  2. Add the configured preview label.
  3. Review the deployed change in an isolated environment.
  4. Merge when ready.
  5. Promote through Kargo as usual.

Why this matters

Preview environments reduce the cost of finding deployment issues. They also make reviews more concrete: instead of reviewing only YAML and screenshots, teams can inspect the running application.

For platform teams, the feature keeps preview environments standardized. For application teams, it makes pull request validation feel much closer to production delivery without turning every PR into a manual platform request.

That is the kind of feedback loop an internal developer platform should make easy.

Learn more

The feature is documented in the kubriX docs:

Preview Environments

kubriX 7.0.0 - Open Source All the Way Down

· 4 min read
Philipp Achmueller
kubriX Dev, platform enthusiast

Some releases add features, others make bold architectural moves.

kubriX 7.0.0 does both: replacing two core infrastructure components with better open-source alternatives, leveling up observability, extending high availability across even more services, and making Backstage a more powerful self-service hub.

This is the kind of release that takes courage to ship: real breaking changes, real migrations, real improvements.

🔓 Open Source First: Vault → OpenBao

The headline change in kubriX 7.0.0 is the replacement of HashiCorp Vault with OpenBao.

OpenBao is the community-driven, truly open-source fork of Vault (licensed under MPL 2.0), created after HashiCorp moved Vault to the Business Source License. For kubriX users this means:

  • No license restrictions - use it freely in any environment
  • API-compatible with Vault - familiar workflows, same secrets engine
  • Active community development and security maintenance

🚦 Modern Ingress: Traefik Replaces ingress-nginx

kubriX 7.0.0 ships Traefik as the new default ingress controller, replacing ingress-nginx.

Traefik brings a more cloud-native approach to ingress routing:

  • Native support for dynamic configuration
  • Built-in dashboard (optional) and observability
  • Better integration with the rest of the kubriX stack

A migration runbook is included in the release for existing deployments, and the kubriX support team is available to assist with a smooth transition.

📊 Observability Level-Up

Prometheus Blackbox Exporter - Now Integrated

The prometheus-blackbox-exporter is now a first-class kubriX citizen. Enable it to get out-of-the-box external probing for your services - HTTP, TCP, ICMP and more - with pre-configured dashboards and alerts.

This closes a long-standing gap: you now get both internal metrics and external availability checks from a single platform.
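To illustrate what such external probing configures under the hood, here is a minimal blackbox-exporter module definition. The module names and settings are a generic sketch of the upstream chart's configuration format, not kubriX's shipped defaults:

```yaml
# Minimal prometheus-blackbox-exporter module sketch (illustrative).
modules:
  http_2xx:
    prober: http
    timeout: 5s
    http:
      preferred_ip_protocol: ip4
      valid_status_codes: []   # empty list means "any 2xx"
  tcp_connect:
    prober: tcp
    timeout: 5s
  icmp:
    prober: icmp
    timeout: 5s
```

Prometheus then scrapes the exporter with a target URL per service, and the pre-configured dashboards and alerts build on the resulting `probe_success` and latency metrics.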

Grafana v11, Tempo 2.0 & More Flexible Alerting

  • Grafana 11.4 with a new plugin syntax and updated dashboards
  • Tempo 2.0 for distributed tracing - with updated configuration options
  • Loki topology improvements and updated scalable deployment
  • More flexible Grafana alert routing: alert rules are now simpler to define and can be fully managed in team-onboarding values - no more scattered configs

k8s-monitoring v4

The underlying Kubernetes monitoring chart has been upgraded to v4, bringing new features, improved stability, and a dedicated prometheus-operator-crds app to cleanly separate CRD management from the monitoring stack.

🎭 Backstage: Permissions, Policies & Better Grafana Integration

kubriX 7.0.0 lays the groundwork for fine-grained Backstage permission control:

  • Permission and conditional policies are now configurable - giving platform teams control over who can do what in the self-service portal
  • The Grafana proxy now uses a sensible internal URL default, reducing manual configuration
  • Global variables for catalog URLs make multi-cluster setups cleaner and less error-prone

🏆 HA for Everything

kubriX Prime customers get high availability extended to even more platform components in 7.0.0:

  • CloudNativePG HA configuration
  • Crossplane and Crossplane provider HA
  • Keycloak HA
  • Kyverno HA
  • External Secrets Operator HA
  • k8s-monitoring v4 HA settings

Combined with the HA work from kubriX 6.0.0, this means virtually every critical platform service now has a production-grade HA configuration available out of the box.

🔄 Other Upgrades

Multi-Cluster & Velero Backup for Spoke Clusters

  • Velero backup is now configurable for spoke clusters - data protection that follows your workloads across every cluster
  • Cluster-specific values files for spoke-applications give more flexibility in managing multi-cluster setups

Mimir & Loki Authentication

Mimir and Loki now support tenant credentials stored in OpenBao - making the observability stack more secure and ready for multi-tenant production environments.

E2E Testing with Playwright

The kubriX Prime CI pipeline now runs Playwright end-to-end tests, catching regressions across the full platform before they ever reach you.

Why kubriX 7.0.0 Matters

This release is about making principled choices:

  • Open-source where it counts → OpenBao over a BSL-licensed Vault
  • Better defaults → Traefik, blackbox probing, HA everywhere
  • Stronger operations → flexible alerting, Grafana v11, Tempo 2.0
  • Enterprise-ready resilience → HA for every Prime component

kubriX 7.0.0 is the most production-ready release we've shipped yet.

Upgrade to kubriX 7.0.0

Already a kubriX Prime customer? kubriX 7.0.0 is available via your Git update channel. Please review the breaking changes and migration runbooks before upgrading, or reach out to kubriX support for guided assistance.

New to kubriX? Let's talk about how to build an internal developer platform that's open, resilient, and ready for production.

kubriX 7.0.0 - open source all the way down. 🚀

When 2 CPUs of “Nothing” Turned Into a Deep Mimir Lesson

· 5 min read
Johannes Kleinlercher
kubriX Dev, platform engineer, systems architect

Debugging Store-Gateway CPU Spikes, GC Thrashing, and a Hidden Memory Limit


The symptom

Our Grafana Mimir store-gateway pods suddenly jumped from ~0.2 CPU to nearly 2 full cores each.

No traffic spike. No deployment. No restarts. No errors.

Just CPU.

This is exactly the kind of issue that sends engineers down rabbit holes — because nothing obvious is wrong.



Initial assumptions (all wrong)

When CPU spikes without traffic, the usual suspects are:

  • query surge
  • compactor backlog
  • missing sparse index headers
  • object storage latency
  • throttling or node pressure

All plausible.

None correct.


The turning point: profiling instead of guessing

Instead of chasing hypotheses, we captured a CPU profile directly from a running store-gateway.

Mimir exposes Go’s built-in profiling endpoint, so you can sample real CPU usage without restarting anything.

We ran:

```shell
go tool pprof -top store-gateway.cpu.pprof
```

The result:

```
runtime.gcBgMarkWorker → ~95%
```

That means:

The CPU wasn’t busy doing useful work. It was almost entirely doing garbage collection.

At that moment the problem category changed completely.

This was not a load issue. This was a memory behavior issue.


Confirming with a heap profile

Next step: inspect memory.

Heap profile result:

  • ~662 MB live heap
  • ~83% used by index cache structures

This told us two important things:

  1. Memory usage was expected, not a leak.
  2. The cache was working normally.

So why was GC running constantly if memory usage was healthy?


The hidden culprit: GOMEMLIMIT

The answer wasn’t in Mimir code. It was in configuration.

The Helm chart automatically sets:

```
GOMEMLIMIT = memory_request
```

Our store-gateway configuration:

```yaml
resources:
  requests:
    memory: 512Mi
```

So Go’s runtime believed:

“I must keep heap usage under 512 MiB.”

But the real working set needed ~660 MB.

That creates a classic GC thrash loop:

heap grows → exceeds limit → GC runs aggressively → CPU spikes → repeat

Nothing was broken. The runtime was behaving exactly as instructed.


Why Kubernetes made this subtle

We hadn’t set memory limits — only requests.

So Kubernetes would happily allow the container to use more than 512Mi.

But Go didn’t know that.

To Go, GOMEMLIMIT is the limit, regardless of Kubernetes policy.

This created a hidden mismatch:

| Layer | Believed limit |
| --- | --- |
| Go runtime | 512Mi |
| Kubernetes | unlimited |

This kind of cross-layer interaction is where many real production problems live.


The fix

Increase memory request.

We changed:

```yaml
memory: 512Mi
```

to:

```yaml
memory: 2Gi
```

That automatically raised:

```
GOMEMLIMIT ≈ 2Gi
```

Result:

  • GC frequency dropped
  • CPU dropped immediately
  • system stabilized

No code changes. No scaling. No tuning.

Just correct sizing.


Why this happens specifically in store-gateway

Store-gateway is intentionally memory heavy.

It caches:

  • index entries
  • postings lists
  • series metadata

These caches reduce latency and object-store reads.

So high memory usage is expected and desirable.

Trying to force it into a tiny memory footprint simply shifts cost to CPU (via GC).


How to capture a CPU profile from Mimir store-gateway

This is safe to do in production.

1) Port-forward to a pod

```shell
kubectl -n mimir port-forward pod/mimir-store-gateway 8080:8080
```

(your pod names may differ slightly)

2) Download a profile

```shell
curl -o cpu.pprof \
  http://localhost:8080/debug/pprof/profile?seconds=30
```

3) Analyze locally

```shell
go tool pprof -top cpu.pprof
```

Most useful commands inside pprof:

| Command | Purpose |
| --- | --- |
| `top` | hottest functions |
| `top -cum` | cumulative cost |
| `list func` | inspect code path |

Flame graph view:

```shell
go tool pprof -http=:0 cpu.pprof
```

How to capture heap profile

```shell
curl -o heap.pprof \
  http://localhost:8080/debug/pprof/heap
```

Analyze:

```shell
go tool pprof -top -inuse_space heap.pprof
```

Useful modes:

| Mode | Meaning |
| --- | --- |
| `inuse_space` | live memory |
| `alloc_space` | allocation churn |
| `alloc_objects` | allocation rate |

Reading profiles correctly

Common CPU profile signatures:

| Pattern | Interpretation |
| --- | --- |
| `runtime.gc*` dominates | GC thrashing |
| `syscall` dominates | IO bound |
| `crypto/tls` dominates | TLS overhead |
| app code dominates | real workload |

Profiles remove guesswork.


Key lessons

1. CPU problems are often memory problems

If GC dominates CPU, look at heap sizing first.


2. Requests matter more than limits for Go apps

When GOMEMLIMIT is tied to requests, the request effectively becomes the runtime memory ceiling.


3. High memory usage isn’t bad

Caches are supposed to use memory. Starving them just moves cost elsewhere.


4. Profiling > dashboards

Metrics tell you that something is wrong. Profiles tell you what is wrong.


5. Most production mysteries aren’t bugs

They’re interactions between layers:

  • runtime behavior
  • container scheduling
  • Helm defaults
  • caching logic

Understanding those interactions is what distinguishes platform engineers from operators.


Final takeaway

Nothing was broken.

The system behaved exactly as configured.

We just didn’t realize how those configurations interacted.

That’s the real lesson:

Production performance issues are often not failures — they’re misunderstandings.

And the fastest way to resolve them is:

Profile first. Tune second.

kubriX 6.0.0 — Our Christmas Present to Platform Engineers

· 2 min read
Johannes Kleinlercher
kubriX Dev, platform engineer, systems architect

Christmas is the time for good food, time with family, and — if you’re a platform engineer — finally having a moment to breathe while everything just works.

This year, we’re wrapping up something special under the kubriX tree: kubriX 6.0.0, a release focused on high availability, resilience, and platform maturity.

No shiny toy features for one demo.
This is the kind of present you still appreciate in February — when clusters upgrade, nodes disappear, and your platform keeps running.

🎁 What’s in the Box?

🚀 High Availability — Built In, Not Bolted On

kubriX 6.0.0 makes high availability a first-class citizen across the platform.

  • PodDisruptionBudgets to survive node drains and upgrades

  • topologySpreadConstraints to distribute workloads across failure domains

  • Dedicated HA values and topology-aware defaults
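As a sketch of what these defaults express, here is a minimal PodDisruptionBudget; the name, label, and threshold are illustrative placeholders, not kubriX's actual shipped values:

```yaml
# Illustrative only: names, labels, and numbers are placeholders.
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: my-service-pdb
spec:
  minAvailable: 1          # keep at least one replica up during node drains
  selector:
    matchLabels:
      app: my-service
```

The matching `topologySpreadConstraints` live in the workload's pod template and use a key such as `topology.kubernetes.io/zone` so replicas land in different failure domains.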

🔐 GitOps That Behaves While You’re on Holiday

Argo CD got some serious love in this release:

  • Refactored dashboards for better operational clarity

  • Safer rolling updates in HA mode

  • Clearer permission models for platform teams and application teams

  • Admin-only terminal access for controlled troubleshooting

This means GitOps workflows that stay predictable and boring — which is exactly what you want when you’re not at your desk.

🧠 Cleaner Configuration, Less Mental Overhead

One of the biggest internal gifts in 6.0.0 is the new multi-layer values structure:

  • Clear separation of defaults, environment values, and overrides

  • kubrix-default values to stay DRY and reduce unintended diffs

  • Bootstrap and installer aligned to the same structure

This makes kubriX much easier to operate across multiple cloud providers, clusters, stages, and teams — even after a long year.

🎅 Why kubriX 6.0.0 Matters

kubriX 6.0.0 is not about flashy features — it’s about sleeping better:

  • High availability by default

  • Predictable behavior during upgrades

  • Cleaner configuration at scale

  • Safer GitOps workflows

  • A platform that doesn’t need babysitting

With kubriX 6.0.0, setting up and running an internal developer platform is simpler, more secure, and more scalable than ever.

🎄 Unwrap kubriX 6.0.0

Already a kubriX Prime customer? kubriX 6.0.0 is available automatically via your Git update channel — no action needed.

New to kubriX? Let’s talk about how to build a resilient internal developer platform that doesn’t ruin your holidays.

kubriX 6.0.0 — our Christmas present to platform engineers everywhere. 🎁🚀

Introducing KubriX 5.0 - Scalable, Flexible and Team-Centric Platform Engineering

· 4 min read
Johannes Kleinlercher
kubriX Dev, platform engineer, systems architect

We’re excited to announce kubriX 5.0.0 — a release focused on simplicity, resilience, and better day-2 operations. From a brand-new installer to streamlined observability and smarter pipelines, kubriX 5.0.0 is built to give platform engineers and developers a stronger foundation with less friction.

Here’s what’s new.

What’s New in kubriX 5.0?

🚀 A Brand-New Installer

Installing kubriX just got much easier. Instead of running local scripts on your workstation, you now simply run one kubectl apply command, and the kubriX-installer takes care of the rest — directly inside your Kubernetes cluster.

  • No more fragile workstation dependencies.

  • More stable, reproducible installations.

  • Faster setup for demos, PoCs, or production clusters.

⚡ Smarter Bootstrapping

Bootstrapping kubriX (or kubriX-prime) in your GitOps repo is now part of the installer:

  • Set KUBRIX_BOOTSTRAP=true and provide your DNS provider, domain, and Git repo — and you’re ready in minutes.

  • Out-of-the-box support for AWS, Cloudflare, STACKIT, and IONOS (plus any provider supported by external-dns).

  • Works seamlessly on local Kind clusters for quickstarts or on your real production cluster.

🔒 Stronger Defaults & Security

  • All platform services now dynamically generate admin usernames and passwords — even if you don’t customize them, no one can guess the defaults.

  • Secrets handling and password rotation are documented with clear guides.

  • Default Velero backup schedules for critical Kubernetes resources are included, so you always have a safety net.

📊 Better Observability & Alerting

  • False-positive/negative alerts reduced, and alerting secrets are no longer mandatory for teams that don’t need them.

  • Matrix chat integration added as an Alertmanager receiver — a decentralized, open-source alternative to Slack.

  • Mimir cardinality dashboard integrated, so you can track metric series growth and find bottlenecks.

  • Loki topology switch: from SingleBinary to SimpleScalable, making logs more performant for both small and very large clusters.

🔧 Dependency & Service Improvements

  • Switched from the Bitnami Keycloak chart to the official Keycloak Operator - future-proof and open-source-friendly.

  • Smarter dependency detection shows which Helm chart dependencies and container images are actually used.

  • Key platform updates:

    • Kargo 1.7 (with a new openPr flag for approval workflows in pipelines)

    • k8s-monitoring v3 (many improvements, new features, stability)

    • external-secrets v0.17

    • Argo CD v3.1.6

  • Nearly every other platform service has been refreshed for stability and security.

🧪 Testing & Reliability

  • Early integration of Testkube for end-to-end platform testing. We already use it internally, and plan to expand it so you can validate custom platform behaviors in every release.

  • Installer hardened with countless under-the-hood improvements to make it battle-ready for real-world environments.

📚 Documentation Upgrades

  • Full restructure of the documentation, with more details on GRC (Governance, Risk, and Compliance).

  • Expanded guides for secrets management, password rotation, and cluster operations.

  • A stronger knowledge base for platform engineers and developers, updated continuously.

Why kubriX 5.0.0 Matters

  • Faster, safer installations — the installer is cluster-native and resilient.

  • Stronger defaults — no weak passwords, built-in backups, and improved alerting.

  • More observability — dashboards, alerts, and logging at scale are just there.

  • Future-proof upgrades — by aligning with official operators and the latest upstreams.

  • Better knowledge base — documentation that empowers platform engineers, not slows them down.

With kubriX 5.0.0, setting up and running an internal developer platform is simpler, more secure, and more scalable than ever.

Get Started with KubriX 5.0

  • Already a KubriX Prime customer? You’re getting KubriX 5.0 automatically via your Git update channel — no action needed.

  • New to KubriX? Schedule a demo to see how we can accelerate your platform engineering journey.

  • Like what we’re building? ⭐ us on GitHub!

KubriX 5.0 is here — let’s build the next generation of internal platforms together.

Introducing KubriX 4.0 - Scalable, Flexible and Team-Centric Platform Engineering

· 3 min read
Philipp Achmueller
kubriX Dev, platform enthusiast

We’re thrilled to announce the release of KubriX 4.0 — our most flexible and team‑centric version yet!

This upgrade delivers major component refreshes, native vcluster integration, and fine‑grained controls that give platform and application teams more autonomy without sacrificing security.

What’s New in KubriX 4.0?

Next-Gen Core Components

KubriX 4.0 brings the heart of the platform to the latest majors:

  • Argo CD 3.0: the new UI, a faster diff engine, and improved sharding; we also implement tighter RBAC
  • Grafana 12: a major visual refresh plus query caching for lightning-fast dashboards
  • Kyverno 1.14: policy exceptions and generate controls for air-tight supply-chain guardrails
  • Backstage 1.38.1: faster catalog sync, tighter permissions, and dynamic scaffolder secrets in Vault

Keeping these giants current means less manual patching and an instant security win.

vCluster Integration & Team Self-Service (Prime)

Need ephemeral clusters for tests, proofs of concept, or customer demos? With the new vcluster template you can spin up fully isolated, cost-efficient virtual clusters inside any host cluster in minutes, complete with KubriX guardrails out of the box. Team members get admin rights inside their shared vcluster while platform engineers keep global policy control.

Smarter Hub & Spoke Onboarding (Prime)

Large organisations rarely have a single prod cluster. The new destinationClusters list inside the onboarding workflow lets you declare which team may deploy to which physical or virtual cluster. No more mis-deployments or ticket ping-pong: governance and autonomy in a single YAML stanza.
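A hypothetical onboarding stanza might look like this; only the `destinationClusters` key is named in the release, so the surrounding structure and values are assumptions for illustration:

```yaml
# Hypothetical sketch: everything except the destinationClusters key
# is an assumed structure, not the actual kubriX onboarding schema.
team: team-payments
destinationClusters:
  - prod-cluster-eu
  - test-vcluster-payments
```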

Quality-of-Life Enhancements

  • ignoreDifferences everywhere – fewer false “Out-of-Sync”s after Argo CD 3.0.
  • Auto-bootstrap of KubriX core into fresh customer repos.
  • Namespace label/annotation presets in the onboarding template for better policy targeting.

Granular Permissions Separation

Building on the last version's RBAC overhaul, kubriX 4.0 provides sub-team-level scopes across Argo CD, Vault, Backstage, Kargo, and Grafana. You can now:

  • Restrict dashboard editing while still allowing query exploration.
  • Delegate environment-specific Argo CD sync privileges to release engineers.
  • Separate catalog write access from Backstage entity ownership.

Breaking changes you must review

  • Argo CD 2.14 → 3.0: check for removed RBAC verbs and new diff options.
  • Grafana 11 → 12: legacy dashboard JSON v1 IDs are no longer accepted.
  • External-Secrets v0.16+: v1alpha1 resources are no longer supported; migrate or prune them. (The next release will bring a further change requirement for external-secrets; we will announce it with that release.)

Upgrade guides for each component are linked in the release notes — read them before hitting helm upgrade.

Why This Release Matters

  • Stay Ahead of Upstream – Ship on the latest Argo CD, Grafana, Kyverno, Kubevirt & Backstage without spending weeks on migration/testing.
  • Accelerate Team Autonomy – vcluster and destinationClusters unlock safe self‑service while keeping guard‑rails intact.
  • Security by Default – Updated dependencies, tighter policies, and CVE tracking reduce risk across the board.
  • Future‑Proof – 4.0 lays the groundwork for upcoming multi‑cluster rollout orchestration and delivery enhancements

Get Started with KubriX 4.0

  • Already a KubriX Prime customer? You’re getting KubriX 4.0 automatically via your Git update channel — no action needed.

  • New to KubriX? Schedule a demo to see how we can accelerate your platform engineering journey.

  • Like what we’re building? ⭐ us on GitHub!

KubriX 4.0 — Your internal developer platform for faster, smarter, and more secure application delivery.

Introducing KubriX 3.0 - Smarter, Safer, and More Secure Platform Engineering

· 3 min read
Johannes Kleinlercher
kubriX Dev, platform engineer, systems architect

We’re thrilled to announce the release of KubriX 3.0 — our most powerful and enterprise-ready version yet!

This release brings granular RBAC with OIDC integration, automated alerting for Kubernetes issues, and a host of internal security upgrades to streamline platform operations at scale.

What’s New in KubriX 3.0?

Enterprise-Grade Team Isolation with OIDC & RBAC

With KubriX 3.0, we’ve introduced centralized identity and access management across all major platform services. Teams and team members are now onboarded with roles like admin, editor, and viewer, giving them access only to what they need — nothing more.

Team-scoped access now works seamlessly across:

  • Backstage
  • ArgoCD
  • Kargo
  • Grafana
  • Vault
  • MinIO

This helps enforce least-privilege access and keeps environments clean, focused, and secure — whether you're part of a delivery team or the platform team.

Automatic Alerting: Be Informed When It Matters

Stop staring at dashboards. KubriX 3.0 brings integrated Grafana Managed Alerts so teams get notified automatically when common Kubernetes issues occur — from misconfigured workloads to resource bottlenecks.

Each team can customize how and where they want to receive alerts — email, Slack, or any alerting backend. Stay ahead of issues before they impact users.

Improved Supply Chain Security

We’ve also tightened security within the KubriX platform itself:

Our internal CI/CD pipeline now tracks CVE changes for every platform service update — helping prioritize critical patches faster.

Secrets management is smarter: more services now pull secrets securely from Vault — either user-defined or auto-generated and injected via push-secrets.

Always Up-to-Date

KubriX 3.0 ships with the latest stable versions of all core platform services, including:

ArgoCD, Backstage, CloudNative-PG, External-Secrets, Falco-Exporter, Grafana, Ingress-nginx, K8s-Monitoring, Keycloak, Velero, Cost-Analyzer, Crossplane, Loki, PGAdmin4, PostgreSQL, Tempo, Trivy-Operator, KubeVirt, KubeVirt-Manager — and more.

Keeping platform services current is hard. With KubriX, it’s effortless.

Why This Release Matters

  • Stronger Access Controls: Least-privilege principles are enforced by design — boosting security and usability for every team.

  • Proactive Operations: Built-in alerting means fewer surprises and faster recovery times.

  • Secure by Default: From CVE tracking to Vault-based secrets, KubriX 3.0 strengthens your software supply chain.

  • Future-Proof: You’re always running the latest and most secure platform stack — without the manual overhead.

Get Started with KubriX 3.0

  • Already a KubriX Prime customer? You’re getting KubriX 3.0 automatically via your Git update channel — no action needed.

  • New to KubriX? Schedule a demo to see how we can accelerate your platform engineering journey.

  • Like what we’re building? ⭐ us on GitHub!

KubriX 3.0 — Your internal developer platform for faster, smarter, and more secure application delivery.

Introducing KubriX 2.1 – Smarter Automation, Stronger Security, Seamless Scaling!

· 3 min read
Johannes Kleinlercher
kubriX Dev, platform engineer, systems architect

Just one month after our major KubriX 2.0 release, we’re back with another power-packed upgrade: KubriX IDP-Distribution 2.1 is here!

This release brings enhanced automation, improved platform stability, stronger team isolation, and security features that help your application teams move faster — with confidence.

What’s New in KubriX 2.1?

Automation, Automation, Automation

We believe in empowering teams to focus on building, not configuring. That’s why we’ve taken automation to the next level:

  • ArgoCD repo credentials are now created automatically for your team repos.
  • Spoke cluster registration in Vault is fully automated, along with SecretStore creation in each team’s namespace. Teams just need to define ExternalSecret resources — no more manual Vault configuration!
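With the SecretStore already provisioned in each team namespace, a team's remaining work is a single ExternalSecret manifest. A minimal sketch, assuming a hypothetical team namespace, store name, and Vault path:

```yaml
# Sketch: consuming a Vault secret through the auto-provisioned team SecretStore.
# Namespace, store name, and Vault key are placeholder examples.
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: app-db-credentials
  namespace: team-a               # hypothetical team namespace
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: vault                   # SecretStore created automatically per team
    kind: SecretStore
  target:
    name: app-db-credentials      # resulting Kubernetes Secret
  data:
    - secretKey: password
      remoteRef:
        key: team-a/db            # hypothetical path in Vault
        property: password
```

The operator reconciles this into a regular Kubernetes Secret, so applications need no Vault awareness at all.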

Rock-Solid Stability

We’ve tightened the bolts to ensure your GitOps flows are more robust and predictable:

  • Crossplane health checks are now fully integrated into ArgoCD’s status evaluations.
  • ArgoCD application health checks have been extended to verify complete sync status — especially useful when using sync-waves.
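ArgoCD custom health checks for resources like Crossplane claims are written as Lua snippets in the argocd-cm ConfigMap. A sketch of the general shape — the API group and kind here are hypothetical stand-ins, not the exact checks kubriX ships:

```yaml
# Sketch: a Lua health check that marks a custom resource Healthy
# only once its Ready condition is True. Group/kind are placeholders.
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argocd
data:
  resource.customizations.health.example.org_MyClaim: |
    hs = {}
    hs.status = "Progressing"
    hs.message = "Waiting for claim to become ready"
    if obj.status ~= nil and obj.status.conditions ~= nil then
      for _, condition in ipairs(obj.status.conditions) do
        if condition.type == "Ready" and condition.status == "True" then
          hs.status = "Healthy"
          hs.message = "Claim is ready"
        end
      end
    end
    return hs
```

With checks like this in place, an Application only reports Healthy when the underlying infrastructure actually is — which is what makes sync-waves dependable.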

Stronger Team Isolation

Secure, scalable, and clean boundaries between teams are key to platform success. With 2.1, we’re one step closer to full multi-tenancy:

  • Each team now gets dedicated AppSet access tokens, eliminating the need for organization-wide tokens.
  • Vault roles and policies are team-specific, ensuring secrets stay where they belong.
  • Kargo Git credentials are scoped per team, isolating promotion pipelines to their respective repositories.

Sneak peek: KubriX 3.0 will bring even more powerful team isolation features!

Built-In Security

Security shouldn’t be optional — it should be the default. KubriX 2.1 introduces:

  • A restructured Kyverno policy architecture
  • The ability to auto-generate deny-all network policies to enforce micro-segmentation
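Auto-generating a deny-all NetworkPolicy is a standard Kyverno generate-rule pattern. As a hedged sketch (policy and NetworkPolicy names are illustrative, not necessarily the ones kubriX uses):

```yaml
# Sketch: a Kyverno ClusterPolicy that creates a default-deny
# NetworkPolicy in every new namespace. Names are placeholders.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: generate-default-deny
spec:
  rules:
    - name: default-deny-networkpolicy
      match:
        any:
          - resources:
              kinds:
                - Namespace
      generate:
        apiVersion: networking.k8s.io/v1
        kind: NetworkPolicy
        name: default-deny
        namespace: "{{request.object.metadata.name}}"
        synchronize: true
        data:
          spec:
            podSelector: {}      # selects every pod in the namespace
            policyTypes:
              - Ingress
              - Egress
```

Every namespace starts fully segmented, and teams then open only the traffic paths they explicitly need.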

Stay tuned — more default policies are coming in future releases to lock down your platform effortlessly.

Updates Galore

We’ve refreshed the entire KubriX stack with the latest upstream Helm charts, so you’re always running the latest and greatest:

  • falco, grafana, loki, trivy-operator, kargo
  • argo-cd, cert-manager, external-dns, external-secrets
  • k8s-monitoring, cost-analyzer, and more

Why This Release Matters

  • Instant secrets access: Teams can immediately use Vault secrets from spoke clusters — no manual config needed.

  • Improved GitOps reliability: ArgoCD now waits for real readiness before marking apps as healthy.

  • Secure by default: Automated deny-all network policies and scoped permissions reduce blast radius and human error.

  • Frictionless onboarding: New teams and clusters can be onboarded and deployed without platform team intervention.

Getting Started with KubriX 2.1

  • Already a KubriX Prime customer? You’ll receive KubriX 2.1 automatically via your Git update channel — upgrade today!

  • Curious about KubriX? Reach out to us to schedule a demo.

  • Love what we’re building? Show your support with a ⭐ on our GitHub repo!

Experience faster, smarter, and more secure application delivery with KubriX 2.1 — your cloud-native developer platform, reimagined.

Announcing KubriX 2.0 – A Major Leap Forward!

· 2 min read
Johannes Kleinlercher
kubriX Dev, platform engineer, systems architect

We are thrilled to announce the release of KubriX IDP-Distribution 2.0! Following the successful launch of version 1.0 in January, our February release takes the platform to the next level with game-changing features, enhanced automation, and seamless enterprise integration.

What’s New in KubriX 2.0?

Cutting-Edge Platform Updates

KubriX 2.0 brings dozens of updates to the latest and greatest versions of our underlying platform services, including ArgoCD, Crossplane, Grafana, Mimir, Tempo, Vault, Velero, and more. Expect improved stability, performance, and security with this release.

Seamless Hub & Spoke Support for Developers

We’ve made Hub & Spoke topologies first-class citizens in KubriX, simplifying application deployment and team collaboration across different environments.

  • Out-of-the-box support for Hub & Spoke setups across team onboarding, app onboarding, and app delivery workflows.

  • Cluster label-based targeting, allowing you to select target clusters effortlessly.

  • Automatic propagation of cluster-specific information (like ingress domains) to apps, removing the need for developers to handle complex configurations.
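Label-based cluster targeting maps naturally onto ArgoCD's ApplicationSet cluster generator. A minimal sketch under assumed names — the label, project, and repo URL are hypothetical examples:

```yaml
# Sketch: deploying an app to every registered cluster carrying a given label.
# App name, label, project, and repoURL are placeholders.
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: my-app
  namespace: argocd
spec:
  generators:
    - clusters:
        selector:
          matchLabels:
            stage: nonprod        # label-based cluster targeting
  template:
    metadata:
      name: 'my-app-{{name}}'     # one Application per matching cluster
    spec:
      project: team-a
      source:
        repoURL: https://github.com/example/team-a-gitops  # placeholder
        path: apps/my-app
        targetRevision: main
      destination:
        server: '{{server}}'      # filled in from the cluster secret
        namespace: my-app
```

Adding a new spoke cluster with the right labels is then enough to roll the application out there — no per-cluster configuration by the developer.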

ArgoCD SSO with Keycloak – Now Built-In

Identity management just got easier! ArgoCD SSO with Keycloak is now integrated out of the box, ensuring a seamless authentication experience for your teams.
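For orientation, ArgoCD's Keycloak integration boils down to an OIDC block in argocd-cm. A hedged sketch with placeholder URLs and realm — the real values are environment-specific:

```yaml
# Sketch: ArgoCD OIDC configuration pointing at a Keycloak realm.
# Hostnames, realm, and client ID are placeholder examples.
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argocd
data:
  url: https://argocd.example.com
  oidc.config: |
    name: Keycloak
    issuer: https://keycloak.example.com/realms/platform
    clientID: argocd
    clientSecret: $oidc.keycloak.clientSecret   # resolved from a Secret
    requestedScopes: ["openid", "profile", "email", "groups"]
```

Requesting the groups scope is what later lets RBAC rules key off team membership rather than individual users.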

Why Does This Matter?

For enterprises managing multiple clusters and applications, a Hub & Spoke architecture is the gold standard. In KubriX 2.0:

  • The central hub hosts core services like KubriX Delivery, KubriX Observability, and KubriX Portal.

  • The spokes run customer applications and KubriX spoke agents, providing clear separation of concerns and scalable operations.

Without KubriX, deploying applications across multiple environments can mean complex and repetitive configurations. Developers often need to manage tedious details like cluster names, API URLs, and ingress domains manually.

With KubriX 2.0, developers only define app deployment stages (e.g., test → nonprod, QA → nonprod, prod → prod) in their GitOps repo — everything else happens automatically. This removes unnecessary complexity, boosting developer productivity and streamlining delivery pipelines.

And of course, KubriX Observability and Security detect new applications automatically, providing instant insights via Grafana dashboards.

How to Get Started

  • Existing KubriX-Prime customers will receive KubriX 2.0 automatically through their Git update channel and can apply the upgrade today.

  • Interested in KubriX? Contact us to learn more and leave us a ⭐ on our GitHub repo!

Experience faster, smarter, and more efficient application delivery with KubriX 2.0 — your cloud-native developer platform, redefined!