Kubernetes 1.35: what you can already see coming

Official Kubernetes logo, stacked color version, hosted in the CNCF artwork repository. The graduated project keeps a three-releases-per-year cycle: 1.34 shipped in August 2025 and, as of January 2026, the final details of 1.35 are being settled, a predictable cadence that lets operations teams plan upgrade windows with bounded risk.

Kubernetes 1.34 landed in August 2025 with a handful of important changes in native sidecars, init containers, and the first wave of stabilization for WaitForFirstConsumer in generic-volume mode. The 1.35 cycle has entered feature freeze, preliminary release notes are in the repository, and the stable three-releases-per-year pattern keeps working like clockwork. Time to look at what’s coming in the next version, what deserves real attention, and what we can safely leave for another day.

Context: where we come from in the 1.3x cycle

Since 1.30 the project has prioritized stabilization over novelty, and it shows in the quality of what ships. 1.32 and 1.33 completed long-running removals, most visibly finishing the CSI migration from in-tree volume plugins to out-of-tree drivers for the big providers. 1.34 brought native-sidecar support to stable, improved the pod-priority model, and added more control over how the scheduler reacts when a node loses capacity. The net result is that a 1.34 cluster in 2026 is more predictable, less surprising, and easier to operate than a 1.28 cluster from three years ago.

The cultural key is that the project has accepted that for most users Kubernetes is infrastructure, not product. Big philosophy changes are rare; what dominates are incremental improvements in robustness, operator experience, and coverage of edge cases that used to be pain points. 1.35 continues that line.

The news that actually matters

The most relevant stabilization in 1.35 is CEL-based admission policies, which had been in beta since 1.30. CEL as a validation language progressively replaces admission webhooks written in code, and that has real operational implications. An admission webhook that goes down blocks the cluster; a CEL expression evaluated inside the control plane has no such risk. For small clusters where every external dependency is one more piece that can fail at three in the morning, replacing webhooks with CEL reduces incidents.
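As an illustration of the kind of rule this enables, here is a minimal sketch using the existing ValidatingAdmissionPolicy API (admissionregistration.k8s.io/v1); the policy name, label, and message are made up for the example:

```yaml
# Hypothetical policy: require an "owner" label on every Deployment.
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicy
metadata:
  name: require-owner-label
spec:
  failurePolicy: Fail
  matchConstraints:
    resourceRules:
    - apiGroups: ["apps"]
      apiVersions: ["v1"]
      operations: ["CREATE", "UPDATE"]
      resources: ["deployments"]
  validations:
  # CEL expression evaluated in the control plane, no webhook involved
  - expression: "has(object.metadata.labels) && 'owner' in object.metadata.labels"
    message: "every Deployment must carry an owner label"
---
# The policy only takes effect once it is bound to a scope.
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicyBinding
metadata:
  name: require-owner-label-binding
spec:
  policyName: require-owner-label
  validationActions: ["Deny"]
```

Note that the whole thing is declarative: there is no server to deploy, no TLS certificate to rotate, and nothing that can be down when the API server needs an answer.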

The second piece worth looking at is improved in-place node configuration support via the new KubeletConfigSource API. Until now, changing kubelet configuration meant rotating the node or manipulating system files and restarting the service. With the new API, the control plane can push fresh configuration to the kubelet without restart, opening the door to much cleaner fleet-management operations. For a team with five nodes it’s marginal; for one with fifty it changes how you operate.
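The payload in play here is presumably the same KubeletConfiguration object that kubelet already reads from disk today. A minimal example of that existing format, with illustrative values, gives an idea of what fleet-wide pushes would carry:

```yaml
# Today's on-disk kubelet configuration format (kubelet.config.k8s.io/v1beta1).
# Field values below are illustrative, not recommendations.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
maxPods: 110
evictionHard:
  memory.available: "200Mi"
  nodefs.available: "10%"
serializeImagePulls: false
```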

The third notable change is generalization of DRA, Dynamic Resource Allocation, landing stable in 1.35 for several accelerator types. For teams running GPU, NPU, or specific-accelerator workloads, DRA replaces the old device-plugin model with something much richer and more expressive. If you don’t have accelerators in the cluster, DRA is transparent. If you do, it’s worth starting to familiarize yourself with the API now.
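To give a feel for the API, here is a hedged sketch of a claim and a pod consuming it, using the beta shape of the resource.k8s.io group; the device class and image names are invented for the example:

```yaml
# A claim for one device from a hypothetical GPU device class.
apiVersion: resource.k8s.io/v1beta1
kind: ResourceClaim
metadata:
  name: single-gpu
spec:
  devices:
    requests:
    - name: gpu
      deviceClassName: example.com-gpu
---
# The pod references the claim instead of requesting an opaque
# extended resource, as the old device-plugin model did.
apiVersion: v1
kind: Pod
metadata:
  name: trainer
spec:
  containers:
  - name: main
    image: registry.example.com/trainer:latest
    resources:
      claims:
      - name: gpu-claim
  resourceClaims:
  - name: gpu-claim
    resourceClaimName: single-gpu
```

The expressiveness the article mentions lives mostly in the device class and claim selectors, where a driver can expose attributes that plain device plugins never could.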

What stays in beta

Several expected features stay in beta and don't land stable in 1.35. The most visible is zone-based PodAffinity/PodAntiAffinity, which promises to simplify multi-region HA design but has been in beta since 1.32 and still has edge cases when the scheduler is under load. It's not a blocker for production use if you already have it configured, but it's worth waiting to see whether 1.36 stabilizes it before refactoring placement policies.
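Meanwhile, zone spreading can already be expressed with today's stable primitives; a minimal sketch (the app label and image are illustrative) that forces replicas into distinct zones:

```yaml
# One replica per zone, enforced with stable pod anti-affinity.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
spec:
  replicas: 3
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      affinity:
        podAntiAffinity:
          # "required" means unschedulable rather than co-located
          # when fewer zones than replicas are available.
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: api
            topologyKey: topology.kubernetes.io/zone
      containers:
      - name: api
        image: registry.example.com/api:latest
```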

The second remaining-beta feature is in-place pod resize, the ability to change the CPU and memory requests of a pod without a restart. On paper it sounds very useful, but in practice it has subtle kernel and runtime implications. In 1.35 it remains beta and behind an explicit feature gate; for stable-workload clusters it's not a priority, and for very dynamic workloads it's still worth validating in test environments before enabling.
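For those experimenting with it, the opt-in lives in the pod spec via per-resource resize policies; a sketch, assuming the InPlacePodVerticalScaling feature gate is enabled and with illustrative values:

```yaml
# Per-resource resize behavior; the kubelet honors these when
# requests are mutated on a running pod.
apiVersion: v1
kind: Pod
metadata:
  name: resizable
spec:
  containers:
  - name: app
    image: registry.example.com/app:latest
    resources:
      requests:
        cpu: "500m"
        memory: "256Mi"
    resizePolicy:
    - resourceName: cpu
      restartPolicy: NotRequired       # CPU can change in place
    - resourceName: memory
      restartPolicy: RestartContainer  # memory changes restart the container
```

The asymmetry in the example is deliberate: shrinking or growing memory under a running process is exactly where the kernel-level subtleties live.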

Other pieces in beta cover very specific use cases, like advanced volume-snapshot options or improvements in Taint-Based Evictions behavior. They’re not for the majority; if they affect you, you already know.

Deprecations worth watching

1.35 continues the progressive retirement of old APIs. The key reminder is that if you still have objects with old apiVersion values (v1beta1 and the like) for resources that have been stable for several cycles, now is the time to migrate them. The kubectl convert plugin is still alive and covers most cases; the objects it doesn't cover usually need to be rebuilt from scratch because the model changed.
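The shape of these migrations is usually just the apiVersion line plus the occasional renamed field. A classic example, a PodDisruptionBudget stuck on the long-removed beta group:

```yaml
# Before: apiVersion: policy/v1beta1
# After: the stable group; for this resource the spec is unchanged.
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: api-pdb
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: api
```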

The other deprecation front is kubelet and kube-apiserver flags kept for compatibility. Several will disappear in 1.35 or 1.36. If your Ansible playbooks or configuration management still pass them, now is a good time to clean up. The 1.35 release notes list them explicitly; reading them before planning the upgrade window saves incidents.

How to plan the upgrade

The project’s predictable cadence lets you plan upgrades with confidence. My recommendation for a small or medium production cluster is to always stay one version behind the latest stable: right now that means 1.33 in production and 1.34 in test environments, with 1.35 under observation as soon as the release candidate ships. This one-version lag gives time for first-hour bugs to be found and fixed in minor patches, but not so far back as to lose official support, which covers only the last three versions.

For teams operating multiple clusters, the reasonable pattern is to have one on the most recent version acting as canary. You test there, watch for a couple of weeks, and if all goes well propagate to the others. This discipline avoids surprises in the main cluster and lets you catch specific component-combination problems that don’t appear in synthetic test environments.

The upgrade itself, if you use kubeadm or a managed distribution, is reasonably straightforward. What matters is having read the full release notes, not just the headlines, and having API migrations tested in a test environment before touching production. Serious upgrade problems rarely come from the control-plane binary; they come from custom resources, admission webhooks, volumes with exotic CSI, or networking with CNIs that didn’t keep up.

My reading

Kubernetes 1.35 is one more incremental release in a series of incremental releases, and that's fine. The project has reached the point where it no longer needs revolutions; it needs to polish what it has and remove edges. The features landing stable in 1.35 (CEL for admission, general DRA for accelerators, and the kubelet-configuration API) are useful and cover real cases without breaking anything. Those staying in beta deserve attention but not rushed adoption. The healthy operational pattern is to upgrade with discipline, one version behind the latest, read the release notes seriously, and let others discover the first-day bugs for you.
