Carbon-aware computing: now the default behavior

Official diagram from the Green Software Foundation's Carbon Aware SDK showing the three pillars of carbon-aware software (energy efficiency, hardware efficiency, and carbon awareness) and how to shift tasks to periods of lower grid carbon intensity

When Microsoft and ThoughtWorks launched the Green Software Foundation in 2021, carbon-aware computing was a niche concept that sounded nice in sustainability decks but had little operational traction. Four years later, in September 2025, the situation has shifted quietly but profoundly: scheduling non-interactive workloads by grid carbon intensity has become the default behavior across much of the modern stack. This post reviews how we got here and what you need to understand to take advantage of it without falling into environmental theater.

What carbon-aware actually means

The core idea is simple. The electricity powering a data center does not have a constant carbon footprint. At 2pm on a sunny day in southern Spain, the grid is full of solar and intensity can be around 80 grams of CO2 per kilowatt-hour. By 8pm on the same day, as the sun sets and gas combined-cycle plants ramp up, intensity rises to 220 grams. The ratio between the best and worst hours of the day can exceed 3:1 in many European regions.

Carbon-aware computing exploits this variability to run time-flexible workloads, such as model training, CI builds, backups, or batch data jobs, in low-intensity windows. If a task can wait four hours without consequence and intensity drops by half in those four hours, running it in the clean window cuts emissions proportionally.
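As a minimal sketch of that delay decision (the forecast numbers and the lookup structure are invented; in practice the data would come from an API such as ElectricityMaps or WattTime):

```python
# Hypothetical hourly forecast of grid carbon intensity (gCO2/kWh),
# keyed by hours from now. These numbers are made up for illustration.
FORECAST = {0: 210, 1: 190, 2: 170, 3: 120, 4: 95}

def best_start_hour(forecast: dict[int, float], max_delay_h: int) -> int:
    """Pick the start hour with the lowest forecast intensity
    within the workload's tolerance window."""
    window = {h: i for h, i in forecast.items() if h <= max_delay_h}
    return min(window, key=window.get)

start = best_start_hour(FORECAST, max_delay_h=4)
saving = 1 - FORECAST[start] / FORECAST[0]
print(f"start in {start} h, cutting emissions by {saving:.0%}")
# With these numbers: start in 4 h, cutting emissions by 55%
```

The same logic applies whether the scheduler is a cron wrapper, a CI runner label, or a Kubernetes scaler: the only inputs are a forecast and a tolerance window.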

There is also a spatial version of the same idea. If your workload can run in any of three cloud regions and one of them currently has a much cleaner grid, running there has the same effect. Azure has offered an electrical-load optimization service since 2023 along those lines, Google Cloud publishes Carbon Free Energy Scores for its regions, and AWS publishes intensity data but has not integrated automatic policies.
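The spatial variant reduces to the same one-liner, picking the cleanest grid among the regions the workload is allowed to run in (region names and intensity values here are invented):

```python
# Hypothetical current intensities per region (gCO2/kWh); real values
# would come from a provider API or a service like ElectricityMaps.
REGION_INTENSITY = {"eu-west": 310, "eu-north": 45, "eu-south": 140}

def cleanest_region(intensities: dict[str, float], allowed: list[str]) -> str:
    """Among the regions the workload may run in, pick the cleanest grid."""
    return min(allowed, key=intensities.get)

print(cleanest_region(REGION_INTENSITY, ["eu-west", "eu-north", "eu-south"]))
# With these numbers: eu-north
```

In practice the `allowed` list is constrained by data residency, latency, and egress cost, which is why providers tend to apply this policy only to stateless background work.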

What changed in two years

In 2023 the tooling ecosystem was a set of disconnected experimental projects. The Green Software Foundation’s Carbon Aware SDK was useful but required substantial manual integration. ElectricityMaps had a solid API but little adoption. Kepler, the consumption-metrics exporter for Prometheus, had just entered CNCF sandbox.

By September 2025 the landscape is much more integrated. Kepler is a graduated CNCF project with production deployments at Spotify, Red Hat, and several financial institutions. The KEDA Carbon Aware Scaler, launched in 2024, scales Kubernetes workloads automatically based on carbon intensity. GitHub Actions, since June 2025, includes a carbon-aware label on hosted runners that delays scheduled jobs by up to 6 hours if the grid is dirty. GitLab has announced equivalent support for October.

Perhaps most significantly, many platforms have turned it on by default without changing the visible API. Azure Functions transparently redistributes background tasks across regions without user configuration, and has published aggregate data on the emission reductions achieved. This is a symptom of maturity: carbon awareness stops being a premium option and becomes an internal tuning knob for the provider.

The hard part: measuring well

The fundamental problem with carbon-aware computing is not implementing it but measuring whether it actually reduces emissions. There are two measurement approaches that give very different answers.

The first is average grid intensity, the number published by bodies like Red Electrica in Spain or ENTSO-E. It is easy to obtain but conceptually questionable: if everyone runs workloads during sunny hours, the grid no longer has excess renewables then and the window disappears. The calculation assumes marginal consumption equals average, which is not true in grids with high variability.

The second is marginal intensity, which measures which power plant starts or stops in response to a demand change. This is the economically correct metric but much harder to compute. WattTime publishes it for U.S. markets and ElectricityMaps started offering it in 2024 for Europe. The difference between average and marginal can be a factor of two: an hour that looks clean by average intensity can be dirty at the margin if every extra kilowatt-hour is covered by a gas combined-cycle plant.

For a team doing carbon-aware computing with intellectual integrity, the recommendation is to use marginal intensity where available and to be transparent about which metric is being reported. Emission reductions reported via average intensity tend to be inflated, and serious auditors notice.
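To make the divergence concrete, here is an illustrative calculation (all numbers invented) of the saving a team would claim for the same job shift under each metric:

```python
# Invented numbers: a job is moved from hour A to hour B.
# The grid average makes hour B look much cleaner; the marginal
# intensity, tracking the plant that actually responds to the
# extra demand, barely moves.
avg      = {"hour_a": 220.0, "hour_b": 80.0}   # gCO2/kWh, grid average
marginal = {"hour_a": 350.0, "hour_b": 320.0}  # gCO2/kWh, marginal plant

def reported_saving(intensity: dict[str, float]) -> float:
    """Fractional emission reduction claimed for moving A -> B."""
    return 1 - intensity["hour_b"] / intensity["hour_a"]

print(f"average metric:  {reported_saving(avg):.0%} saving claimed")
print(f"marginal metric: {reported_saving(marginal):.0%} saving")
```

With these invented figures the average metric claims roughly a 64 percent reduction while the marginal metric shows under 10 percent, which is exactly the gap an auditor will probe.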

Patterns that work and patterns that do not

The pattern that works best is delaying batch workloads with wide tolerance windows. AI model training that can start any time in the next 4 hours, search indexing that can run any day of the week, incremental backups that tolerate hours of delay. For these workloads the emission saving can be 30 to 50 percent with almost no functional impact.

The pattern that does not work is forcing interactive workloads to follow the grid. A web server cannot wait 4 hours for intensity to drop, because its users certainly will not. For service workloads the right lever is not temporal carbon awareness but absolute energy efficiency: profile, optimize consumption per request, use architectures that scale to zero when idle. Mixing these two concepts is a common confusion.

An interesting intermediate pattern is peak smoothing. If your traffic has predictable daily peaks and you can precompute certain results overnight, moving that precomputation to low-intensity hours has a double benefit: it relieves tomorrow’s peak and cuts the precomputation’s emissions. This works well for recommender systems, log analysis, and scheduled report generation.

Incoming regulatory pressure

A factor few teams are watching but will push hard over the next two years is European regulation. The revised 2023 Energy Efficiency Directive requires data-center operators above 500 kilowatts to report consumption, energy mix, and emissions. The next iteration, expected for 2026, will add effective-use metrics, not only installed capacity. This puts pressure on the whole chain, including cloud tenants.

In practice this means cloud providers will start exposing more detailed consumption and emission metrics attributable to each customer. Azure and Google Cloud already do; AWS is still more opaque but will move when regulation bites. For a team planning to report its carbon footprint for fiscal year 2026, the practical advice is to start collecting these metrics now and to test carbon-aware patterns that will hold up under audit.

The risk of environmental theater

A real risk of carbon-aware computing is that it becomes theater. A team announces its CI pipeline is carbon aware, earns a green badge in corporate documentation, and the real emission reduction is below 5 percent because most jobs are not delayable. Marketing outpaces substance.

The countermeasure is transparency in the numbers. An honest carbon-aware report includes what proportion of total workloads was effectively rescheduled, the energy-weighted intensity reduction achieved, and the measurement method used. If those three items are not reported, the published figure is probably inflated. Auditors already know this.
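The energy-weighted numbers mentioned above can be computed directly from per-job records, something like this sketch (the job data and field names are invented for illustration):

```python
# Invented example data: each job records energy used (kWh), the
# intensity it actually ran at, the intensity it would have seen
# unshifted, and whether it was rescheduled at all.
jobs = [
    {"kwh": 120.0, "actual": 95.0,  "baseline": 210.0, "shifted": True},
    {"kwh": 300.0, "actual": 180.0, "baseline": 180.0, "shifted": False},
    {"kwh": 40.0,  "actual": 110.0, "baseline": 150.0, "shifted": True},
]

def report(jobs: list[dict]) -> dict[str, float]:
    """Share of energy actually rescheduled, and the intensity
    reduction weighted by each job's energy use."""
    total_kwh   = sum(j["kwh"] for j in jobs)
    shifted_kwh = sum(j["kwh"] for j in jobs if j["shifted"])
    baseline_g  = sum(j["kwh"] * j["baseline"] for j in jobs)
    actual_g    = sum(j["kwh"] * j["actual"] for j in jobs)
    return {
        "share_rescheduled": shifted_kwh / total_kwh,
        "weighted_reduction": 1 - actual_g / baseline_g,
    }

r = report(jobs)
print(f"{r['share_rescheduled']:.0%} of energy rescheduled, "
      f"{r['weighted_reduction']:.0%} energy-weighted intensity reduction")
```

Note that the weighted reduction here comes out far below the headline saving on the shifted jobs alone, precisely because most of the energy was in the job that could not move. That gap is what an honest report surfaces and a badge hides.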

My take

Carbon-aware computing has gone from marketing concept to built-in platform capability in a short time. That is good news for the industry’s operational sustainability but demands more rigor from teams. Flipping a checkbox on a CI runner is not decarbonizing a company, and treating it as such confuses sustainability leaders and dilutes the credibility of serious initiatives.

My practical recommendation is to classify workloads by time sensitivity and apply the right tool to each class. Batch workloads with wide windows: enable carbon-aware scheduling and measure the result with marginal intensity. Interactive workloads: profile, optimize, and scale to zero when possible. Hybrid workloads: split the precomputable part to clean hours and keep the online part efficient. Report each separately.

What I find most interesting about this transition is that it is one of the rare software optimizations that combine real environmental benefit with, in many cases, cost savings. Cleaner electricity is usually cheaper in deregulated markets because the hours of renewable surplus are also the hours when wholesale prices drop. A team that schedules well reduces both its bill and its emissions with the same lever. That alignment of economy and ecology is rare and worth exploiting before it disappears under market saturation.
