Carbon-Aware Computing: Reduce Emissions Without Rebuilding Everything

Solar panels with wind turbines in the background at sunset, representing clean energy

Carbon-aware computing is the idea that flexible workloads can run when and where energy is cleaner. Grid carbon intensity varies by hour and region: running a batch job overnight with high wind generation can emit 3-5x less CO₂ than running it at midday on natural gas. No hardware change, no app rewrite, just smart scheduling. This article covers practical tools and patterns.

The Base Concept

Grid carbon intensity is far from constant:

  • Night with wind: ~50-100g CO₂/kWh.
  • Strong solar day: ~100-200g.
  • Gas peak demand: ~400-500g.
  • Coal zones: >800g.

A workload consuming 1 MWh:

  • Clean zone optimal hour: ~50kg CO₂.
  • Dirty zone bad hour: ~800kg.

A 16x difference, changing nothing but timing and location.
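
The arithmetic is easy to check with a quick sketch (intensities in g CO₂ per kWh):

```python
# Back-of-the-envelope check of the numbers above: CO₂ for a workload
# at a given grid carbon intensity (g CO₂/kWh). 1 MWh = 1,000 kWh.
def emissions_kg(energy_kwh: float, intensity_g_per_kwh: float) -> float:
    """CO₂ emitted, in kg."""
    return energy_kwh * intensity_g_per_kwh / 1000.0

clean = emissions_kg(1000, 50)   # clean zone, optimal hour
dirty = emissions_kg(1000, 800)  # dirty zone, bad hour
print(f"clean: {clean:.0f} kg, dirty: {dirty:.0f} kg, ratio: {dirty / clean:.0f}x")
# clean: 50 kg, dirty: 800 kg, ratio: 16x
```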

Flexibility Types

Not all workloads are flexible:

Time-flexible:

  • Batch processing.
  • ML training.
  • Data warehouse refresh.
  • Backups.
  • Indexing.

Location-flexible:

  • Stateless multi-region workloads.
  • CDN origin.
  • Compute-only tasks (no tight data residency).

Not flexible:

  • User-facing requests.
  • Real-time streaming.
  • OLTP transactions.
  • Control loops.

An estimated 20-40% of an organisation's total compute can be deferred without user-facing impact.

Data Sources

Main APIs:

  • Electricity Maps: real-time and forecast per region, pay-per-use.
  • WattTime: similar, US focus with historical data.
  • Carbon Aware SDK: from Green Software Foundation, open-source wrapper.
  • Google Cloud: publishes carbon data per region.
  • Microsoft Azure: similar.

Free tiers are sufficient for experimentation.
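
As a sketch of what querying one of these APIs can look like, here is a call against an Electricity Maps-style endpoint. The URL, `auth-token` header, and `carbonIntensity` field follow the shape of their v3 API, but treat them as assumptions to verify against the current documentation:

```python
import json
import urllib.request

# Sketch: fetch the latest carbon intensity for a zone from an
# Electricity Maps-style endpoint. URL, header, and field names are
# assumptions; verify against the current API documentation.
API = "https://api.electricitymap.org/v3/carbon-intensity/latest"

def parse_intensity(payload: dict) -> float:
    """Extract grid intensity (g CO₂/kWh) from a response payload."""
    return float(payload["carbonIntensity"])

def fetch_intensity(zone: str, token: str) -> float:
    req = urllib.request.Request(f"{API}?zone={zone}",
                                 headers={"auth-token": token})
    with urllib.request.urlopen(req) as resp:
        return parse_intensity(json.load(resp))

# Offline example with a payload in the documented shape:
sample = {"zone": "ES", "carbonIntensity": 142, "datetime": "2024-05-01T12:00:00Z"}
print(parse_intensity(sample))  # 142.0
```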

Simple Patterns

Delay Batch Jobs

If a backup or ETL job can run at any time within a 24-hour window:

# Hypothetical helpers for illustration; there is no standard
# "carbon_aware" package. Implement on top of Electricity Maps,
# WattTime, or the Carbon-Aware SDK.
from carbon_aware import get_cleanest_hour

hour = get_cleanest_hour(region="es-es", window_hours=24)
schedule_job_at(hour, "backup-db")

Trivial change, measurable impact.
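
A minimal stand-in for a `get_cleanest_hour` helper, assuming you already have an hourly intensity forecast for the region:

```python
# A minimal stand-in for get_cleanest_hour(): given an hourly intensity
# forecast (g CO₂/kWh), return the offset of the cleanest hour inside
# the scheduling window.
def get_cleanest_hour(forecast: list[float], window_hours: int = 24) -> int:
    window = forecast[:window_hours]
    return min(range(len(window)), key=window.__getitem__)

# Toy forecast: evening peak now, windy early morning in a few hours.
forecast = [320, 280, 120, 80, 95, 210, 340, 400] + [300] * 16
print(get_cleanest_hour(forecast))  # 3 -> schedule the job 3 hours from now
```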

Route Workloads Between Regions

A flexible workload that can run in any of:

  • Sweden (hydro, very clean).
  • Ireland (gas, medium).
  • Poland (coal, dirty).

# Illustrative helpers: pick the candidate with the lowest current intensity.
region = get_cleanest_region(candidates=["se-north-1", "ie-west-1", "pl-central-1"])
deploy_to(region)

Useful for ML training or data processing where data can travel.
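
A `get_cleanest_region` helper along these lines takes only a few lines; the `lookup_intensity` callback is a placeholder to wire to Electricity Maps, WattTime, or your cloud provider's carbon data:

```python
# Sketch of get_cleanest_region(): query the current intensity of each
# candidate and take the minimum. lookup_intensity is a placeholder for
# a real data source.
def get_cleanest_region(candidates: list[str], lookup_intensity) -> str:
    return min(candidates, key=lookup_intensity)

# Static values (g CO₂/kWh) standing in for live API data:
live = {"se-north-1": 45, "ie-west-1": 310, "pl-central-1": 720}
print(get_cleanest_region(list(live), live.get))  # se-north-1
```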

Scale Based on Carbon

Traditional HPA scales by CPU. Carbon-aware HPA adjusts by carbon:

  • Carbon low: scale up, process more.
  • Carbon high: scale down, defer.

KEDA with a carbon-metric external scaler can do exactly this.
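
The scale-up/scale-down rule can be sketched as a simple mapping from intensity to replica count; the thresholds below are illustrative, not KEDA defaults:

```python
# Illustrative carbon-to-replicas mapping: full capacity on a clean grid,
# minimum on a dirty one, linear in between. Thresholds in g CO₂/kWh.
def carbon_aware_replicas(intensity: float, min_replicas: int = 1,
                          max_replicas: int = 10, clean: float = 150,
                          dirty: float = 400) -> int:
    if intensity <= clean:
        return max_replicas
    if intensity >= dirty:
        return min_replicas
    frac = (dirty - intensity) / (dirty - clean)  # 1.0 = clean, 0.0 = dirty
    return max(min_replicas,
               round(min_replicas + frac * (max_replicas - min_replicas)))

print(carbon_aware_replicas(100))  # 10 (clean: scale up)
print(carbon_aware_replicas(500))  # 1 (dirty: defer)
```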

Kubernetes Integration

Emerging stack:

  • carbon-aware-keda-operator from Microsoft.
  • Custom HPAs based on Prometheus + carbon metric.
  • Kubernetes Scheduler plugins for flexible nodes.

Example with KEDA:

apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: carbon-aware-workload
spec:
  scaleTargetRef:
    name: my-deployment
  triggers:
    - type: external
      metadata:
        scalerAddress: carbon-aware-scaler.default:8080
        region: eu-west-1
        threshold: "300"  # g CO₂/kWh

Green Software Foundation

Green Software Foundation develops:

  • Carbon-Aware SDK: multi-language library.
  • Software Carbon Intensity (SCI) specification.
  • Training and certifications.

Backed by the Linux Foundation, with members including Microsoft, GitHub, Accenture, and Thoughtworks. A good momentum signal.

Cases with Real ROI

  • Google: targets 24/7 carbon-free energy by 2030 and reports millions of kg of CO₂ avoided annually via scheduling.
  • Microsoft: Xbox downloads updates prioritising low-carbon hours.
  • Meta: data centers with workload shifting.
  • NHS UK: pilots moving MRI processing to night-time hours.

At the scale of an individual mid-size company: 10-30% CO₂ reduction on flexible workloads with minimal effort.

Metrics and Reporting

To track:

  • CO₂ per workload: estimate with cloud APIs + carbon intensity.
  • SCI score: Software Carbon Intensity — g CO₂ per business function.
  • Carbon savings: difference between naive and carbon-aware execution.
  • CSRD report: some cloud providers already report your workload emissions.
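
The SCI score follows the Green Software Foundation's formula, SCI = ((E × I) + M) per R:

```python
# SCI = ((E * I) + M) per R, per the Green Software Foundation spec:
# E = energy consumed (kWh), I = grid intensity (g CO₂/kWh),
# M = embodied hardware emissions allocated to the workload (g CO₂),
# R = the functional unit (requests, users, jobs...).
def sci(energy_kwh: float, intensity: float, embodied_g: float,
        functional_units: float) -> float:
    """g CO₂-equivalent per functional unit."""
    return (energy_kwh * intensity + embodied_g) / functional_units

# 2 kWh at 200 g/kWh plus 50 g embodied, serving 10,000 requests:
print(sci(2, 200, 50, 10_000))  # 0.045 g per request
```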

Costs vs Benefits

Carbon-aware scheduling typically adds no cost:

  • Cloud pricing per hour is uniform within a given region.
  • The change is only scheduling logic (trivial code).
  • Time is added only if deferring a batch job causes unacceptable delay.

Benefits:

  • Real, reportable CO₂ reduction.
  • Helps ESG / CSRD compliance.
  • Marketing: genuine green credentials.

Cases where costs exceed benefits are rare, assuming reasonable development time.

Limitations

Honestly:

  • Data accuracy: forecasts have ±20% error.
  • Regional coverage: Electricity Maps has many regions but not all.
  • Modest savings: scheduling typically covers the 10-30% of workloads that are flexible, not 100%.
  • Greenwashing risk: if emissions are only reported, not significantly reduced.

Competing Approaches

Two currents:

  • Carbon-aware (flexibility-first): move workloads.
  • Carbon-neutral via offsets: buy credits.

The industry is pushing toward the former; offsets are a supplement, not a replacement.

To Start

Pragmatic path:

  1. Audit: which workloads are flexible and how much they consume.
  2. Prototype: 1-2 jobs moved to low-carbon hours.
  3. Measure: CO₂ before/after with Electricity Maps.
  4. Expand: more workloads, automation with KEDA.
  5. Report: incorporate in sustainability reporting.

Expect 3-6 months to reach a mature implementation.
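
For step 3, a before/after estimate can be as simple as logging the grid intensity at each run and comparing schedules:

```python
# Estimate savings by comparing the grid intensity at the hour a job
# actually ran against the hour a naive schedule would have used.
def savings_pct(naive_intensity: float, aware_intensity: float) -> float:
    """Percent CO₂ reduction versus the naive schedule."""
    return 100 * (naive_intensity - aware_intensity) / naive_intensity

# Naive 8 p.m. cron at 320 g/kWh vs. a carbon-aware run at 90 g/kWh:
print(f"{savings_pct(320, 90):.0f}% less CO₂")  # 72% less CO₂
```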

Conclusion

Carbon-aware computing is low-hanging fruit for IT sustainability. Without hardware changes or app rewrites, you can cut flexible-workload emissions by 10-30% with intelligent scheduling. Existing tools (Electricity Maps, KEDA, the Carbon-Aware SDK) make implementation straightforward. Combined with sustainable architecture and a real commitment to renewable energy, it's an important piece of the puzzle. With CSRD and growing European regulation, it will be expected within a few years; better to adopt it before being forced to.

Follow us on jacar.es for more on IT sustainability, Kubernetes, and green software.
