
AI incident postmortems: what they have taught us

Updated: 2026-05-15

Over the last year, more and more teams running AI in production have started publishing detailed incident postmortems. The practice, inherited from classic SRE culture, is consolidating in the new territory of LLMs and agent systems, and the 2025 harvest, plus the first months of 2026, now allows an ordered reading of the patterns that repeat. Those patterns are worth distilling, because many teams are about to make the same mistakes others have already documented in detail.

Key takeaways

  • Guardrails need their own periodic synthetic tests verifying end-to-end function, not just component activation.
  • Silent model drift is only caught with a proprietary evaluation bank run regularly.
  • Vendor dependency is not only about availability; it’s about exact model behavior.
  • Classic operational incidents (timeouts, memory leaks, certificates) manifest in novel ways in AI systems.
  • Agents with tool use need their own per-tool rate limits, not just global ones.

Pattern one: silently failing guardrails

The most repeated pattern in recent postmortems is silent guardrail failure. Teams that built systems with input validation, output filtering, prompt-injection detection, and tool-call containment discovered, sometimes months later, that one of those mechanisms had stopped working without generating any alert. The typical sequence: the provider updates the base model, behavior changes slightly, the guardrail's heuristic breaks, and nobody finds out because the observable metrics don't visibly change.

A documented case involved a customer-support system whose output PII filter depended on regex detection that assumed certain response formats. When the provider updated to a version that slightly reformatted some outputs, the regex began letting sensitive information through, and kept doing so for weeks. The leak was finally detected by an external audit, not by the monitoring system.

The lesson: guardrails need their own periodic synthetic tests that verify end-to-end function, not just component activation. Mature teams have introduced guardrail tests that inject known adversarial inputs at regular intervals and verify that the filter keeps blocking them.
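
As an illustration, here is a minimal sketch of such a synthetic test in Python. The sample strings, the regex patterns, and the alert hook are all hypothetical; the point is that a scheduled job exercises the filter on known-bad material, including the kind of format variants that broke the regex in the incident above, and alerts on any leak.

```python
import re

# Hypothetical fixtures: PII rendered in formats the model has been seen to
# produce, including variants like the reformatting that broke the original regex.
PII_SAMPLES = [
    "SSN: 123-45-6789",
    "Social security number is 123 45 6789",   # spaces instead of dashes
    "Card: 4111-1111-1111-1111",
    "You can write to jane.doe@example.com.",
]

PII_PATTERNS = [
    re.compile(r"\b\d{3}[- ]?\d{2}[- ]?\d{4}\b"),   # SSN, dashes or spaces
    re.compile(r"\b(?:\d[- ]?){13,16}\b"),          # card-like digit runs
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),         # email addresses
]

def pii_filter_blocks(text: str) -> bool:
    """True if the output filter would block this text."""
    return any(p.search(text) for p in PII_PATTERNS)

def run_guardrail_synthetic_test(alert) -> None:
    """Scheduled job: verify the filter still catches every known sample.
    This checks the guardrail's end-to-end function, not just its activation."""
    leaked = [s for s in PII_SAMPLES if not pii_filter_blocks(s)]
    if leaked:
        alert(f"PII filter regression, samples leaked: {leaked}")
```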

Pattern two: silent model drift

Another recurring pattern is what several postmortems call silent drift: the base model, operated by an external provider, changes behavior subtly without the team detecting it until an observant user reports it. The changes can affect style, response length, tolerance of certain input types, or real capability on complex tasks. They are rarely catastrophic, but they degrade system quality for as long as they go unnoticed.

A postmortem published by a medical-assistant company describes exactly this phenomenon. For roughly six weeks, response accuracy on a subset of clinical questions degraded gradually after a minor model update. The system kept working, usage patterns hadn't changed, and basic availability metrics showed nothing. Only by introducing an automated evaluation question bank with expected answers did the team detect the regression.

The lesson: any production system built on an external model needs its own evaluation bank run regularly. Without this mechanism, the team depends on the provider's goodwill to be notified of relevant changes. For how to build that bank, see production agent evaluations.
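
A minimal sketch of such a regular evaluation run, assuming a Python stack. The JSONL bank format, the exact-match scorer, and the print-based alert are placeholders; real banks usually need graded or model-based scoring and proper alerting, but the regression-versus-baseline check is the essential part.

```python
import json
import statistics

def exact_match(answer: str, expected: str) -> float:
    """Trivial scorer for illustration; real banks use graded or LLM scoring."""
    return 1.0 if answer.strip().lower() == expected.strip().lower() else 0.0

def run_eval_bank(ask_model, bank_path: str,
                  baseline: float, tolerance: float = 0.05) -> float:
    """Ask the production model every question in the bank, score the answers,
    and flag a regression when the mean score drops below the baseline."""
    with open(bank_path) as f:                   # one JSON object per line:
        bank = [json.loads(line) for line in f]  # {"question": ..., "expected": ...}
    scores = [exact_match(ask_model(item["question"]), item["expected"])
              for item in bank]
    mean_score = statistics.mean(scores)
    if mean_score < baseline - tolerance:
        print(f"ALERT: possible silent drift, eval score {mean_score:.3f} "
              f"vs baseline {baseline:.3f}")     # wire to real alerting
    return mean_score
```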

Pattern three: hidden vendor dependency

Several postmortems have highlighted how teams that believed their model-provider dependency was manageable discovered, mid-incident, that it was much deeper than assumed. A particularly instructive case: a provider suffered a prolonged outage, and a team that had designed failover to an alternative provider discovered that its prompt system was so tuned to the specific behavior of the model that went down that the alternative produced significantly worse results.

The dependency wasn't only about availability; it was about exact behavior. Prompts, interaction patterns, format expectations, and evaluation criteria had evolved over months to fit a specific model's peculiarities.

The lesson here is twofold:

  1. Regularly test failover with real traffic, not just verify that the pipes are connected.
  2. Design the system from the start to work reasonably well with at least two different models, which imposes a prompt discipline less tied to any one model's peculiarities. A sketch of both ideas follows this list.
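
A minimal sketch, assuming Python, of what both lessons can look like in code: a router that mirrors a small slice of live traffic to the backup provider, so failover quality is measured continuously rather than on the day it is needed, and that falls back on primary failure. The ProviderError type, the .complete() interface, and the comparison sink are assumptions for illustration.

```python
import random

class ProviderError(Exception):
    pass

class FailoverRouter:
    """Keeps the second provider honest: a small slice of live traffic is
    shadowed to the backup so its quality is measured continuously, and
    real failover only happens when the primary fails."""

    def __init__(self, primary, secondary, shadow_rate: float = 0.02):
        self.primary = primary          # objects with a .complete(prompt) method
        self.secondary = secondary
        self.shadow_rate = shadow_rate  # fraction of traffic mirrored to backup

    def complete(self, prompt: str) -> str:
        if random.random() < self.shadow_rate:
            try:
                backup = self.secondary.complete(prompt)
                self.record_comparison(prompt, backup)
            except Exception:
                pass                    # shadow traffic must never hurt the user
        try:
            return self.primary.complete(prompt)
        except ProviderError:
            return self.secondary.complete(prompt)

    def record_comparison(self, prompt: str, backup_answer: str) -> None:
        """Hypothetical metric sink: store the pair for offline quality review."""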

Pattern four: classic operation worsened by novelty

A considerable share of recent postmortems aren't actually AI incidents but classic operational incidents that manifest in novel or delayed ways because the AI layer masked the signals:

  • Memory leaks in workers processing large inputs.
  • Database connection problems.
  • Expired certificates.
  • Poorly coordinated deployments.
  • Rotated secrets not updated.

A concrete case involved a very short timeout configured on the external-model client. Under normal conditions it worked, but at moments of high provider load, the timeouts triggered retries that saturated internal resources and generated an error cascade.

The lesson: systems with AI components aren't a separate category from reliability engineering. The principles of observability, fault containment, defense in depth, and systematic learning remain valid. Circuit breakers, retries with exponential backoff, specific monitoring of external API calls: nothing here is conceptually new, but many teams are relearning these lessons in the AI context.
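
As a reminder of how little of this is AI-specific, here is a minimal Python sketch of a model-API client that combines bounded retries, exponential backoff with jitter, and a simple circuit breaker. The thresholds and the TimeoutError signal are illustrative assumptions, not a definitive implementation.

```python
import random
import time

class CircuitOpen(Exception):
    pass

class ResilientModelClient:
    """Bounded retries with exponential backoff and jitter, plus a minimal
    circuit breaker that stops hammering a saturated provider."""

    def __init__(self, call, max_retries=3, base_delay=0.5,
                 failure_threshold=5, cooldown=30.0):
        self.call = call                        # function performing the API request
        self.max_retries = max_retries
        self.base_delay = base_delay
        self.failure_threshold = failure_threshold
        self.cooldown = cooldown
        self.consecutive_failures = 0
        self.opened_at = None

    def request(self, payload):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.cooldown:
                raise CircuitOpen("provider circuit open, failing fast")
            self.opened_at = None               # half-open: allow a trial request
        for attempt in range(self.max_retries + 1):
            try:
                result = self.call(payload)
                self.consecutive_failures = 0
                return result
            except TimeoutError:
                self.consecutive_failures += 1
                if self.consecutive_failures >= self.failure_threshold:
                    self.opened_at = time.monotonic()
                    raise CircuitOpen("too many consecutive timeouts")
                if attempt == self.max_retries:
                    raise
                # Backoff with jitter caps the retry pressure that caused
                # the cascade described in the incident above.
                time.sleep(self.base_delay * (2 ** attempt) * random.random())
```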

Pattern five: tool use with unexpected effects

Agent systems with tool use have produced their own particularly interesting postmortem category. The typical pattern is an agent that, under normal conditions, invokes external tools reasonably, but under certain adversarial or unexpected inputs enters loops, invokes tools with harmful parameters, or combines several tools in sequences with unforeseen side effects.

A case documented in one postmortem involved an agent with access to an email-sending API that, after a specific input, invoked the tool repeatedly, sending hundreds of emails to users before the external rate limit broke the chain. The immediate lesson was that agents need their own rate limits per tool, not just at the global system level.
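
A minimal sketch of a per-tool limit in Python, using a sliding window; the tool names and budgets are hypothetical. The point is that the email tool gets its own tight budget regardless of any global cap.

```python
import time
from collections import deque

class ToolRateLimiter:
    """Sliding-window limit applied per tool, independent of any global cap."""

    def __init__(self, max_calls: int, window_seconds: float):
        self.max_calls = max_calls
        self.window = window_seconds
        self.calls = deque()            # timestamps of recent invocations

    def allow(self) -> bool:
        now = time.monotonic()
        while self.calls and now - self.calls[0] > self.window:
            self.calls.popleft()        # drop calls outside the window
        if len(self.calls) >= self.max_calls:
            return False
        self.calls.append(now)
        return True

# Hypothetical wiring: the email tool gets a far tighter budget than a read-only one.
limits = {
    "send_email": ToolRateLimiter(max_calls=5, window_seconds=60),
    "search_docs": ToolRateLimiter(max_calls=100, window_seconds=60),
}

def invoke_tool(name: str, tool_fn, **kwargs):
    if not limits[name].allow():
        raise RuntimeError(f"per-tool rate limit exceeded for {name}")
    return tool_fn(**kwargs)
```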

Another, more general lesson: every tool accessible to the agent needs its own explicit threat model; it's not enough to reason about the system as a whole. This lesson is also central to enterprise agent governance.
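
One way to make that threat model explicit is to encode it next to the tool itself. The sketch below is a hypothetical Python policy for the email tool from the incident above: a per-run call budget, argument validation, and an approval flag are all assumptions chosen for illustration.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ToolPolicy:
    """A threat model made explicit in code: what the tool may do, which
    arguments are acceptable, and whether a human must approve the call."""
    max_calls_per_run: int
    validate_args: Callable[[dict], bool]
    requires_approval: bool = False

# Hypothetical policy for the email tool from the incident above.
email_policy = ToolPolicy(
    max_calls_per_run=3,
    validate_args=lambda args: (
        0 < len(args.get("recipients", [])) <= 5
        and all(r.endswith("@ourcompany.example")
                for r in args.get("recipients", []))
    ),
    requires_approval=True,
)

def within_policy(policy: ToolPolicy, calls_so_far: int, args: dict) -> bool:
    """True only when the call fits the tool's declared threat model;
    approval-gated tools would be routed to a human instead."""
    return (calls_so_far < policy.max_calls_per_run
            and policy.validate_args(args)
            and not policy.requires_approval)
```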

Practices mature teams are adopting

From the accumulation of postmortems, several concrete practices are consolidating:

  • Continuous synthetic evaluations against reference banks, for both verifying base-model behavior and testing guardrails and tools.
  • Clear separation between infrastructure, model, and product metrics, with dashboards correlating incidents across the three layers.
  • AI-specific incident-response procedures, with runbooks covering scenarios like evaluation-detected model drift, external-provider saturation, guardrail failure, and anomalous agent behavior.
  • Provider contracts including clauses on communication of relevant changes, SLAs differentiated by criticality, and access to model-behavior metrics.

My reading

Postmortem culture in AI systems has matured noticeably. The engineering community now has a documented-case corpus sufficient to learn without having to make each mistake for the first time, and teams systematically reading these postmortems are clearly better prepared.

The most important transversal lesson is that AI in production is reliability engineering applied to new components, not a completely different discipline. Teams that rigorously apply the classical principles (observability, fault containment, defense in depth, systematic learning) have fewer incidents and better postmortems. There are no shortcuts: what has worked for decades in critical systems keeps working; there are simply more components that require specific attention.


Written by

CEO - Jacar Systems

Passionate about technology, cloud infrastructure and artificial intelligence. Writes about DevOps, AI, platforms and software from Madrid.