Guide

AI Model Deprecation Monitoring

How AI model deprecation works, why it silently breaks applications, and what to do about it.

What is AI model deprecation?

AI model deprecation is the process by which an AI provider — such as OpenAI, Anthropic, Google, or OpenRouter — announces that a specific model version will no longer be supported and will eventually be shut down.

When a model is deprecated, the provider typically announces a shutdown date — a specific date after which API calls to that model will return errors. Before that date, the model continues to work normally. After it, any application calling that model will fail.

Why do AI apps break when models are deprecated?

Most AI-powered applications are built with a specific model ID hardcoded — either directly in the code or in an environment variable. When that model is deprecated, the application continues to call the same model ID until the shutdown date passes. At that point, the API returns an error, and the application breaks.
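A minimal sketch of the fragile pattern described above. The model ID and environment variable name are illustrative, not taken from any particular app:

```python
import os

# Fragile: the model ID is pinned in code (or a single env var), and
# nothing warns us when the provider schedules it for shutdown.
DEFAULT_MODEL = "gpt-4-0613"  # illustrative; may already be retired

def resolve_model() -> str:
    """Return the model ID the app will call.

    Reading from an environment variable at least lets us swap models
    without a redeploy, but it still fails silently on deprecation day.
    """
    return os.environ.get("AI_MODEL", DEFAULT_MODEL)
```

The env-var indirection shortens the fix when a deprecation hits, but it does nothing to surface the deprecation in the first place; that requires monitoring.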

The problem is compounded because providers don't push deprecation notifications directly to developers. Announcements appear on documentation pages, changelogs, or developer blogs: places developers rarely revisit after shipping a product.

How do major AI providers handle model deprecation?

OpenAI

OpenAI publishes a deprecations page listing all models with announced retirement dates. They typically provide 3–12 months' notice. Notable deprecations include GPT-4, GPT-4 Turbo, and GPT-3.5 Turbo, all of which affected large numbers of production applications.

Anthropic (Claude)

Anthropic announces model changes through their model documentation. Unlike OpenAI, there is no dedicated deprecation feed — announcements are embedded in release notes and model documentation updates. Claude 2, Claude Instant, and older Claude 3 variants have all been deprecated.

Google Gemini

Google's deprecation notices for Gemini models appear across multiple surfaces: the AI Studio documentation, the Cloud Console, and the Vertex AI release notes. Gemini 1.0 Pro is a recent notable deprecation, replaced by Gemini 1.5 Pro.

OpenRouter

OpenRouter aggregates hundreds of models from multiple providers. Its public models API returns a live list of available models, making it one of the few sources that can be checked reliably by a program.
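A sketch of checking your models against the OpenRouter list. The response shape (`{"data": [{"id": ...}, ...]}`) is an assumption based on the public endpoint's documented format; the comparison logic itself is provider-agnostic:

```python
import json
import urllib.request

OPENROUTER_MODELS_URL = "https://openrouter.ai/api/v1/models"

def fetch_available_ids(url: str = OPENROUTER_MODELS_URL) -> set[str]:
    """Fetch the live model list and return the set of model IDs.

    Assumes a JSON body of the form {"data": [{"id": ...}, ...]}.
    """
    with urllib.request.urlopen(url, timeout=10) as resp:
        payload = json.load(resp)
    return {entry["id"] for entry in payload["data"]}

def missing_models(in_use: list[str], available: set[str]) -> list[str]:
    """Return the models our app depends on that are no longer listed."""
    return [model for model in in_use if model not in available]
```

Run the comparison on a schedule (a daily cron is plenty) and alert when `missing_models` returns anything.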

How to monitor AI models for deprecation

The most reliable approach combines two strategies:

  1. Registry-based monitoring — compare the models your app uses against a maintained list of provider lifecycle data. This catches announced deprecations weeks or months before shutdown.
  2. Runtime checks — periodically make a lightweight API call to verify the model is still responding. This catches unexpected outages or silent retirements that weren't formally announced.
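The runtime-check strategy can be sketched as a minimal ping against a chat-completions-style endpoint. The URL, payload shape, and status-code mapping are assumptions that vary by provider; the point is to classify the outcome rather than just fail:

```python
import json
import urllib.error
import urllib.request

def classify(status_code: int) -> str:
    """Map an HTTP status to a coarse model health signal.

    Heuristic mapping: 404/410 usually mean the model ID is gone,
    other 4xx a request/auth problem, 5xx a provider outage.
    """
    if status_code in (404, 410):
        return "retired"
    if 200 <= status_code < 300:
        return "ok"
    if 400 <= status_code < 500:
        return "client_error"
    return "provider_error"

def ping_model(url: str, api_key: str, model: str) -> str:
    """Send a minimal one-token request and classify the outcome."""
    body = json.dumps({
        "model": model,
        "max_tokens": 1,
        "messages": [{"role": "user", "content": "ping"}],
    }).encode()
    req = urllib.request.Request(
        url,
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    try:
        with urllib.request.urlopen(req, timeout=15) as resp:
            return classify(resp.status)
    except urllib.error.HTTPError as err:
        return classify(err.code)
```

Keeping `max_tokens` at 1 keeps the cost of each check negligible, so running it hourly is reasonable.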

Battlecat combines both approaches and sends email alerts when a model's status changes.

What to do when your AI model is deprecated

  1. Identify the recommended replacement. Providers almost always specify a successor model. In most cases the API is compatible and migration is a single environment variable change.
  2. Test your prompts against the new model. Different model versions can produce different outputs for the same prompt. Test your critical flows before switching in production.
  3. Update and deploy before the shutdown date. Don't wait until the last day. Aim to migrate at least two weeks before the announced deadline.
  4. Set up monitoring for the new model. Once you've migrated, add the new model to your monitoring setup so you're not caught out again.
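The two-week buffer from step 3 is easy to encode so a script, not a calendar reminder, decides when migration is overdue. A minimal sketch; the buffer length and function names are illustrative:

```python
from datetime import date, timedelta

MIGRATION_BUFFER = timedelta(days=14)  # two weeks before shutdown

def migrate_by(shutdown: date) -> date:
    """Latest sensible migration date: two weeks before shutdown."""
    return shutdown - MIGRATION_BUFFER

def is_overdue(shutdown: date, today: date) -> bool:
    """True when we are already inside the two-week danger window."""
    return today > migrate_by(shutdown)
```

Wire `is_overdue` into the same alerting path as the deprecation checks so a scheduled shutdown escalates automatically as the deadline approaches.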

Monitor your models automatically

Battlecat watches your AI model dependencies and emails you when deprecations are announced.

Check your models for free →
See also: Model Graveyard — a complete archive of every retired AI model.